00:00:00.001 Started by upstream project "autotest-per-patch" build number 126136 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.044 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.056 Fetching changes from the remote Git repository 00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.084 Using shallow fetch with depth 1 00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.085 > git --version # timeout=10 00:00:00.107 > git --version # 'git version 2.39.2' 00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.128 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.128 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.445 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.459 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.473 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.473 > git config core.sparsecheckout # timeout=10 00:00:05.486 > git read-tree -mu HEAD # timeout=10 00:00:05.503 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.523 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.523 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.615 [Pipeline] Start of Pipeline 00:00:05.631 [Pipeline] library 00:00:05.633 Loading library shm_lib@master 00:00:05.633 Library shm_lib@master is cached. Copying from home. 00:00:05.649 [Pipeline] node 00:00:05.656 Running on VM-host-SM17 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:05.659 [Pipeline] { 00:00:05.669 [Pipeline] catchError 00:00:05.670 [Pipeline] { 00:00:05.686 [Pipeline] wrap 00:00:05.695 [Pipeline] { 00:00:05.701 [Pipeline] stage 00:00:05.702 [Pipeline] { (Prologue) 00:00:05.718 [Pipeline] echo 00:00:05.720 Node: VM-host-SM17 00:00:05.724 [Pipeline] cleanWs 00:00:05.732 [WS-CLEANUP] Deleting project workspace... 00:00:05.732 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.737 [WS-CLEANUP] done 00:00:05.912 [Pipeline] setCustomBuildProperty 00:00:06.017 [Pipeline] httpRequest 00:00:06.041 [Pipeline] echo 00:00:06.042 Sorcerer 10.211.164.101 is alive 00:00:06.051 [Pipeline] httpRequest 00:00:06.054 HttpMethod: GET 00:00:06.054 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.055 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.055 Response Code: HTTP/1.1 200 OK 00:00:06.056 Success: Status code 200 is in the accepted range: 200,404 00:00:06.056 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.965 [Pipeline] sh 00:00:07.248 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.259 [Pipeline] httpRequest 00:00:07.282 [Pipeline] echo 00:00:07.283 Sorcerer 10.211.164.101 is alive 00:00:07.289 [Pipeline] httpRequest 00:00:07.292 HttpMethod: GET 00:00:07.293 URL: http://10.211.164.101/packages/spdk_eea7da68892b4ccc44290a3fef93afef4f720335.tar.gz 00:00:07.293 Sending request to url: http://10.211.164.101/packages/spdk_eea7da68892b4ccc44290a3fef93afef4f720335.tar.gz 00:00:07.309 Response Code: HTTP/1.1 200 OK 00:00:07.309 Success: Status code 200 is in the accepted range: 200,404 00:00:07.310 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_eea7da68892b4ccc44290a3fef93afef4f720335.tar.gz 00:01:26.306 [Pipeline] sh 00:01:26.628 + tar --no-same-owner -xf spdk_eea7da68892b4ccc44290a3fef93afef4f720335.tar.gz 00:01:29.930 [Pipeline] sh 00:01:30.286 + git -C spdk log --oneline -n5 00:01:30.286 eea7da688 fio/bdev: use socket_id when allocating io buffers 00:01:30.286 7d88ad9b8 bdevperf: allocate data buffers based on bdev's socket id 00:01:30.286 9cfa1d5f6 bdev/nvme: populate socket_id 00:01:30.286 4a45fec0d bdev: add socket_id to spdk_bdev 00:01:30.286 e8fe15377 fio/nvme: use socket_id when allocating io buffers 00:01:30.308 [Pipeline] writeFile 00:01:30.325 [Pipeline] sh 00:01:30.606 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.618 [Pipeline] sh 00:01:30.898 + cat autorun-spdk.conf 00:01:30.898 SPDK_TEST_UNITTEST=1 00:01:30.898 SPDK_RUN_VALGRIND=0 00:01:30.898 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.898 SPDK_TEST_NVME=1 00:01:30.898 SPDK_TEST_BLOCKDEV=1 00:01:30.898 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.905 RUN_NIGHTLY=0 00:01:30.908 [Pipeline] } 00:01:30.922 [Pipeline] // stage 00:01:30.936 [Pipeline] stage 00:01:30.938 [Pipeline] { (Run VM) 00:01:30.952 [Pipeline] sh 00:01:31.230 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.231 + echo 'Start stage prepare_nvme.sh' 00:01:31.231 Start stage prepare_nvme.sh 00:01:31.231 + [[ -n 4 ]] 00:01:31.231 + disk_prefix=ex4 00:01:31.231 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:01:31.231 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:01:31.231 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:01:31.231 ++ SPDK_TEST_UNITTEST=1 00:01:31.231 ++ SPDK_RUN_VALGRIND=0 00:01:31.231 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.231 ++ SPDK_TEST_NVME=1 00:01:31.231 ++ SPDK_TEST_BLOCKDEV=1 00:01:31.231 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.231 ++ RUN_NIGHTLY=0 00:01:31.231 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:31.231 + nvme_files=() 00:01:31.231 + declare -A nvme_files 00:01:31.231 + backend_dir=/var/lib/libvirt/images/backends 
00:01:31.231 + nvme_files['nvme.img']=5G 00:01:31.231 + nvme_files['nvme-cmb.img']=5G 00:01:31.231 + nvme_files['nvme-multi0.img']=4G 00:01:31.231 + nvme_files['nvme-multi1.img']=4G 00:01:31.231 + nvme_files['nvme-multi2.img']=4G 00:01:31.231 + nvme_files['nvme-openstack.img']=8G 00:01:31.231 + nvme_files['nvme-zns.img']=5G 00:01:31.231 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.231 + (( SPDK_TEST_FTL == 1 )) 00:01:31.231 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.231 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.231 + for nvme in "${!nvme_files[@]}" 00:01:31.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:31.231 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.231 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:31.231 + echo 'End stage prepare_nvme.sh' 00:01:31.231 End stage prepare_nvme.sh 00:01:31.242 [Pipeline] sh 00:01:31.521 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.521 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f freebsd14 00:01:31.521 00:01:31.521 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:01:31.521 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:01:31.521 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:01:31.521 HELP=0 00:01:31.521 DRY_RUN=0 00:01:31.521 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img, 00:01:31.522 NVME_DISKS_TYPE=nvme, 00:01:31.522 NVME_AUTO_CREATE=0 00:01:31.522 NVME_DISKS_NAMESPACES=, 
00:01:31.522 NVME_CMB=, 00:01:31.522 NVME_PMR=, 00:01:31.522 NVME_ZNS=, 00:01:31.522 NVME_MS=, 00:01:31.522 NVME_FDP=, 00:01:31.522 SPDK_VAGRANT_DISTRO=freebsd14 00:01:31.522 SPDK_VAGRANT_VMCPU=10 00:01:31.522 SPDK_VAGRANT_VMRAM=14336 00:01:31.522 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.522 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:31.522 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.522 SPDK_OPENSTACK_NETWORK=0 00:01:31.522 VAGRANT_PACKAGE_BOX=0 00:01:31.522 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:31.522 FORCE_DISTRO=true 00:01:31.522 VAGRANT_BOX_VERSION= 00:01:31.522 EXTRA_VAGRANTFILES= 00:01:31.522 NIC_MODEL=e1000 00:01:31.522 00:01:31.522 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt' 00:01:31.522 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:34.808 Bringing machine 'default' up with 'libvirt' provider... 00:01:35.375 ==> default: Creating image (snapshot of base box volume). 00:01:35.632 ==> default: Creating domain with the following settings... 00:01:35.632 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1720795801_c0800b01e3efae469195 00:01:35.632 ==> default: -- Domain type: kvm 00:01:35.632 ==> default: -- Cpus: 10 00:01:35.632 ==> default: -- Feature: acpi 00:01:35.632 ==> default: -- Feature: apic 00:01:35.632 ==> default: -- Feature: pae 00:01:35.632 ==> default: -- Memory: 14336M 00:01:35.632 ==> default: -- Memory Backing: hugepages: 00:01:35.632 ==> default: -- Management MAC: 00:01:35.632 ==> default: -- Loader: 00:01:35.632 ==> default: -- Nvram: 00:01:35.632 ==> default: -- Base box: spdk/freebsd14 00:01:35.632 ==> default: -- Storage pool: default 00:01:35.632 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1720795801_c0800b01e3efae469195.img (32G) 00:01:35.632 ==> default: -- Volume Cache: default 00:01:35.632 ==> default: -- Kernel: 00:01:35.632 ==> default: -- Initrd: 00:01:35.632 ==> default: -- Graphics Type: vnc 00:01:35.632 ==> default: -- Graphics Port: -1 00:01:35.632 ==> default: -- Graphics IP: 127.0.0.1 00:01:35.632 ==> default: -- Graphics Password: Not defined 00:01:35.632 ==> default: -- Video Type: cirrus 00:01:35.632 ==> default: -- Video VRAM: 9216 00:01:35.632 ==> default: -- Sound Type: 00:01:35.632 ==> default: -- Keymap: en-us 00:01:35.632 ==> default: -- TPM Path: 00:01:35.632 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:35.632 ==> default: -- Command line args: 00:01:35.632 ==> default: -> value=-device, 00:01:35.632 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:35.632 ==> default: -> value=-drive, 00:01:35.632 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:35.632 ==> default: -> value=-device, 00:01:35.632 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.889 ==> default: Creating shared folders metadata... 00:01:35.889 ==> default: Starting domain. 00:01:37.262 ==> default: Waiting for domain to get an IP address... 00:01:59.249 ==> default: Waiting for SSH to become available... 00:02:11.474 ==> default: Configuring and enabling network interfaces... 
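(Illustrative aside, not part of the console output: the "-device"/"-drive" value pairs logged above are the libvirt-generated QEMU arguments that attach the raw backing file as an emulated NVMe namespace. A minimal hand-written equivalent is sketched below; the guest disk path and machine/accel options are assumptions, and only the NVMe controller/namespace/backing-file triplet is copied from the log.)

# Sketch of assembling the logged fragments into one standalone QEMU command.
# Assumption: a generic FreeBSD guest image at /var/lib/libvirt/images/guest-freebsd14.img.
qemu-system-x86_64 \
  -machine q35,accel=kvm -smp 10 -m 14336 \
  -drive file=/var/lib/libvirt/images/guest-freebsd14.img,if=virtio \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096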
00:02:14.001 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:28.866 ==> default: Mounting SSHFS shared folder... 00:02:29.124 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:02:29.124 ==> default: Checking Mount.. 00:02:30.500 ==> default: Folder Successfully Mounted! 00:02:30.500 ==> default: Running provisioner: file... 00:02:31.447 default: ~/.gitconfig => .gitconfig 00:02:32.393 00:02:32.393 SUCCESS! 00:02:32.393 00:02:32.393 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt and type "vagrant ssh" to use. 00:02:32.393 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:32.393 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt" to destroy all trace of vm. 00:02:32.393 00:02:32.402 [Pipeline] } 00:02:32.422 [Pipeline] // stage 00:02:32.433 [Pipeline] dir 00:02:32.433 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt 00:02:32.435 [Pipeline] { 00:02:32.453 [Pipeline] catchError 00:02:32.455 [Pipeline] { 00:02:32.471 [Pipeline] sh 00:02:32.750 + vagrant ssh-config --host vagrant 00:02:32.750 + sed -ne /^Host/,$p 00:02:32.750 + tee ssh_conf 00:02:36.978 Host vagrant 00:02:36.978 HostName 192.168.121.129 00:02:36.978 User vagrant 00:02:36.978 Port 22 00:02:36.978 UserKnownHostsFile /dev/null 00:02:36.978 StrictHostKeyChecking no 00:02:36.978 PasswordAuthentication no 00:02:36.978 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:02:36.978 IdentitiesOnly yes 00:02:36.978 LogLevel FATAL 00:02:36.978 ForwardAgent yes 00:02:36.978 ForwardX11 yes 00:02:36.978 00:02:36.994 [Pipeline] withEnv 00:02:36.997 [Pipeline] { 00:02:37.014 [Pipeline] sh 00:02:37.308 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:37.308 source /etc/os-release 00:02:37.308 [[ -e /image.version ]] && img=$(< /image.version) 00:02:37.308 # Minimal, systemd-like check. 00:02:37.308 if [[ -e /.dockerenv ]]; then 00:02:37.308 # Clear garbage from the node's name: 00:02:37.308 # agt-er_autotest_547-896 -> autotest_547-896 00:02:37.308 # $HOSTNAME is the actual container id 00:02:37.308 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:37.308 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:37.308 # We can assume this is a mount from a host where container is running, 00:02:37.308 # so fetch its hostname to easily identify the target swarm worker. 
00:02:37.309 container="$(< /etc/hostname) ($agent)" 00:02:37.309 else 00:02:37.309 # Fallback 00:02:37.309 container=$agent 00:02:37.309 fi 00:02:37.309 fi 00:02:37.309 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:37.309 00:02:37.319 [Pipeline] } 00:02:37.341 [Pipeline] // withEnv 00:02:37.351 [Pipeline] setCustomBuildProperty 00:02:37.368 [Pipeline] stage 00:02:37.370 [Pipeline] { (Tests) 00:02:37.394 [Pipeline] sh 00:02:37.671 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:37.941 [Pipeline] sh 00:02:38.218 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:38.233 [Pipeline] timeout 00:02:38.234 Timeout set to expire in 1 hr 30 min 00:02:38.235 [Pipeline] { 00:02:38.250 [Pipeline] sh 00:02:38.527 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:39.092 HEAD is now at eea7da688 fio/bdev: use socket_id when allocating io buffers 00:02:39.108 [Pipeline] sh 00:02:39.385 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:39.419 [Pipeline] sh 00:02:39.745 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:40.019 [Pipeline] sh 00:02:40.297 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:40.297 ++ readlink -f spdk_repo 00:02:40.297 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:40.297 + [[ -n /home/vagrant/spdk_repo ]] 00:02:40.297 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:40.297 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:40.297 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:40.297 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:40.297 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:40.297 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:40.297 + cd /home/vagrant/spdk_repo 00:02:40.297 + source /etc/os-release 00:02:40.297 ++ NAME=FreeBSD 00:02:40.297 ++ VERSION=14.0-RELEASE 00:02:40.297 ++ VERSION_ID=14.0 00:02:40.297 ++ ID=freebsd 00:02:40.297 ++ ANSI_COLOR='0;31' 00:02:40.297 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:02:40.297 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:02:40.297 ++ HOME_URL=https://FreeBSD.org/ 00:02:40.297 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:40.297 + uname -a 00:02:40.297 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:02:40.297 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:40.297 Contigmem (not present) 00:02:40.297 Buffer Size: not set 00:02:40.297 Num Buffers: not set 00:02:40.297 00:02:40.297 00:02:40.297 Type BDF Vendor Device Driver 00:02:40.297 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:02:40.555 + rm -f /tmp/spdk-ld-path 00:02:40.555 + source autorun-spdk.conf 00:02:40.555 ++ SPDK_TEST_UNITTEST=1 00:02:40.555 ++ SPDK_RUN_VALGRIND=0 00:02:40.555 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.555 ++ SPDK_TEST_NVME=1 00:02:40.555 ++ SPDK_TEST_BLOCKDEV=1 00:02:40.555 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.555 ++ RUN_NIGHTLY=0 00:02:40.555 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:40.555 + [[ -n '' ]] 00:02:40.555 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:40.555 + for M in /var/spdk/build-*-manifest.txt 00:02:40.555 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:40.555 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.555 + for M in /var/spdk/build-*-manifest.txt 00:02:40.555 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.555 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.555 ++ uname 00:02:40.555 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:40.555 + dmesg_pid=1231 00:02:40.555 + [[ FreeBSD == FreeBSD ]] 00:02:40.555 + export LC_ALL=C LC_CTYPE=C 00:02:40.555 + LC_ALL=C 00:02:40.555 + LC_CTYPE=C 00:02:40.555 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.555 + tail -F /var/log/messages 00:02:40.555 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.555 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:40.555 + [[ -x /usr/src/fio-static/fio ]] 00:02:40.555 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.555 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:40.555 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.555 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:40.555 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:40.555 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:40.555 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.555 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.555 Test configuration: 00:02:40.555 SPDK_TEST_UNITTEST=1 00:02:40.555 SPDK_RUN_VALGRIND=0 00:02:40.555 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.555 SPDK_TEST_NVME=1 00:02:40.555 SPDK_TEST_BLOCKDEV=1 00:02:40.555 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.555 RUN_NIGHTLY=0 14:51:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:40.555 14:51:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:40.555 14:51:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.555 14:51:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.555 14:51:06 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:40.556 14:51:06 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:40.556 14:51:06 -- paths/export.sh@4 -- $ export PATH 00:02:40.556 14:51:06 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:40.556 14:51:06 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:40.556 14:51:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:40.556 14:51:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720795866.XXXXXX 00:02:40.556 14:51:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720795866.XXXXXX.z0GAH2jcik 00:02:40.556 14:51:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:40.556 14:51:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:40.556 14:51:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:40.556 14:51:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:40.556 14:51:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:40.556 14:51:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:40.556 14:51:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:40.556 14:51:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.814 14:51:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:40.814 14:51:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:40.814 14:51:06 -- pm/common@17 -- $ local monitor 00:02:40.814 14:51:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.814 14:51:06 -- pm/common@25 -- $ sleep 1 00:02:40.814 14:51:06 -- 
pm/common@21 -- $ date +%s 00:02:40.814 14:51:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720795866 00:02:40.814 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720795866_collect-vmstat.pm.log 00:02:41.748 14:51:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:41.748 14:51:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:41.748 14:51:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:41.748 14:51:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:41.748 14:51:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:41.748 Fri Jul 12 14:51:07 UTC 2024 00:02:41.748 14:51:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:41.748 v24.09-pre-233-geea7da688 00:02:41.748 14:51:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:41.748 14:51:07 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:41.748 14:51:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:41.748 14:51:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.748 14:51:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.748 14:51:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.748 14:51:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.748 14:51:07 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:41.748 14:51:07 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:41.748 14:51:07 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:02:41.748 14:51:07 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:41.748 14:51:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:41.748 14:51:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.748 ************************************ 00:02:41.748 START TEST unittest_build 00:02:41.748 ************************************ 00:02:41.748 14:51:07 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:02:41.749 14:51:07 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:42.418 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:42.418 are only supported on Linux. Turning off default feature. 00:02:42.418 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:42.418 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.981 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:43.238 Using 'verbs' RDMA provider 00:02:53.456 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:03.455 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:03.455 Creating mk/config.mk...done. 00:03:03.455 Creating mk/cc.flags.mk...done. 00:03:03.455 Type 'gmake' to build. 00:03:03.455 14:51:28 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10 00:03:03.455 gmake[1]: Nothing to be done for 'all'. 
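(Illustrative aside, not part of the console output: the autobuild step above reduces to configuring SPDK and building with GNU make. A minimal manual reproduction on the FreeBSD guest, assuming the tree is already cloned at /home/vagrant/spdk_repo/spdk and fio sources sit at /usr/src/fio as in this job, would be:)

# Configure flags copied from the configure invocation recorded above;
# FreeBSD invokes GNU make as 'gmake', matching the "Type 'gmake' to build" hint.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --without-shared
gmake -j10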
00:03:05.998 ps: stdin: not a terminal 00:03:10.185 The Meson build system 00:03:10.185 Version: 1.4.0 00:03:10.185 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:10.185 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:10.185 Build type: native build 00:03:10.185 Program cat found: YES (/bin/cat) 00:03:10.185 Project name: DPDK 00:03:10.185 Project version: 24.03.0 00:03:10.185 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:03:10.185 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:03:10.185 Host machine cpu family: x86_64 00:03:10.185 Host machine cpu: x86_64 00:03:10.185 Message: ## Building in Developer Mode ## 00:03:10.185 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:03:10.185 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:10.185 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:10.185 Program python3 found: YES (/usr/local/bin/python3.9) 00:03:10.185 Program cat found: YES (/bin/cat) 00:03:10.185 Compiler for C supports arguments -march=native: YES 00:03:10.185 Checking for size of "void *" : 8 00:03:10.185 Checking for size of "void *" : 8 (cached) 00:03:10.185 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:10.185 Library m found: YES 00:03:10.185 Library numa found: NO 00:03:10.185 Library fdt found: NO 00:03:10.185 Library execinfo found: YES 00:03:10.185 Has header "execinfo.h" : YES 00:03:10.185 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:03:10.185 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:10.185 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:10.185 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:10.185 Run-time dependency openssl found: YES 3.0.13 00:03:10.185 Run-time dependency libpcap found: NO (tried pkgconfig) 00:03:10.185 Library pcap found: YES 00:03:10.185 Has header "pcap.h" with dependency -lpcap: YES 00:03:10.185 Compiler for C supports arguments -Wcast-qual: YES 00:03:10.185 Compiler for C supports arguments -Wdeprecated: YES 00:03:10.185 Compiler for C supports arguments -Wformat: YES 00:03:10.185 Compiler for C supports arguments -Wformat-nonliteral: YES 00:03:10.185 Compiler for C supports arguments -Wformat-security: YES 00:03:10.185 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.185 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:10.185 Compiler for C supports arguments -Wnested-externs: YES 00:03:10.185 Compiler for C supports arguments -Wold-style-definition: YES 00:03:10.185 Compiler for C supports arguments -Wpointer-arith: YES 00:03:10.185 Compiler for C supports arguments -Wsign-compare: YES 00:03:10.185 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:10.185 Compiler for C supports arguments -Wundef: YES 00:03:10.185 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.185 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:10.185 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:03:10.185 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.185 Compiler for C supports arguments -mavx512f: YES 00:03:10.185 Checking if "AVX512 checking" compiles: YES 00:03:10.185 Fetching value of define "__SSE4_2__" : 1 00:03:10.185 Fetching value of 
define "__AES__" : 1 00:03:10.185 Fetching value of define "__AVX__" : 1 00:03:10.185 Fetching value of define "__AVX2__" : 1 00:03:10.185 Fetching value of define "__AVX512BW__" : (undefined) 00:03:10.185 Fetching value of define "__AVX512CD__" : (undefined) 00:03:10.185 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:10.185 Fetching value of define "__AVX512F__" : (undefined) 00:03:10.185 Fetching value of define "__AVX512VL__" : (undefined) 00:03:10.185 Fetching value of define "__PCLMUL__" : 1 00:03:10.185 Fetching value of define "__RDRND__" : 1 00:03:10.185 Fetching value of define "__RDSEED__" : 1 00:03:10.185 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:10.185 Fetching value of define "__znver1__" : (undefined) 00:03:10.185 Fetching value of define "__znver2__" : (undefined) 00:03:10.185 Fetching value of define "__znver3__" : (undefined) 00:03:10.185 Fetching value of define "__znver4__" : (undefined) 00:03:10.185 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:10.185 Message: lib/log: Defining dependency "log" 00:03:10.185 Message: lib/kvargs: Defining dependency "kvargs" 00:03:10.185 Message: lib/telemetry: Defining dependency "telemetry" 00:03:10.185 Checking if "Detect argument count for CPU_OR" compiles: YES 00:03:10.185 Checking for function "getentropy" : YES 00:03:10.185 Message: lib/eal: Defining dependency "eal" 00:03:10.185 Message: lib/ring: Defining dependency "ring" 00:03:10.185 Message: lib/rcu: Defining dependency "rcu" 00:03:10.185 Message: lib/mempool: Defining dependency "mempool" 00:03:10.185 Message: lib/mbuf: Defining dependency "mbuf" 00:03:10.185 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:10.185 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:10.185 Compiler for C supports arguments -mpclmul: YES 00:03:10.185 Compiler for C supports arguments -maes: YES 00:03:10.185 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:10.185 Compiler for C supports arguments -mavx512bw: YES 00:03:10.185 Compiler for C supports arguments -mavx512dq: YES 00:03:10.185 Compiler for C supports arguments -mavx512vl: YES 00:03:10.185 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:10.185 Compiler for C supports arguments -mavx2: YES 00:03:10.185 Compiler for C supports arguments -mavx: YES 00:03:10.185 Message: lib/net: Defining dependency "net" 00:03:10.185 Message: lib/meter: Defining dependency "meter" 00:03:10.185 Message: lib/ethdev: Defining dependency "ethdev" 00:03:10.185 Message: lib/pci: Defining dependency "pci" 00:03:10.185 Message: lib/cmdline: Defining dependency "cmdline" 00:03:10.185 Message: lib/hash: Defining dependency "hash" 00:03:10.185 Message: lib/timer: Defining dependency "timer" 00:03:10.185 Message: lib/compressdev: Defining dependency "compressdev" 00:03:10.185 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:10.185 Message: lib/dmadev: Defining dependency "dmadev" 00:03:10.185 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:10.185 Message: lib/reorder: Defining dependency "reorder" 00:03:10.185 Message: lib/security: Defining dependency "security" 00:03:10.185 Has header "linux/userfaultfd.h" : NO 00:03:10.185 Has header "linux/vduse.h" : NO 00:03:10.185 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:10.185 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:10.185 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:10.185 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:03:10.185 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:10.185 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:10.185 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:10.185 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:03:10.185 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:10.185 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:10.185 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:10.185 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:10.185 Configuring doxy-api-html.conf using configuration 00:03:10.185 Configuring doxy-api-man.conf using configuration 00:03:10.185 Program mandb found: NO 00:03:10.185 Program sphinx-build found: NO 00:03:10.185 Configuring rte_build_config.h using configuration 00:03:10.185 Message: 00:03:10.185 ================= 00:03:10.185 Applications Enabled 00:03:10.185 ================= 00:03:10.185 00:03:10.185 apps: 00:03:10.185 00:03:10.185 00:03:10.185 Message: 00:03:10.185 ================= 00:03:10.185 Libraries Enabled 00:03:10.185 ================= 00:03:10.185 00:03:10.185 libs: 00:03:10.185 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:10.185 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:10.185 cryptodev, dmadev, reorder, security, 00:03:10.185 00:03:10.185 Message: 00:03:10.185 =============== 00:03:10.185 Drivers Enabled 00:03:10.185 =============== 00:03:10.185 00:03:10.185 common: 00:03:10.185 00:03:10.185 bus: 00:03:10.185 pci, vdev, 00:03:10.185 mempool: 00:03:10.185 ring, 00:03:10.185 dma: 00:03:10.185 00:03:10.185 net: 00:03:10.185 00:03:10.185 crypto: 00:03:10.185 00:03:10.185 compress: 00:03:10.185 00:03:10.185 00:03:10.185 Message: 00:03:10.185 ================= 00:03:10.185 Content Skipped 00:03:10.185 ================= 00:03:10.186 00:03:10.186 apps: 00:03:10.186 dumpcap: explicitly disabled via build config 00:03:10.186 graph: explicitly disabled via build config 00:03:10.186 pdump: explicitly disabled via build config 00:03:10.186 proc-info: explicitly disabled via build config 00:03:10.186 test-acl: explicitly disabled via build config 00:03:10.186 test-bbdev: explicitly disabled via build config 00:03:10.186 test-cmdline: explicitly disabled via build config 00:03:10.186 test-compress-perf: explicitly disabled via build config 00:03:10.186 test-crypto-perf: explicitly disabled via build config 00:03:10.186 test-dma-perf: explicitly disabled via build config 00:03:10.186 test-eventdev: explicitly disabled via build config 00:03:10.186 test-fib: explicitly disabled via build config 00:03:10.186 test-flow-perf: explicitly disabled via build config 00:03:10.186 test-gpudev: explicitly disabled via build config 00:03:10.186 test-mldev: explicitly disabled via build config 00:03:10.186 test-pipeline: explicitly disabled via build config 00:03:10.186 test-pmd: explicitly disabled via build config 00:03:10.186 test-regex: explicitly disabled via build config 00:03:10.186 test-sad: explicitly disabled via build config 00:03:10.186 test-security-perf: explicitly disabled via build config 00:03:10.186 00:03:10.186 libs: 00:03:10.186 argparse: explicitly disabled via build config 00:03:10.186 metrics: explicitly disabled via build config 00:03:10.186 acl: explicitly disabled via build config 00:03:10.186 bbdev: explicitly disabled via build config 00:03:10.186 bitratestats: 
explicitly disabled via build config 00:03:10.186 bpf: explicitly disabled via build config 00:03:10.186 cfgfile: explicitly disabled via build config 00:03:10.186 distributor: explicitly disabled via build config 00:03:10.186 efd: explicitly disabled via build config 00:03:10.186 eventdev: explicitly disabled via build config 00:03:10.186 dispatcher: explicitly disabled via build config 00:03:10.186 gpudev: explicitly disabled via build config 00:03:10.186 gro: explicitly disabled via build config 00:03:10.186 gso: explicitly disabled via build config 00:03:10.186 ip_frag: explicitly disabled via build config 00:03:10.186 jobstats: explicitly disabled via build config 00:03:10.186 latencystats: explicitly disabled via build config 00:03:10.186 lpm: explicitly disabled via build config 00:03:10.186 member: explicitly disabled via build config 00:03:10.186 pcapng: explicitly disabled via build config 00:03:10.186 power: only supported on Linux 00:03:10.186 rawdev: explicitly disabled via build config 00:03:10.186 regexdev: explicitly disabled via build config 00:03:10.186 mldev: explicitly disabled via build config 00:03:10.186 rib: explicitly disabled via build config 00:03:10.186 sched: explicitly disabled via build config 00:03:10.186 stack: explicitly disabled via build config 00:03:10.186 vhost: only supported on Linux 00:03:10.186 ipsec: explicitly disabled via build config 00:03:10.186 pdcp: explicitly disabled via build config 00:03:10.186 fib: explicitly disabled via build config 00:03:10.186 port: explicitly disabled via build config 00:03:10.186 pdump: explicitly disabled via build config 00:03:10.186 table: explicitly disabled via build config 00:03:10.186 pipeline: explicitly disabled via build config 00:03:10.186 graph: explicitly disabled via build config 00:03:10.186 node: explicitly disabled via build config 00:03:10.186 00:03:10.186 drivers: 00:03:10.186 common/cpt: not in enabled drivers build config 00:03:10.186 common/dpaax: not in enabled drivers build config 00:03:10.186 common/iavf: not in enabled drivers build config 00:03:10.186 common/idpf: not in enabled drivers build config 00:03:10.186 common/ionic: not in enabled drivers build config 00:03:10.186 common/mvep: not in enabled drivers build config 00:03:10.186 common/octeontx: not in enabled drivers build config 00:03:10.186 bus/auxiliary: not in enabled drivers build config 00:03:10.186 bus/cdx: not in enabled drivers build config 00:03:10.186 bus/dpaa: not in enabled drivers build config 00:03:10.186 bus/fslmc: not in enabled drivers build config 00:03:10.186 bus/ifpga: not in enabled drivers build config 00:03:10.186 bus/platform: not in enabled drivers build config 00:03:10.186 bus/uacce: not in enabled drivers build config 00:03:10.186 bus/vmbus: not in enabled drivers build config 00:03:10.186 common/cnxk: not in enabled drivers build config 00:03:10.186 common/mlx5: not in enabled drivers build config 00:03:10.186 common/nfp: not in enabled drivers build config 00:03:10.186 common/nitrox: not in enabled drivers build config 00:03:10.186 common/qat: not in enabled drivers build config 00:03:10.186 common/sfc_efx: not in enabled drivers build config 00:03:10.186 mempool/bucket: not in enabled drivers build config 00:03:10.186 mempool/cnxk: not in enabled drivers build config 00:03:10.186 mempool/dpaa: not in enabled drivers build config 00:03:10.186 mempool/dpaa2: not in enabled drivers build config 00:03:10.186 mempool/octeontx: not in enabled drivers build config 00:03:10.186 mempool/stack: not in enabled 
drivers build config 00:03:10.186 dma/cnxk: not in enabled drivers build config 00:03:10.186 dma/dpaa: not in enabled drivers build config 00:03:10.186 dma/dpaa2: not in enabled drivers build config 00:03:10.186 dma/hisilicon: not in enabled drivers build config 00:03:10.186 dma/idxd: not in enabled drivers build config 00:03:10.186 dma/ioat: not in enabled drivers build config 00:03:10.186 dma/skeleton: not in enabled drivers build config 00:03:10.186 net/af_packet: not in enabled drivers build config 00:03:10.186 net/af_xdp: not in enabled drivers build config 00:03:10.186 net/ark: not in enabled drivers build config 00:03:10.186 net/atlantic: not in enabled drivers build config 00:03:10.186 net/avp: not in enabled drivers build config 00:03:10.186 net/axgbe: not in enabled drivers build config 00:03:10.186 net/bnx2x: not in enabled drivers build config 00:03:10.186 net/bnxt: not in enabled drivers build config 00:03:10.186 net/bonding: not in enabled drivers build config 00:03:10.186 net/cnxk: not in enabled drivers build config 00:03:10.186 net/cpfl: not in enabled drivers build config 00:03:10.186 net/cxgbe: not in enabled drivers build config 00:03:10.186 net/dpaa: not in enabled drivers build config 00:03:10.186 net/dpaa2: not in enabled drivers build config 00:03:10.186 net/e1000: not in enabled drivers build config 00:03:10.186 net/ena: not in enabled drivers build config 00:03:10.186 net/enetc: not in enabled drivers build config 00:03:10.186 net/enetfec: not in enabled drivers build config 00:03:10.186 net/enic: not in enabled drivers build config 00:03:10.186 net/failsafe: not in enabled drivers build config 00:03:10.186 net/fm10k: not in enabled drivers build config 00:03:10.186 net/gve: not in enabled drivers build config 00:03:10.186 net/hinic: not in enabled drivers build config 00:03:10.186 net/hns3: not in enabled drivers build config 00:03:10.186 net/i40e: not in enabled drivers build config 00:03:10.186 net/iavf: not in enabled drivers build config 00:03:10.186 net/ice: not in enabled drivers build config 00:03:10.186 net/idpf: not in enabled drivers build config 00:03:10.186 net/igc: not in enabled drivers build config 00:03:10.186 net/ionic: not in enabled drivers build config 00:03:10.186 net/ipn3ke: not in enabled drivers build config 00:03:10.186 net/ixgbe: not in enabled drivers build config 00:03:10.186 net/mana: not in enabled drivers build config 00:03:10.186 net/memif: not in enabled drivers build config 00:03:10.186 net/mlx4: not in enabled drivers build config 00:03:10.186 net/mlx5: not in enabled drivers build config 00:03:10.186 net/mvneta: not in enabled drivers build config 00:03:10.186 net/mvpp2: not in enabled drivers build config 00:03:10.186 net/netvsc: not in enabled drivers build config 00:03:10.186 net/nfb: not in enabled drivers build config 00:03:10.186 net/nfp: not in enabled drivers build config 00:03:10.186 net/ngbe: not in enabled drivers build config 00:03:10.186 net/null: not in enabled drivers build config 00:03:10.186 net/octeontx: not in enabled drivers build config 00:03:10.186 net/octeon_ep: not in enabled drivers build config 00:03:10.186 net/pcap: not in enabled drivers build config 00:03:10.186 net/pfe: not in enabled drivers build config 00:03:10.186 net/qede: not in enabled drivers build config 00:03:10.186 net/ring: not in enabled drivers build config 00:03:10.186 net/sfc: not in enabled drivers build config 00:03:10.186 net/softnic: not in enabled drivers build config 00:03:10.186 net/tap: not in enabled drivers build config 
00:03:10.186 net/thunderx: not in enabled drivers build config 00:03:10.186 net/txgbe: not in enabled drivers build config 00:03:10.186 net/vdev_netvsc: not in enabled drivers build config 00:03:10.186 net/vhost: not in enabled drivers build config 00:03:10.186 net/virtio: not in enabled drivers build config 00:03:10.186 net/vmxnet3: not in enabled drivers build config 00:03:10.186 raw/*: missing internal dependency, "rawdev" 00:03:10.186 crypto/armv8: not in enabled drivers build config 00:03:10.186 crypto/bcmfs: not in enabled drivers build config 00:03:10.186 crypto/caam_jr: not in enabled drivers build config 00:03:10.186 crypto/ccp: not in enabled drivers build config 00:03:10.186 crypto/cnxk: not in enabled drivers build config 00:03:10.186 crypto/dpaa_sec: not in enabled drivers build config 00:03:10.186 crypto/dpaa2_sec: not in enabled drivers build config 00:03:10.186 crypto/ipsec_mb: not in enabled drivers build config 00:03:10.186 crypto/mlx5: not in enabled drivers build config 00:03:10.186 crypto/mvsam: not in enabled drivers build config 00:03:10.186 crypto/nitrox: not in enabled drivers build config 00:03:10.186 crypto/null: not in enabled drivers build config 00:03:10.186 crypto/octeontx: not in enabled drivers build config 00:03:10.186 crypto/openssl: not in enabled drivers build config 00:03:10.186 crypto/scheduler: not in enabled drivers build config 00:03:10.186 crypto/uadk: not in enabled drivers build config 00:03:10.186 crypto/virtio: not in enabled drivers build config 00:03:10.186 compress/isal: not in enabled drivers build config 00:03:10.186 compress/mlx5: not in enabled drivers build config 00:03:10.186 compress/nitrox: not in enabled drivers build config 00:03:10.186 compress/octeontx: not in enabled drivers build config 00:03:10.186 compress/zlib: not in enabled drivers build config 00:03:10.186 regex/*: missing internal dependency, "regexdev" 00:03:10.186 ml/*: missing internal dependency, "mldev" 00:03:10.186 vdpa/*: missing internal dependency, "vhost" 00:03:10.186 event/*: missing internal dependency, "eventdev" 00:03:10.186 baseband/*: missing internal dependency, "bbdev" 00:03:10.186 gpu/*: missing internal dependency, "gpudev" 00:03:10.186 00:03:10.186 00:03:10.754 Build targets in project: 81 00:03:10.754 00:03:10.754 DPDK 24.03.0 00:03:10.754 00:03:10.754 User defined options 00:03:10.754 buildtype : debug 00:03:10.754 default_library : static 00:03:10.754 libdir : lib 00:03:10.754 prefix : / 00:03:10.754 c_args : -fPIC -Werror 00:03:10.754 c_link_args : 00:03:10.754 cpu_instruction_set: native 00:03:10.754 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:10.754 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:10.754 enable_docs : false 00:03:10.754 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:10.754 enable_kmods : true 00:03:10.754 max_lcores : 128 00:03:10.754 tests : false 00:03:10.754 00:03:10.754 Found ninja-1.11.1 at /usr/local/bin/ninja 00:03:11.320 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:11.320 [1/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.320 
[2/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.320 [3/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:03:11.320 [4/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:11.320 [5/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.579 [6/233] Linking static target lib/librte_log.a 00:03:11.579 [7/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.579 [8/233] Linking static target lib/librte_kvargs.a 00:03:11.579 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:11.579 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:11.579 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:11.837 [12/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:11.837 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:11.837 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:11.837 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:11.837 [16/233] Linking static target lib/librte_telemetry.a 00:03:11.837 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:12.096 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:12.096 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:12.096 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:12.096 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:12.096 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:12.096 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:12.096 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:12.096 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:12.354 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:12.354 [27/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.354 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:12.354 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:12.354 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:12.612 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:12.612 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:12.612 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:12.612 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:12.612 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:12.612 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:12.612 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:12.612 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:12.612 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:12.612 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:12.870 [41/233] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:12.870 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:12.870 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:13.128 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:13.128 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:13.128 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:13.128 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:13.128 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:13.128 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:13.128 [50/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:13.128 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:03:13.128 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:13.385 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:13.385 [54/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:13.385 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:13.385 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:13.643 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:13.643 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:13.643 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:03:13.643 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:03:13.643 [61/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:03:13.643 [62/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:03:13.643 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:13.643 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:13.643 [65/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:13.643 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:03:13.901 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:03:13.901 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:03:13.901 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:03:13.901 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:03:14.161 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:03:14.161 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:14.161 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:14.161 [74/233] Linking static target lib/librte_eal.a 00:03:14.161 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:14.161 [76/233] Linking static target lib/librte_ring.a 00:03:14.161 [77/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:14.161 [78/233] Linking static target lib/librte_rcu.a 00:03:14.419 [79/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:14.419 [80/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.419 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:14.419 [82/233] Generating lib/log.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:14.419 [83/233] Linking target lib/librte_log.so.24.1 00:03:14.419 [84/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:14.419 [85/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:14.419 [86/233] Linking static target lib/librte_mempool.a 00:03:14.675 [87/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:14.675 [88/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.675 [89/233] Linking target lib/librte_kvargs.so.24.1 00:03:14.675 [90/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:14.675 [91/233] Linking target lib/librte_telemetry.so.24.1 00:03:14.675 [92/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.675 [93/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:14.675 [94/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:14.675 [95/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:14.932 [96/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:14.932 [97/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:14.932 [98/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:14.932 [99/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:14.932 [100/233] Linking static target lib/librte_mbuf.a 00:03:14.932 [101/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:14.932 [102/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:15.190 [103/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:15.190 [104/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:15.190 [105/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:15.190 [106/233] Linking static target lib/librte_meter.a 00:03:15.190 [107/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:15.190 [108/233] Linking static target lib/librte_net.a 00:03:15.447 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:15.447 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:15.447 [111/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.447 [112/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.447 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:15.705 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:15.963 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:15.963 [116/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.963 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:15.963 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:15.963 [119/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:15.963 [120/233] Linking static target lib/librte_pci.a 00:03:16.222 [121/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.222 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:16.222 [123/233] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.222 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.222 [125/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:16.222 [126/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:16.222 [127/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:16.222 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:16.222 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:16.222 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:16.222 [131/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:16.222 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:16.222 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:16.222 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:16.222 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:16.481 [136/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:16.481 [137/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:16.481 [138/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:16.481 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:16.481 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:16.481 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:16.739 [142/233] Linking static target lib/librte_ethdev.a 00:03:16.739 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:16.739 [144/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.739 [145/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:16.997 [146/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:16.997 [147/233] Linking static target lib/librte_cmdline.a 00:03:16.997 [148/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:16.997 [149/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:16.997 [150/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:16.997 [151/233] Linking static target lib/librte_timer.a 00:03:16.997 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:16.997 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:16.997 [154/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.997 [155/233] Linking static target lib/librte_hash.a 00:03:17.254 [156/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:17.254 [157/233] Linking static target lib/librte_compressdev.a 00:03:17.254 [158/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:17.511 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.511 [160/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.511 [161/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.511 [162/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.511 [163/233] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.511 [164/233] Linking static target lib/librte_dmadev.a 00:03:17.769 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.769 [166/233] Linking static target lib/librte_reorder.a 00:03:17.769 [167/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.769 [168/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.769 [169/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.769 [170/233] Linking static target lib/librte_cryptodev.a 00:03:17.769 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.027 [172/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.027 [173/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.027 [174/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.027 [175/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.027 [176/233] Linking static target lib/librte_security.a 00:03:18.027 [177/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.027 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.027 [179/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:03:18.027 [180/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.285 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.285 [182/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.285 [183/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.285 [184/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.285 [185/233] Linking static target drivers/librte_bus_pci.a 00:03:18.285 [186/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.285 [187/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.546 [188/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:18.546 [189/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.546 [190/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:18.546 [191/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:18.546 [192/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.546 [193/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.546 [194/233] Linking static target drivers/librte_bus_vdev.a 00:03:18.546 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.811 [196/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:18.811 [197/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.811 [198/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.811 [199/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.811 
[200/233] Linking static target drivers/librte_mempool_ring.a 00:03:19.377 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:03:19.377 machine -> /usr/src/sys/amd64/include 00:03:19.377 x86 -> /usr/src/sys/x86/include 00:03:19.377 i386 -> /usr/src/sys/i386/include 00:03:19.377 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:03:19.377 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:03:19.377 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:03:19.377 touch opt_global.h 00:03:19.377 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:03:19.377 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:03:19.377 :> export_syms 00:03:19.377 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:03:19.377 objcopy --strip-debug contigmem.ko 00:03:19.636 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:03:19.636 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:03:19.636 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:03:19.636 :> export_syms 00:03:19.636 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:03:19.636 objcopy --strip-debug nic_uio.ko 00:03:22.157 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.691 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.691 [205/233] Linking target lib/librte_eal.so.24.1 00:03:24.691 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:24.691 [207/233] Linking target lib/librte_ring.so.24.1 00:03:24.692 [208/233] Linking target lib/librte_meter.so.24.1 00:03:24.692 [209/233] Linking target lib/librte_pci.so.24.1 00:03:24.692 [210/233] Linking target drivers/librte_bus_vdev.so.24.1 00:03:24.692 [211/233] Linking target lib/librte_timer.so.24.1 00:03:24.692 [212/233] Linking target lib/librte_dmadev.so.24.1 00:03:24.692 [213/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:24.692 [214/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:24.692 [215/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:24.692 [216/233] Linking target lib/librte_rcu.so.24.1 00:03:24.692 [217/233] Linking target lib/librte_mempool.so.24.1 00:03:24.692 [218/233] Linking target drivers/librte_bus_pci.so.24.1 00:03:24.950 [219/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:24.950 [220/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:24.950 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:03:24.950 [222/233] Linking target lib/librte_mbuf.so.24.1 00:03:24.950 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:25.209 [224/233] Linking target lib/librte_net.so.24.1 00:03:25.209 [225/233] Linking target lib/librte_compressdev.so.24.1 00:03:25.209 [226/233] Linking target lib/librte_reorder.so.24.1 00:03:25.209 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:03:25.209 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:25.209 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:25.466 [230/233] Linking target lib/librte_cmdline.so.24.1 00:03:25.466 [231/233] 
Linking target lib/librte_security.so.24.1 00:03:25.466 [232/233] Linking target lib/librte_hash.so.24.1 00:03:25.466 [233/233] Linking target lib/librte_ethdev.so.24.1 00:03:25.466 INFO: autodetecting backend as ninja 00:03:25.466 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:26.396 CC lib/log/log.o 00:03:26.396 CC lib/log/log_flags.o 00:03:26.396 CC lib/log/log_deprecated.o 00:03:26.396 CC lib/ut_mock/mock.o 00:03:26.396 CC lib/ut/ut.o 00:03:26.396 LIB libspdk_ut_mock.a 00:03:26.396 LIB libspdk_log.a 00:03:26.396 LIB libspdk_ut.a 00:03:26.396 CXX lib/trace_parser/trace.o 00:03:26.396 CC lib/dma/dma.o 00:03:26.396 CC lib/ioat/ioat.o 00:03:26.396 CC lib/util/base64.o 00:03:26.396 CC lib/util/bit_array.o 00:03:26.396 CC lib/util/cpuset.o 00:03:26.396 CC lib/util/crc16.o 00:03:26.396 CC lib/util/crc32.o 00:03:26.396 CC lib/util/crc32c.o 00:03:26.396 CC lib/util/crc32_ieee.o 00:03:26.396 CC lib/util/crc64.o 00:03:26.652 CC lib/util/dif.o 00:03:26.652 CC lib/util/fd.o 00:03:26.652 CC lib/util/fd_group.o 00:03:26.652 CC lib/util/file.o 00:03:26.652 LIB libspdk_dma.a 00:03:26.652 CC lib/util/hexlify.o 00:03:26.652 CC lib/util/iov.o 00:03:26.652 CC lib/util/math.o 00:03:26.652 LIB libspdk_ioat.a 00:03:26.652 CC lib/util/net.o 00:03:26.652 CC lib/util/pipe.o 00:03:26.652 CC lib/util/strerror_tls.o 00:03:26.652 CC lib/util/string.o 00:03:26.652 CC lib/util/uuid.o 00:03:26.652 CC lib/util/xor.o 00:03:26.652 CC lib/util/zipf.o 00:03:26.652 LIB libspdk_util.a 00:03:26.910 CC lib/env_dpdk/env.o 00:03:26.910 CC lib/env_dpdk/memory.o 00:03:26.910 CC lib/env_dpdk/pci.o 00:03:26.910 CC lib/rdma_utils/rdma_utils.o 00:03:26.910 CC lib/conf/conf.o 00:03:26.910 CC lib/idxd/idxd.o 00:03:26.910 CC lib/rdma_provider/common.o 00:03:26.910 CC lib/vmd/vmd.o 00:03:26.910 CC lib/json/json_parse.o 00:03:26.910 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:26.910 LIB libspdk_conf.a 00:03:26.910 CC lib/json/json_util.o 00:03:26.910 CC lib/idxd/idxd_user.o 00:03:26.910 LIB libspdk_rdma_utils.a 00:03:26.910 CC lib/json/json_write.o 00:03:26.910 CC lib/vmd/led.o 00:03:26.910 CC lib/env_dpdk/init.o 00:03:26.910 LIB libspdk_rdma_provider.a 00:03:27.168 CC lib/env_dpdk/threads.o 00:03:27.168 CC lib/env_dpdk/pci_ioat.o 00:03:27.168 CC lib/env_dpdk/pci_virtio.o 00:03:27.168 LIB libspdk_vmd.a 00:03:27.168 CC lib/env_dpdk/pci_vmd.o 00:03:27.168 LIB libspdk_idxd.a 00:03:27.168 LIB libspdk_json.a 00:03:27.168 CC lib/env_dpdk/pci_idxd.o 00:03:27.168 CC lib/env_dpdk/pci_event.o 00:03:27.168 CC lib/env_dpdk/sigbus_handler.o 00:03:27.168 CC lib/env_dpdk/pci_dpdk.o 00:03:27.168 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.168 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.168 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.168 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.168 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.168 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:27.427 LIB libspdk_jsonrpc.a 00:03:27.427 CC lib/rpc/rpc.o 00:03:27.427 LIB libspdk_rpc.a 00:03:27.685 LIB libspdk_env_dpdk.a 00:03:27.685 CC lib/trace/trace.o 00:03:27.685 CC lib/trace/trace_flags.o 00:03:27.685 CC lib/trace/trace_rpc.o 00:03:27.685 CC lib/notify/notify_rpc.o 00:03:27.685 CC lib/notify/notify.o 00:03:27.685 CC lib/keyring/keyring.o 00:03:27.685 CC lib/keyring/keyring_rpc.o 00:03:27.685 LIB libspdk_notify.a 00:03:27.685 LIB libspdk_trace.a 00:03:27.685 LIB libspdk_keyring.a 00:03:27.685 CC lib/sock/sock.o 00:03:27.685 CC lib/sock/sock_rpc.o 00:03:27.685 CC lib/thread/thread.o 00:03:27.685 CC lib/thread/iobuf.o 
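The contigmem.ko and nic_uio.ko objects produced in the [201/233] and [202/233] steps above are the two FreeBSD kernel modules SPDK needs on this host; the autotest installs and loads them later in this log. A minimal sketch of doing the same by hand, assuming the two freshly built .ko files sit in the current directory:

    # copy the modules somewhere in the kernel module search path
    cp contigmem.ko nic_uio.ko /boot/modules/
    # contigmem reads its hw.contigmem.* tunables at load time, so set those first
    # (see the setup.sh output near the end of this log), then load and verify
    kldload contigmem
    kldload nic_uio
    kldstat | grep -E 'contigmem|nic_uio'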
00:03:27.942 LIB libspdk_trace_parser.a 00:03:27.942 LIB libspdk_sock.a 00:03:27.942 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.942 CC lib/nvme/nvme_ctrlr.o 00:03:27.942 CC lib/nvme/nvme_fabric.o 00:03:27.942 CC lib/nvme/nvme_ns.o 00:03:27.942 CC lib/nvme/nvme_ns_cmd.o 00:03:27.942 CC lib/nvme/nvme_pcie.o 00:03:27.942 CC lib/nvme/nvme_qpair.o 00:03:27.942 CC lib/nvme/nvme_pcie_common.o 00:03:27.942 CC lib/nvme/nvme.o 00:03:28.201 LIB libspdk_thread.a 00:03:28.201 CC lib/nvme/nvme_quirks.o 00:03:28.767 CC lib/nvme/nvme_transport.o 00:03:28.767 CC lib/nvme/nvme_discovery.o 00:03:28.768 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.768 CC lib/accel/accel.o 00:03:28.768 CC lib/blob/blobstore.o 00:03:28.768 CC lib/init/json_config.o 00:03:28.768 CC lib/init/subsystem.o 00:03:28.768 CC lib/accel/accel_rpc.o 00:03:28.768 CC lib/init/subsystem_rpc.o 00:03:28.768 CC lib/blob/request.o 00:03:28.768 CC lib/accel/accel_sw.o 00:03:28.768 CC lib/blob/zeroes.o 00:03:28.768 CC lib/init/rpc.o 00:03:28.768 CC lib/blob/blob_bs_dev.o 00:03:28.768 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.768 CC lib/nvme/nvme_tcp.o 00:03:28.768 LIB libspdk_accel.a 00:03:29.026 CC lib/nvme/nvme_opal.o 00:03:29.026 LIB libspdk_init.a 00:03:29.026 CC lib/nvme/nvme_io_msg.o 00:03:29.026 CC lib/bdev/bdev.o 00:03:29.026 CC lib/event/app.o 00:03:29.026 CC lib/event/reactor.o 00:03:29.284 CC lib/bdev/bdev_rpc.o 00:03:29.284 CC lib/bdev/bdev_zone.o 00:03:29.284 CC lib/event/log_rpc.o 00:03:29.284 CC lib/event/app_rpc.o 00:03:29.284 LIB libspdk_blob.a 00:03:29.284 CC lib/bdev/part.o 00:03:29.284 CC lib/event/scheduler_static.o 00:03:29.284 CC lib/nvme/nvme_poll_group.o 00:03:29.284 CC lib/nvme/nvme_zns.o 00:03:29.284 CC lib/bdev/scsi_nvme.o 00:03:29.284 LIB libspdk_event.a 00:03:29.284 CC lib/nvme/nvme_stubs.o 00:03:29.284 CC lib/blobfs/blobfs.o 00:03:29.285 CC lib/nvme/nvme_auth.o 00:03:29.285 CC lib/blobfs/tree.o 00:03:29.543 CC lib/nvme/nvme_rdma.o 00:03:29.543 CC lib/lvol/lvol.o 00:03:29.543 LIB libspdk_bdev.a 00:03:29.543 LIB libspdk_blobfs.a 00:03:29.543 CC lib/scsi/dev.o 00:03:29.543 CC lib/scsi/lun.o 00:03:29.543 CC lib/scsi/port.o 00:03:29.801 LIB libspdk_lvol.a 00:03:29.801 CC lib/scsi/scsi.o 00:03:29.801 CC lib/scsi/scsi_bdev.o 00:03:29.801 CC lib/scsi/scsi_pr.o 00:03:29.801 CC lib/scsi/scsi_rpc.o 00:03:29.801 CC lib/scsi/task.o 00:03:29.801 LIB libspdk_scsi.a 00:03:30.061 CC lib/iscsi/conn.o 00:03:30.061 CC lib/iscsi/init_grp.o 00:03:30.061 CC lib/iscsi/iscsi.o 00:03:30.061 CC lib/iscsi/md5.o 00:03:30.061 CC lib/iscsi/param.o 00:03:30.061 CC lib/iscsi/portal_grp.o 00:03:30.061 CC lib/iscsi/tgt_node.o 00:03:30.061 CC lib/iscsi/iscsi_subsystem.o 00:03:30.061 CC lib/iscsi/iscsi_rpc.o 00:03:30.061 CC lib/iscsi/task.o 00:03:30.061 LIB libspdk_nvme.a 00:03:30.319 CC lib/nvmf/ctrlr.o 00:03:30.319 CC lib/nvmf/ctrlr_discovery.o 00:03:30.319 CC lib/nvmf/ctrlr_bdev.o 00:03:30.319 CC lib/nvmf/subsystem.o 00:03:30.319 CC lib/nvmf/nvmf.o 00:03:30.319 CC lib/nvmf/nvmf_rpc.o 00:03:30.319 CC lib/nvmf/transport.o 00:03:30.319 CC lib/nvmf/tcp.o 00:03:30.319 CC lib/nvmf/stubs.o 00:03:30.319 LIB libspdk_iscsi.a 00:03:30.319 CC lib/nvmf/mdns_server.o 00:03:30.319 CC lib/nvmf/rdma.o 00:03:30.319 CC lib/nvmf/auth.o 00:03:30.885 LIB libspdk_nvmf.a 00:03:30.885 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.143 CC module/accel/ioat/accel_ioat.o 00:03:31.143 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.143 CC module/accel/error/accel_error.o 00:03:31.143 CC module/keyring/file/keyring.o 00:03:31.143 CC module/sock/posix/posix.o 00:03:31.143 CC 
module/blob/bdev/blob_bdev.o 00:03:31.143 CC module/accel/iaa/accel_iaa.o 00:03:31.143 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.143 CC module/accel/dsa/accel_dsa.o 00:03:31.143 LIB libspdk_env_dpdk_rpc.a 00:03:31.143 CC module/keyring/file/keyring_rpc.o 00:03:31.143 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.143 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.143 LIB libspdk_accel_ioat.a 00:03:31.143 CC module/accel/error/accel_error_rpc.o 00:03:31.143 LIB libspdk_scheduler_dynamic.a 00:03:31.143 LIB libspdk_keyring_file.a 00:03:31.143 LIB libspdk_accel_iaa.a 00:03:31.143 LIB libspdk_accel_dsa.a 00:03:31.143 LIB libspdk_blob_bdev.a 00:03:31.143 LIB libspdk_accel_error.a 00:03:31.401 CC module/bdev/error/vbdev_error.o 00:03:31.401 CC module/bdev/delay/vbdev_delay.o 00:03:31.401 CC module/bdev/malloc/bdev_malloc.o 00:03:31.401 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.402 LIB libspdk_sock_posix.a 00:03:31.402 CC module/bdev/null/bdev_null.o 00:03:31.402 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.402 CC module/bdev/gpt/gpt.o 00:03:31.402 CC module/bdev/nvme/bdev_nvme.o 00:03:31.402 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.402 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.402 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.402 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.402 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.402 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.402 LIB libspdk_bdev_passthru.a 00:03:31.402 CC module/bdev/null/bdev_null_rpc.o 00:03:31.402 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.402 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.402 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.402 LIB libspdk_blobfs_bdev.a 00:03:31.660 LIB libspdk_bdev_error.a 00:03:31.660 LIB libspdk_bdev_delay.a 00:03:31.660 CC module/bdev/nvme/nvme_rpc.o 00:03:31.660 CC module/bdev/raid/bdev_raid.o 00:03:31.660 LIB libspdk_bdev_malloc.a 00:03:31.660 LIB libspdk_bdev_gpt.a 00:03:31.660 CC module/bdev/split/vbdev_split.o 00:03:31.660 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.660 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.660 LIB libspdk_bdev_null.a 00:03:31.660 LIB libspdk_bdev_lvol.a 00:03:31.660 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.660 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.660 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.660 CC module/bdev/raid/raid0.o 00:03:31.660 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.660 CC module/bdev/raid/raid1.o 00:03:31.660 CC module/bdev/raid/concat.o 00:03:31.660 LIB libspdk_bdev_split.a 00:03:31.660 CC module/bdev/aio/bdev_aio.o 00:03:31.660 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.919 LIB libspdk_bdev_zone_block.a 00:03:31.919 LIB libspdk_bdev_raid.a 00:03:31.919 LIB libspdk_bdev_nvme.a 00:03:31.919 LIB libspdk_bdev_aio.a 00:03:32.177 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.177 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.177 CC module/event/subsystems/vmd/vmd.o 00:03:32.177 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.177 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.177 CC module/event/subsystems/keyring/keyring.o 00:03:32.177 CC module/event/subsystems/sock/sock.o 00:03:32.177 LIB libspdk_event_keyring.a 00:03:32.177 LIB libspdk_event_vmd.a 00:03:32.177 LIB libspdk_event_scheduler.a 00:03:32.177 LIB libspdk_event_sock.a 00:03:32.177 LIB libspdk_event_iobuf.a 00:03:32.435 CC module/event/subsystems/accel/accel.o 00:03:32.435 LIB libspdk_event_accel.a 00:03:32.694 CC module/event/subsystems/bdev/bdev.o 00:03:32.694 LIB libspdk_event_bdev.a 
00:03:32.694 CC module/event/subsystems/scsi/scsi.o 00:03:32.694 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.694 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.952 LIB libspdk_event_scsi.a 00:03:32.952 LIB libspdk_event_nvmf.a 00:03:32.952 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.287 LIB libspdk_event_iscsi.a 00:03:33.287 CC app/spdk_lspci/spdk_lspci.o 00:03:33.287 CXX app/trace/trace.o 00:03:33.287 CC app/trace_record/trace_record.o 00:03:33.287 CC app/spdk_nvme_perf/perf.o 00:03:33.287 CC app/nvmf_tgt/nvmf_main.o 00:03:33.287 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.287 CC examples/ioat/perf/perf.o 00:03:33.287 CC app/spdk_tgt/spdk_tgt.o 00:03:33.287 CC examples/util/zipf/zipf.o 00:03:33.287 CC test/thread/poller_perf/poller_perf.o 00:03:33.287 LINK spdk_lspci 00:03:33.287 LINK spdk_trace_record 00:03:33.287 LINK ioat_perf 00:03:33.287 LINK zipf 00:03:33.287 LINK nvmf_tgt 00:03:33.287 LINK poller_perf 00:03:33.547 LINK spdk_tgt 00:03:33.547 LINK iscsi_tgt 00:03:33.547 CC examples/ioat/verify/verify.o 00:03:33.547 CC test/thread/lock/spdk_lock.o 00:03:33.547 CC app/spdk_nvme_identify/identify.o 00:03:33.547 LINK verify 00:03:33.547 LINK spdk_nvme_perf 00:03:33.547 CC test/dma/test_dma/test_dma.o 00:03:33.547 CC examples/thread/thread/thread_ex.o 00:03:33.547 CC test/app/bdev_svc/bdev_svc.o 00:03:33.547 TEST_HEADER include/spdk/accel.h 00:03:33.547 TEST_HEADER include/spdk/accel_module.h 00:03:33.547 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.547 TEST_HEADER include/spdk/assert.h 00:03:33.547 TEST_HEADER include/spdk/barrier.h 00:03:33.547 TEST_HEADER include/spdk/base64.h 00:03:33.547 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.547 TEST_HEADER include/spdk/bdev.h 00:03:33.547 TEST_HEADER include/spdk/bdev_module.h 00:03:33.547 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.547 TEST_HEADER include/spdk/bit_array.h 00:03:33.547 TEST_HEADER include/spdk/bit_pool.h 00:03:33.547 TEST_HEADER include/spdk/blob.h 00:03:33.547 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.547 TEST_HEADER include/spdk/blobfs.h 00:03:33.547 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.547 TEST_HEADER include/spdk/conf.h 00:03:33.547 TEST_HEADER include/spdk/config.h 00:03:33.547 TEST_HEADER include/spdk/cpuset.h 00:03:33.547 TEST_HEADER include/spdk/crc16.h 00:03:33.547 TEST_HEADER include/spdk/crc32.h 00:03:33.547 TEST_HEADER include/spdk/crc64.h 00:03:33.547 TEST_HEADER include/spdk/dif.h 00:03:33.547 TEST_HEADER include/spdk/dma.h 00:03:33.547 TEST_HEADER include/spdk/endian.h 00:03:33.547 TEST_HEADER include/spdk/env.h 00:03:33.547 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.547 TEST_HEADER include/spdk/event.h 00:03:33.547 TEST_HEADER include/spdk/fd.h 00:03:33.547 TEST_HEADER include/spdk/fd_group.h 00:03:33.547 TEST_HEADER include/spdk/file.h 00:03:33.547 TEST_HEADER include/spdk/ftl.h 00:03:33.547 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.547 TEST_HEADER include/spdk/hexlify.h 00:03:33.547 TEST_HEADER include/spdk/histogram_data.h 00:03:33.547 TEST_HEADER include/spdk/idxd.h 00:03:33.547 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.547 TEST_HEADER include/spdk/init.h 00:03:33.547 TEST_HEADER include/spdk/ioat.h 00:03:33.547 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.547 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.547 TEST_HEADER include/spdk/json.h 00:03:33.547 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.547 TEST_HEADER include/spdk/keyring.h 00:03:33.547 TEST_HEADER include/spdk/keyring_module.h 00:03:33.547 TEST_HEADER include/spdk/likely.h 00:03:33.547 
TEST_HEADER include/spdk/log.h 00:03:33.547 TEST_HEADER include/spdk/lvol.h 00:03:33.547 TEST_HEADER include/spdk/memory.h 00:03:33.547 TEST_HEADER include/spdk/mmio.h 00:03:33.547 TEST_HEADER include/spdk/nbd.h 00:03:33.547 TEST_HEADER include/spdk/net.h 00:03:33.547 TEST_HEADER include/spdk/notify.h 00:03:33.547 TEST_HEADER include/spdk/nvme.h 00:03:33.547 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.547 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.805 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.805 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.805 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.805 TEST_HEADER include/spdk/nvmf.h 00:03:33.805 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.805 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.805 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.805 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.805 TEST_HEADER include/spdk/opal.h 00:03:33.805 TEST_HEADER include/spdk/opal_spec.h 00:03:33.805 TEST_HEADER include/spdk/pci_ids.h 00:03:33.805 TEST_HEADER include/spdk/pipe.h 00:03:33.805 TEST_HEADER include/spdk/queue.h 00:03:33.805 TEST_HEADER include/spdk/reduce.h 00:03:33.805 TEST_HEADER include/spdk/rpc.h 00:03:33.805 TEST_HEADER include/spdk/scheduler.h 00:03:33.805 LINK bdev_svc 00:03:33.805 TEST_HEADER include/spdk/scsi.h 00:03:33.805 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.805 TEST_HEADER include/spdk/sock.h 00:03:33.805 TEST_HEADER include/spdk/stdinc.h 00:03:33.805 TEST_HEADER include/spdk/string.h 00:03:33.805 TEST_HEADER include/spdk/thread.h 00:03:33.805 TEST_HEADER include/spdk/trace.h 00:03:33.806 TEST_HEADER include/spdk/trace_parser.h 00:03:33.806 TEST_HEADER include/spdk/tree.h 00:03:33.806 TEST_HEADER include/spdk/ublk.h 00:03:33.806 LINK test_dma 00:03:33.806 TEST_HEADER include/spdk/util.h 00:03:33.806 TEST_HEADER include/spdk/uuid.h 00:03:33.806 TEST_HEADER include/spdk/version.h 00:03:33.806 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.806 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.806 TEST_HEADER include/spdk/vhost.h 00:03:33.806 LINK spdk_nvme_discover 00:03:33.806 TEST_HEADER include/spdk/vmd.h 00:03:33.806 TEST_HEADER include/spdk/xor.h 00:03:33.806 TEST_HEADER include/spdk/zipf.h 00:03:33.806 CXX test/cpp_headers/accel.o 00:03:33.806 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.806 LINK spdk_nvme_identify 00:03:33.806 LINK thread 00:03:33.806 LINK nvme_fuzz 00:03:33.806 LINK spdk_lock 00:03:33.806 CXX test/cpp_headers/accel_module.o 00:03:33.806 CC test/rpc_client/rpc_client_test.o 00:03:33.806 CC test/app/histogram_perf/histogram_perf.o 00:03:33.806 CC examples/sock/hello_world/hello_sock.o 00:03:33.806 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.806 LINK histogram_perf 00:03:33.806 CC app/spdk_top/spdk_top.o 00:03:33.806 CC test/app/jsoncat/jsoncat.o 00:03:34.063 CXX test/cpp_headers/assert.o 00:03:34.064 CC app/fio/nvme/fio_plugin.o 00:03:34.064 LINK rpc_client_test 00:03:34.064 LINK jsoncat 00:03:34.064 CC test/env/vtophys/vtophys.o 00:03:34.064 LINK hello_sock 00:03:34.064 CXX test/cpp_headers/barrier.o 00:03:34.064 LINK vtophys 00:03:34.064 CC test/app/stub/stub.o 00:03:34.064 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.064 fio_plugin.c:1603:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:34.064 struct spdk_nvme_fdp_ruhs ruhs; 00:03:34.064 ^ 00:03:34.064 CC examples/vmd/led/led.o 00:03:34.064 LINK spdk_top 00:03:34.064 CXX test/cpp_headers/base64.o 
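The long run of CXX test/cpp_headers/*.o steps above appears to be a header self-containment check: each public SPDK header is compiled on its own as a C++ translation unit, so a header that drags in undeclared dependencies or is not C++-safe breaks the build. A rough stand-alone equivalent for a single header, assuming the usual include/ layout of the spdk repo checked out above:

    # compile a translation unit that consists of nothing but one public header
    echo '#include <spdk/nvme.h>' | c++ -x c++ -I include -c - -o /dev/null \
        && echo 'spdk/nvme.h compiled stand-alone'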
00:03:34.064 LINK spdk_trace 00:03:34.064 LINK lsvmd 00:03:34.064 LINK stub 00:03:34.323 1 warning generated. 00:03:34.323 LINK spdk_nvme 00:03:34.323 LINK led 00:03:34.323 LINK mem_callbacks 00:03:34.323 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:34.323 CC app/fio/bdev/fio_plugin.o 00:03:34.323 CC examples/idxd/perf/perf.o 00:03:34.323 CXX test/cpp_headers/bdev.o 00:03:34.323 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.323 LINK histogram_ut 00:03:34.323 CC test/unit/lib/log/log.c/log_ut.o 00:03:34.323 CC test/accel/dif/dif.o 00:03:34.323 LINK iscsi_fuzz 00:03:34.323 CC examples/accel/perf/accel_perf.o 00:03:34.323 LINK idxd_perf 00:03:34.323 CC test/env/memory/memory_ut.o 00:03:34.323 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:34.323 LINK env_dpdk_post_init 00:03:34.582 LINK log_ut 00:03:34.582 LINK accel_perf 00:03:34.582 CXX test/cpp_headers/bdev_module.o 00:03:34.582 LINK spdk_bdev 00:03:34.582 CC examples/blob/hello_world/hello_blob.o 00:03:34.582 LINK dif 00:03:34.582 CC test/blobfs/mkfs/mkfs.o 00:03:34.582 CC test/env/pci/pci_ut.o 00:03:34.582 CC examples/nvme/hello_world/hello_world.o 00:03:34.582 CXX test/cpp_headers/bdev_zone.o 00:03:34.582 CC test/event/event_perf/event_perf.o 00:03:34.582 LINK hello_blob 00:03:34.582 LINK common_ut 00:03:34.582 LINK mkfs 00:03:34.582 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.582 LINK event_perf 00:03:34.582 LINK pci_ut 00:03:34.841 LINK hello_world 00:03:34.841 CC test/event/reactor/reactor.o 00:03:34.841 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:34.841 CXX test/cpp_headers/bit_array.o 00:03:34.841 CC examples/blob/cli/blobcli.o 00:03:34.841 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:34.841 LINK reactor 00:03:34.841 LINK hello_bdev 00:03:34.841 CC examples/nvme/reconnect/reconnect.o 00:03:34.841 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:34.841 LINK base64_ut 00:03:34.841 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:34.841 CC test/event/reactor_perf/reactor_perf.o 00:03:34.841 CXX test/cpp_headers/bit_pool.o 00:03:34.841 gmake[2]: Nothing to be done for 'all'. 
00:03:34.841 LINK blobcli 00:03:34.841 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.099 LINK reconnect 00:03:35.099 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:35.099 LINK reactor_perf 00:03:35.099 LINK bit_array_ut 00:03:35.099 CXX test/cpp_headers/blob.o 00:03:35.099 CXX test/cpp_headers/blob_bdev.o 00:03:35.099 LINK ioat_ut 00:03:35.099 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:35.099 LINK cpuset_ut 00:03:35.099 LINK memory_ut 00:03:35.099 CC test/nvme/aer/aer.o 00:03:35.099 CXX test/cpp_headers/blobfs.o 00:03:35.099 LINK dma_ut 00:03:35.099 CC examples/nvme/arbitration/arbitration.o 00:03:35.099 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:35.099 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:35.099 CC examples/nvme/hotplug/hotplug.o 00:03:35.099 LINK crc16_ut 00:03:35.099 LINK bdevperf 00:03:35.099 LINK crc32_ieee_ut 00:03:35.099 CC test/nvme/reset/reset.o 00:03:35.358 LINK aer 00:03:35.358 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:35.358 LINK nvme_manage 00:03:35.358 LINK arbitration 00:03:35.358 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.358 CC test/nvme/sgl/sgl.o 00:03:35.358 LINK hotplug 00:03:35.358 CXX test/cpp_headers/conf.o 00:03:35.358 LINK crc32c_ut 00:03:35.358 CC test/bdev/bdevio/bdevio.o 00:03:35.358 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:35.358 LINK reset 00:03:35.358 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:35.358 CC test/nvme/e2edp/nvme_dp.o 00:03:35.358 LINK sgl 00:03:35.358 CC test/nvme/overhead/overhead.o 00:03:35.358 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.358 LINK crc64_ut 00:03:35.358 CC examples/nvme/abort/abort.o 00:03:35.358 CXX test/cpp_headers/config.o 00:03:35.358 CXX test/cpp_headers/cpuset.o 00:03:35.358 CC test/unit/lib/util/file.c/file_ut.o 00:03:35.616 CC test/nvme/err_injection/err_injection.o 00:03:35.616 LINK nvme_dp 00:03:35.616 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.616 LINK bdevio 00:03:35.616 LINK cmb_copy 00:03:35.616 LINK overhead 00:03:35.616 LINK file_ut 00:03:35.616 CXX test/cpp_headers/crc16.o 00:03:35.616 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:35.616 LINK abort 00:03:35.616 LINK pmr_persistence 00:03:35.616 LINK err_injection 00:03:35.616 CC test/nvme/startup/startup.o 00:03:35.616 CC test/unit/lib/util/net.c/net_ut.o 00:03:35.616 CC test/unit/lib/util/math.c/math_ut.o 00:03:35.616 CC test/nvme/reserve/reserve.o 00:03:35.616 LINK iov_ut 00:03:35.616 CXX test/cpp_headers/crc32.o 00:03:35.616 CXX test/cpp_headers/crc64.o 00:03:35.616 LINK net_ut 00:03:35.616 LINK startup 00:03:35.616 CXX test/cpp_headers/dif.o 00:03:35.616 LINK math_ut 00:03:35.874 CC test/nvme/simple_copy/simple_copy.o 00:03:35.874 LINK dif_ut 00:03:35.874 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:35.874 LINK reserve 00:03:35.874 CXX test/cpp_headers/dma.o 00:03:35.874 CC examples/nvmf/nvmf/nvmf.o 00:03:35.874 CC test/nvme/connect_stress/connect_stress.o 00:03:35.874 CC test/nvme/boot_partition/boot_partition.o 00:03:35.874 LINK simple_copy 00:03:35.874 CXX test/cpp_headers/endian.o 00:03:35.874 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:35.874 CC test/unit/lib/util/string.c/string_ut.o 00:03:35.874 LINK connect_stress 00:03:35.874 CXX test/cpp_headers/env.o 00:03:35.874 CC test/nvme/compliance/nvme_compliance.o 00:03:35.874 LINK nvmf 00:03:35.874 LINK boot_partition 00:03:35.874 CXX test/cpp_headers/env_dpdk.o 00:03:35.874 LINK pipe_ut 00:03:36.131 CXX test/cpp_headers/event.o 00:03:36.131 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.131 LINK string_ut 00:03:36.131 LINK xor_ut 
00:03:36.131 CXX test/cpp_headers/fd.o 00:03:36.131 CXX test/cpp_headers/fd_group.o 00:03:36.131 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.131 CC test/nvme/fdp/fdp.o 00:03:36.131 LINK fused_ordering 00:03:36.131 LINK nvme_compliance 00:03:36.131 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:36.131 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:36.131 LINK doorbell_aers 00:03:36.131 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:36.131 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:36.131 CXX test/cpp_headers/file.o 00:03:36.131 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:36.131 CXX test/cpp_headers/ftl.o 00:03:36.131 LINK fdp 00:03:36.131 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:36.388 CXX test/cpp_headers/gpt_spec.o 00:03:36.388 CXX test/cpp_headers/hexlify.o 00:03:36.388 LINK pci_event_ut 00:03:36.388 CXX test/cpp_headers/histogram_data.o 00:03:36.388 CXX test/cpp_headers/idxd.o 00:03:36.388 LINK json_util_ut 00:03:36.388 LINK idxd_user_ut 00:03:36.388 CXX test/cpp_headers/idxd_spec.o 00:03:36.388 CXX test/cpp_headers/init.o 00:03:36.388 CXX test/cpp_headers/ioat.o 00:03:36.388 CXX test/cpp_headers/ioat_spec.o 00:03:36.388 CXX test/cpp_headers/iscsi_spec.o 00:03:36.388 LINK idxd_ut 00:03:36.645 CXX test/cpp_headers/json.o 00:03:36.645 CXX test/cpp_headers/jsonrpc.o 00:03:36.645 CXX test/cpp_headers/keyring.o 00:03:36.645 CXX test/cpp_headers/keyring_module.o 00:03:36.645 CXX test/cpp_headers/likely.o 00:03:36.645 LINK json_write_ut 00:03:36.645 CXX test/cpp_headers/log.o 00:03:36.645 CXX test/cpp_headers/lvol.o 00:03:36.645 CXX test/cpp_headers/memory.o 00:03:36.645 CXX test/cpp_headers/mmio.o 00:03:36.645 CXX test/cpp_headers/nbd.o 00:03:36.645 CXX test/cpp_headers/net.o 00:03:36.645 CXX test/cpp_headers/notify.o 00:03:36.645 CXX test/cpp_headers/nvme.o 00:03:36.645 LINK json_parse_ut 00:03:36.645 CXX test/cpp_headers/nvme_intel.o 00:03:36.645 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.645 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.645 CXX test/cpp_headers/nvme_spec.o 00:03:36.645 CXX test/cpp_headers/nvme_zns.o 00:03:36.902 CXX test/cpp_headers/nvmf.o 00:03:36.902 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:36.902 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.902 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.902 CXX test/cpp_headers/nvmf_spec.o 00:03:36.902 CXX test/cpp_headers/nvmf_transport.o 00:03:36.902 CXX test/cpp_headers/opal.o 00:03:36.902 CXX test/cpp_headers/opal_spec.o 00:03:36.902 LINK jsonrpc_server_ut 00:03:36.902 CXX test/cpp_headers/pci_ids.o 00:03:36.902 CXX test/cpp_headers/pipe.o 00:03:36.902 CXX test/cpp_headers/queue.o 00:03:36.902 CXX test/cpp_headers/reduce.o 00:03:36.902 CXX test/cpp_headers/rpc.o 00:03:36.902 CXX test/cpp_headers/scheduler.o 00:03:36.902 CXX test/cpp_headers/scsi.o 00:03:37.159 CXX test/cpp_headers/scsi_spec.o 00:03:37.159 CXX test/cpp_headers/sock.o 00:03:37.159 CXX test/cpp_headers/stdinc.o 00:03:37.159 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:37.159 CXX test/cpp_headers/string.o 00:03:37.159 CXX test/cpp_headers/thread.o 00:03:37.159 CXX test/cpp_headers/trace.o 00:03:37.159 CXX test/cpp_headers/trace_parser.o 00:03:37.159 CXX test/cpp_headers/tree.o 00:03:37.159 CXX test/cpp_headers/ublk.o 00:03:37.159 CXX test/cpp_headers/util.o 00:03:37.159 CXX test/cpp_headers/uuid.o 00:03:37.159 CXX test/cpp_headers/version.o 00:03:37.159 CXX test/cpp_headers/vfio_user_pci.o 00:03:37.159 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.159 CXX 
test/cpp_headers/vhost.o 00:03:37.159 CXX test/cpp_headers/vmd.o 00:03:37.416 CXX test/cpp_headers/xor.o 00:03:37.416 CXX test/cpp_headers/zipf.o 00:03:37.416 LINK rpc_ut 00:03:37.416 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:37.416 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:37.416 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:37.416 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:37.416 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:37.416 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:37.673 LINK keyring_ut 00:03:37.673 LINK notify_ut 00:03:37.673 LINK iobuf_ut 00:03:37.931 LINK posix_ut 00:03:37.931 LINK thread_ut 00:03:37.931 LINK sock_ut 00:03:37.931 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:37.931 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:37.931 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:37.931 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:37.931 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:38.189 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:38.189 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:38.189 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:38.189 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:38.189 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:38.189 LINK rpc_ut 00:03:38.189 LINK subsystem_ut 00:03:38.189 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:38.189 LINK blob_bdev_ut 00:03:38.446 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:38.446 CC test/unit/lib/event/app.c/app_ut.o 00:03:38.703 LINK app_ut 00:03:38.704 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:38.704 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:38.704 LINK accel_ut 00:03:38.704 LINK nvme_ctrlr_cmd_ut 00:03:38.704 LINK nvme_ns_ut 00:03:38.704 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:38.704 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:38.704 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:38.963 LINK nvme_ut 00:03:38.963 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:38.963 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:38.963 LINK reactor_ut 00:03:38.963 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:39.221 LINK nvme_ns_ocssd_cmd_ut 00:03:39.221 LINK nvme_ctrlr_ut 00:03:39.221 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:39.221 LINK nvme_ns_cmd_ut 00:03:39.221 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:39.221 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:39.221 LINK scsi_nvme_ut 00:03:39.479 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:39.479 LINK nvme_poll_group_ut 00:03:39.479 LINK gpt_ut 00:03:39.479 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:39.479 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:39.479 LINK nvme_qpair_ut 00:03:39.479 LINK blob_ut 00:03:39.737 LINK nvme_pcie_ut 00:03:39.737 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:39.737 LINK nvme_quirks_ut 00:03:39.737 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:39.737 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:39.737 LINK part_ut 00:03:39.737 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:39.737 LINK vbdev_lvol_ut 00:03:39.737 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:39.737 LINK bdev_zone_ut 00:03:39.737 LINK tree_ut 00:03:39.996 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:39.996 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:39.996 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:39.996 LINK 
vbdev_zone_block_ut 00:03:39.996 LINK bdev_ut 00:03:39.996 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:39.996 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:40.254 LINK bdev_raid_ut 00:03:40.254 LINK nvme_transport_ut 00:03:40.254 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:40.254 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:40.254 LINK blobfs_async_ut 00:03:40.254 LINK bdev_raid_sb_ut 00:03:40.254 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:40.254 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:40.512 LINK bdev_ut 00:03:40.512 LINK lvol_ut 00:03:40.512 LINK nvme_io_msg_ut 00:03:40.512 LINK nvme_tcp_ut 00:03:40.512 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:40.512 LINK concat_ut 00:03:40.512 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:40.512 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:40.512 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:40.512 LINK nvme_pcie_common_ut 00:03:40.512 LINK blobfs_bdev_ut 00:03:40.770 LINK nvme_opal_ut 00:03:40.770 LINK blobfs_sync_ut 00:03:40.770 LINK raid1_ut 00:03:40.770 LINK raid0_ut 00:03:40.770 LINK nvme_fabric_ut 00:03:41.336 LINK bdev_nvme_ut 00:03:41.594 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:41.594 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:41.594 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:41.594 LINK nvme_rdma_ut 00:03:41.594 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:41.594 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:41.594 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:41.594 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:41.594 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:41.594 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:41.594 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:41.853 LINK dev_ut 00:03:41.853 LINK scsi_ut 00:03:41.853 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:41.853 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:41.853 LINK lun_ut 00:03:41.853 LINK scsi_pr_ut 00:03:41.853 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:41.853 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:41.853 LINK scsi_bdev_ut 00:03:41.853 LINK ctrlr_bdev_ut 00:03:42.111 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:42.111 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:42.111 LINK nvmf_ut 00:03:42.111 LINK ctrlr_discovery_ut 00:03:42.369 LINK init_grp_ut 00:03:42.369 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:42.369 LINK auth_ut 00:03:42.369 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:42.369 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:42.369 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:42.369 LINK ctrlr_ut 00:03:42.369 LINK subsystem_ut 00:03:42.369 LINK conn_ut 00:03:42.369 LINK param_ut 00:03:42.627 LINK rdma_ut 00:03:42.627 LINK transport_ut 00:03:42.627 LINK tcp_ut 00:03:42.627 LINK portal_grp_ut 00:03:42.627 LINK tgt_node_ut 00:03:42.886 LINK iscsi_ut 00:03:42.886 00:03:42.886 real 1m1.142s 00:03:42.886 user 4m22.317s 00:03:42.886 sys 0m43.333s 00:03:42.886 14:52:08 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:42.886 ************************************ 00:03:42.886 END TEST unittest_build 00:03:42.886 ************************************ 00:03:42.886 14:52:08 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:42.886 14:52:08 -- common/autotest_common.sh@1142 -- $ return 0 00:03:42.886 14:52:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:42.886 14:52:08 -- pm/common@29 -- 
$ signal_monitor_resources TERM 00:03:42.886 14:52:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:42.886 14:52:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.886 14:52:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:42.886 14:52:08 -- pm/common@44 -- $ pid=1274 00:03:42.886 14:52:08 -- pm/common@50 -- $ kill -TERM 1274 00:03:43.144 14:52:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.144 14:52:08 -- nvmf/common.sh@7 -- # uname -s 00:03:43.144 14:52:08 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:43.144 14:52:08 -- nvmf/common.sh@7 -- # return 0 00:03:43.144 14:52:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.144 14:52:08 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.145 14:52:08 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:43.145 14:52:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.145 14:52:08 -- pm/common@17 -- # local monitor 00:03:43.145 14:52:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.145 14:52:08 -- pm/common@25 -- # sleep 1 00:03:43.145 14:52:08 -- pm/common@21 -- # date +%s 00:03:43.145 14:52:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720795928 00:03:43.145 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720795928_collect-vmstat.pm.log 00:03:44.079 14:52:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.079 14:52:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.079 14:52:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.079 14:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:44.079 14:52:09 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.079 14:52:09 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:44.079 14:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:44.337 14:52:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.337 14:52:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.337 14:52:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.337 14:52:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.337 14:52:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.337 14:52:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.337 14:52:09 -- common/autotest_common.sh@1455 -- # uname 00:03:44.337 14:52:09 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:03:44.337 14:52:09 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:03:44.337 kldunload: can't find file contigmem.ko 00:03:44.337 14:52:09 -- common/autotest_common.sh@1456 -- # true 00:03:44.337 14:52:09 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:03:44.337 14:52:09 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:44.337 14:52:09 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:44.337 14:52:09 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:44.337 14:52:09 -- common/autotest_common.sh@1466 -- # cp -f 
/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:44.337 14:52:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.337 14:52:09 -- common/autotest_common.sh@1475 -- # uname 00:03:44.337 14:52:09 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:03:44.337 14:52:09 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:03:44.337 14:52:09 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:03:44.337 14:52:09 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:44.337 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:44.337 14:52:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:44.337 14:52:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:44.337 14:52:09 -- spdk/autotest.sh@72 -- # hash lcov 00:03:44.337 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:44.337 14:52:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:44.337 14:52:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.337 14:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:44.337 14:52:09 -- spdk/autotest.sh@91 -- # rm -f 00:03:44.337 14:52:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.337 kldunload: can't find file contigmem.ko 00:03:44.337 kldunload: can't find file nic_uio.ko 00:03:44.337 14:52:09 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:44.337 14:52:09 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.337 14:52:09 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.337 14:52:09 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.337 14:52:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:44.337 14:52:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.337 14:52:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.337 14:52:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:44.337 14:52:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:44.337 14:52:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:44.337 nvme0ns1 is not a block device 00:03:44.337 14:52:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:44.337 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:44.337 14:52:10 -- scripts/common.sh@391 -- # pt= 00:03:44.337 14:52:10 -- scripts/common.sh@392 -- # return 1 00:03:44.337 14:52:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:44.337 1+0 records in 00:03:44.337 1+0 records out 00:03:44.337 1048576 bytes transferred in 0.005108 secs (205291014 bytes/sec) 00:03:44.337 14:52:10 -- spdk/autotest.sh@118 -- # sync 00:03:44.903 14:52:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:44.903 14:52:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.903 14:52:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.836 14:52:11 -- spdk/autotest.sh@124 -- # uname -s 00:03:45.836 14:52:11 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:45.836 14:52:11 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:45.836 Contigmem (not present) 00:03:45.836 Buffer Size: not set 00:03:45.836 Num Buffers: not set 00:03:45.836 00:03:45.836 00:03:45.836 Type BDF Vendor Device Driver 00:03:45.836 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:03:45.836 
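The host preparation the autotest performs is visible in the trace above: contigmem.ko and nic_uio.ko are copied into /boot/modules and /boot/kernel, and kern.ipc.maxsockbuf is raised from 2097152 to 4194304. A condensed sketch of the same preparation done by hand; the kenv lines are an assumption about how the hw.* tunables printed by setup.sh just below get set before the modules are loaded:

    # install the DPDK kernel modules built earlier in this log
    cp -f dpdk/build/kmod/contigmem.ko dpdk/build/kmod/nic_uio.ko /boot/modules/
    # SPDK wants a socket buffer cap of at least 4 MiB (log shows 2097152 -> 4194304)
    sysctl kern.ipc.maxsockbuf=4194304
    # contigmem sizing and the PCI device handed to nic_uio are kernel environment
    # tunables read at module load time (values as printed by setup.sh below)
    kenv hw.contigmem.num_buffers=8
    kenv hw.contigmem.buffer_size=268435456
    kenv hw.nic_uio.bdfs="0:16:0"
    kldload contigmem
    kldload nic_uio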
14:52:11 -- spdk/autotest.sh@130 -- # uname -s 00:03:45.836 14:52:11 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:03:45.836 14:52:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:45.836 14:52:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.836 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.836 14:52:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:45.836 14:52:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.836 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.836 14:52:11 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.836 kldunload: can't find file nic_uio.ko 00:03:45.836 hw.nic_uio.bdfs="0:16:0" 00:03:45.836 hw.contigmem.num_buffers="8" 00:03:45.836 hw.contigmem.buffer_size="268435456" 00:03:46.402 14:52:12 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:46.402 14:52:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:46.402 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.402 14:52:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:46.402 14:52:12 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:46.402 14:52:12 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:46.402 14:52:12 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:46.402 14:52:12 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:46.402 14:52:12 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:46.402 14:52:12 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:46.402 14:52:12 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:46.402 14:52:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.402 14:52:12 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:46.402 14:52:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:46.402 14:52:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:46.402 14:52:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:03:46.402 14:52:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:46.402 14:52:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:46.402 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:46.402 14:52:12 -- common/autotest_common.sh@1580 -- # device= 00:03:46.402 14:52:12 -- common/autotest_common.sh@1580 -- # true 00:03:46.402 14:52:12 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:46.402 14:52:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:46.402 14:52:12 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:46.402 14:52:12 -- common/autotest_common.sh@1593 -- # return 0 00:03:46.402 14:52:12 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:46.402 14:52:12 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:46.402 14:52:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.402 14:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.402 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.402 ************************************ 00:03:46.402 START TEST unittest 00:03:46.402 ************************************ 00:03:46.402 14:52:12 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:46.402 +++ dirname 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:46.699 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:03:46.699 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:46.699 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:46.699 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:03:46.699 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:46.699 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:46.699 ++ rpc_py=rpc_cmd 00:03:46.699 ++ set -e 00:03:46.699 ++ shopt -s nullglob 00:03:46.699 ++ shopt -s extglob 00:03:46.699 ++ shopt -s inherit_errexit 00:03:46.699 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:46.699 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:46.699 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:46.699 +++ CONFIG_WPDK_DIR= 00:03:46.699 +++ CONFIG_ASAN=n 00:03:46.699 +++ CONFIG_VBDEV_COMPRESS=n 00:03:46.699 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:46.699 +++ CONFIG_USDT=n 00:03:46.699 +++ CONFIG_CUSTOMOCF=n 00:03:46.699 +++ CONFIG_PREFIX=/usr/local 00:03:46.699 +++ CONFIG_RBD=n 00:03:46.699 +++ CONFIG_LIBDIR= 00:03:46.699 +++ CONFIG_IDXD=y 00:03:46.699 +++ CONFIG_NVME_CUSE=n 00:03:46.699 +++ CONFIG_SMA=n 00:03:46.699 +++ CONFIG_VTUNE=n 00:03:46.699 +++ CONFIG_TSAN=n 00:03:46.699 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:46.699 +++ CONFIG_VFIO_USER_DIR= 00:03:46.699 +++ CONFIG_PGO_CAPTURE=n 00:03:46.699 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:46.699 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:46.699 +++ CONFIG_LTO=n 00:03:46.699 +++ CONFIG_ISCSI_INITIATOR=n 00:03:46.699 +++ CONFIG_CET=n 00:03:46.699 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:46.699 +++ CONFIG_OCF_PATH= 00:03:46.699 +++ CONFIG_RDMA_SET_TOS=y 00:03:46.699 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:46.699 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:46.699 +++ CONFIG_UBLK=n 00:03:46.699 +++ CONFIG_ISAL_CRYPTO=y 00:03:46.699 +++ CONFIG_OPENSSL_PATH= 00:03:46.699 +++ CONFIG_OCF=n 00:03:46.699 +++ CONFIG_FUSE=n 00:03:46.699 +++ CONFIG_VTUNE_DIR= 00:03:46.699 +++ CONFIG_FUZZER_LIB= 00:03:46.699 +++ CONFIG_FUZZER=n 00:03:46.699 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:46.699 +++ CONFIG_CRYPTO=n 00:03:46.699 +++ CONFIG_PGO_USE=n 00:03:46.699 +++ CONFIG_VHOST=n 00:03:46.699 +++ CONFIG_DAOS=n 00:03:46.699 +++ CONFIG_DPDK_INC_DIR= 00:03:46.699 +++ CONFIG_DAOS_DIR= 00:03:46.699 +++ CONFIG_UNIT_TESTS=y 00:03:46.699 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:46.699 +++ CONFIG_VIRTIO=n 00:03:46.699 +++ CONFIG_DPDK_UADK=n 00:03:46.699 +++ CONFIG_COVERAGE=n 00:03:46.699 +++ CONFIG_RDMA=y 00:03:46.699 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:46.699 +++ CONFIG_URING_PATH= 00:03:46.699 +++ CONFIG_XNVME=n 00:03:46.699 +++ CONFIG_VFIO_USER=n 00:03:46.699 +++ CONFIG_ARCH=native 00:03:46.699 +++ CONFIG_HAVE_EVP_MAC=y 00:03:46.699 +++ CONFIG_URING_ZNS=n 00:03:46.699 +++ CONFIG_WERROR=y 00:03:46.699 +++ CONFIG_HAVE_LIBBSD=n 00:03:46.699 +++ CONFIG_UBSAN=n 00:03:46.699 +++ CONFIG_IPSEC_MB_DIR= 00:03:46.699 +++ CONFIG_GOLANG=n 00:03:46.699 +++ CONFIG_ISAL=y 00:03:46.699 +++ CONFIG_IDXD_KERNEL=n 00:03:46.699 +++ CONFIG_DPDK_LIB_DIR= 00:03:46.699 +++ CONFIG_RDMA_PROV=verbs 00:03:46.699 +++ CONFIG_APPS=y 00:03:46.699 +++ CONFIG_SHARED=n 00:03:46.699 +++ CONFIG_HAVE_KEYUTILS=n 00:03:46.699 +++ CONFIG_FC_PATH= 00:03:46.699 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:46.699 +++ CONFIG_FC=n 00:03:46.699 +++ CONFIG_AVAHI=n 00:03:46.699 +++ CONFIG_FIO_PLUGIN=y 00:03:46.699 +++ CONFIG_RAID5F=n 
00:03:46.699 +++ CONFIG_EXAMPLES=y 00:03:46.699 +++ CONFIG_TESTS=y 00:03:46.699 +++ CONFIG_CRYPTO_MLX5=n 00:03:46.699 +++ CONFIG_MAX_LCORES=128 00:03:46.699 +++ CONFIG_IPSEC_MB=n 00:03:46.699 +++ CONFIG_PGO_DIR= 00:03:46.699 +++ CONFIG_DEBUG=y 00:03:46.699 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:46.699 +++ CONFIG_CROSS_PREFIX= 00:03:46.699 +++ CONFIG_URING=n 00:03:46.699 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:46.699 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:46.699 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:46.699 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:46.699 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:46.699 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:46.699 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:46.699 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:46.699 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:46.699 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:46.699 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:46.699 +++ VHOST_APP=("$_app_dir/vhost") 00:03:46.699 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:46.699 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:46.699 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:46.699 +++ [[ #ifndef SPDK_CONFIG_H 00:03:46.699 #define SPDK_CONFIG_H 00:03:46.699 #define SPDK_CONFIG_APPS 1 00:03:46.699 #define SPDK_CONFIG_ARCH native 00:03:46.699 #undef SPDK_CONFIG_ASAN 00:03:46.699 #undef SPDK_CONFIG_AVAHI 00:03:46.699 #undef SPDK_CONFIG_CET 00:03:46.699 #undef SPDK_CONFIG_COVERAGE 00:03:46.699 #define SPDK_CONFIG_CROSS_PREFIX 00:03:46.700 #undef SPDK_CONFIG_CRYPTO 00:03:46.700 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:46.700 #undef SPDK_CONFIG_CUSTOMOCF 00:03:46.700 #undef SPDK_CONFIG_DAOS 00:03:46.700 #define SPDK_CONFIG_DAOS_DIR 00:03:46.700 #define SPDK_CONFIG_DEBUG 1 00:03:46.700 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:46.700 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:46.700 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:46.700 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:46.700 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:46.700 #undef SPDK_CONFIG_DPDK_UADK 00:03:46.700 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:46.700 #define SPDK_CONFIG_EXAMPLES 1 00:03:46.700 #undef SPDK_CONFIG_FC 00:03:46.700 #define SPDK_CONFIG_FC_PATH 00:03:46.700 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:46.700 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:46.700 #undef SPDK_CONFIG_FUSE 00:03:46.700 #undef SPDK_CONFIG_FUZZER 00:03:46.700 #define SPDK_CONFIG_FUZZER_LIB 00:03:46.700 #undef SPDK_CONFIG_GOLANG 00:03:46.700 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:46.700 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:46.700 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:46.700 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:46.700 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:46.700 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:46.700 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:46.700 #define SPDK_CONFIG_IDXD 1 00:03:46.700 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:46.700 #undef SPDK_CONFIG_IPSEC_MB 00:03:46.700 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:46.700 #define SPDK_CONFIG_ISAL 1 00:03:46.700 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:46.700 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:46.700 #define SPDK_CONFIG_LIBDIR 00:03:46.700 #undef SPDK_CONFIG_LTO 00:03:46.700 #define SPDK_CONFIG_MAX_LCORES 128 00:03:46.700 #undef SPDK_CONFIG_NVME_CUSE 00:03:46.700 #undef SPDK_CONFIG_OCF 
00:03:46.700 #define SPDK_CONFIG_OCF_PATH 00:03:46.700 #define SPDK_CONFIG_OPENSSL_PATH 00:03:46.700 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:46.700 #define SPDK_CONFIG_PGO_DIR 00:03:46.700 #undef SPDK_CONFIG_PGO_USE 00:03:46.700 #define SPDK_CONFIG_PREFIX /usr/local 00:03:46.700 #undef SPDK_CONFIG_RAID5F 00:03:46.700 #undef SPDK_CONFIG_RBD 00:03:46.700 #define SPDK_CONFIG_RDMA 1 00:03:46.700 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:46.700 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:46.700 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:46.700 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:46.700 #undef SPDK_CONFIG_SHARED 00:03:46.700 #undef SPDK_CONFIG_SMA 00:03:46.700 #define SPDK_CONFIG_TESTS 1 00:03:46.700 #undef SPDK_CONFIG_TSAN 00:03:46.700 #undef SPDK_CONFIG_UBLK 00:03:46.700 #undef SPDK_CONFIG_UBSAN 00:03:46.700 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:46.700 #undef SPDK_CONFIG_URING 00:03:46.700 #define SPDK_CONFIG_URING_PATH 00:03:46.700 #undef SPDK_CONFIG_URING_ZNS 00:03:46.700 #undef SPDK_CONFIG_USDT 00:03:46.700 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:46.700 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:46.700 #undef SPDK_CONFIG_VFIO_USER 00:03:46.700 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:46.700 #undef SPDK_CONFIG_VHOST 00:03:46.700 #undef SPDK_CONFIG_VIRTIO 00:03:46.700 #undef SPDK_CONFIG_VTUNE 00:03:46.700 #define SPDK_CONFIG_VTUNE_DIR 00:03:46.700 #define SPDK_CONFIG_WERROR 1 00:03:46.700 #define SPDK_CONFIG_WPDK_DIR 00:03:46.700 #undef SPDK_CONFIG_XNVME 00:03:46.700 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:46.700 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:46.700 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:46.700 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:46.700 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.700 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.700 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:46.700 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:46.700 ++++ export PATH 00:03:46.700 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:46.700 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:46.700 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:46.700 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:46.700 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:46.700 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:46.700 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:46.700 +++ TEST_TAG=N/A 00:03:46.700 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:46.700 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:46.700 ++++ uname -s 00:03:46.700 +++ PM_OS=FreeBSD 00:03:46.700 +++ MONITOR_RESOURCES_SUDO=() 00:03:46.700 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:46.700 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:46.700 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:46.700 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:46.700 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:46.700 +++ SUDO[0]= 00:03:46.700 +++ SUDO[1]='sudo -E' 00:03:46.700 +++ 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:46.700 +++ [[ FreeBSD == FreeBSD ]] 00:03:46.700 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:46.700 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:46.700 ++ : 0 00:03:46.700 ++ export RUN_NIGHTLY 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_RUN_VALGRIND 00:03:46.700 ++ : 1 00:03:46.700 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:46.700 ++ : 1 00:03:46.700 ++ export SPDK_TEST_UNITTEST 00:03:46.700 ++ : 00:03:46.700 ++ export SPDK_TEST_AUTOBUILD 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_RELEASE_BUILD 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_ISAL 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_ISCSI 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:46.700 ++ : 1 00:03:46.700 ++ export SPDK_TEST_NVME 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVME_PMR 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVME_BP 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVME_CLI 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVME_CUSE 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVME_FDP 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_NVMF 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_VFIOUSER 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_FUZZER 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_FUZZER_SHORT 00:03:46.700 ++ : rdma 00:03:46.700 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_RBD 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_VHOST 00:03:46.700 ++ : 1 00:03:46.700 ++ export SPDK_TEST_BLOCKDEV 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_IOAT 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_BLOBFS 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_VHOST_INIT 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_LVOL 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_RUN_ASAN 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_RUN_UBSAN 00:03:46.700 ++ : 00:03:46.700 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_RUN_NON_ROOT 00:03:46.700 ++ : 0 00:03:46.700 ++ export SPDK_TEST_CRYPTO 00:03:46.700 ++ : 0 00:03:46.701 ++ export SPDK_TEST_FTL 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_OCF 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_VMD 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_OPAL 00:03:46.701 ++ : 00:03:46.701 ++ export SPDK_TEST_NATIVE_DPDK 00:03:46.701 ++ : true 00:03:46.701 ++ export SPDK_AUTOTEST_X 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_RAID5 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_URING 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_USDT 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_USE_IGB_UIO 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_SCHEDULER 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_SCANBUILD 00:03:46.701 ++ : 00:03:46.701 ++ export SPDK_TEST_NVMF_NICS 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_SMA 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_DAOS 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_XNVME 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_ACCEL_DSA 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_TEST_ACCEL_IAA 00:03:46.701 ++ : 00:03:46.701 ++ export SPDK_TEST_FUZZER_TARGET 00:03:46.701 ++ : 0 
00:03:46.701 ++ export SPDK_TEST_NVMF_MDNS 00:03:46.701 ++ : 0 00:03:46.701 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:46.701 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:46.701 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:46.701 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:46.701 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:46.701 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:46.701 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:46.701 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:46.701 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:46.701 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:46.701 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:46.701 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:46.701 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:46.701 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:46.701 ++ PYTHONDONTWRITEBYTECODE=1 00:03:46.701 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:46.701 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:46.701 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:46.701 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:46.701 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:46.701 ++ rm -rf /var/tmp/asan_suppression_file 00:03:46.701 ++ cat 00:03:46.701 ++ echo leak:libfuse3.so 00:03:46.701 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:46.701 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:46.701 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:46.701 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:46.701 ++ '[' -z /var/spdk/dependencies ']' 00:03:46.701 ++ export DEPENDENCY_DIR 00:03:46.701 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:46.701 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:46.701 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:46.701 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:46.701 ++ export QEMU_BIN= 00:03:46.701 ++ QEMU_BIN= 00:03:46.701 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:46.701 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:46.701 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:46.701 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:46.701 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:46.701 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:46.701 ++ '[' 0 -eq 0 ']' 
00:03:46.701 ++ export valgrind= 00:03:46.701 ++ valgrind= 00:03:46.701 +++ uname -s 00:03:46.701 ++ '[' FreeBSD = Linux ']' 00:03:46.701 +++ uname -s 00:03:46.701 ++ '[' FreeBSD = FreeBSD ']' 00:03:46.701 ++ MAKE=gmake 00:03:46.701 +++ sysctl -a 00:03:46.701 +++ grep -E -i hw.ncpu 00:03:46.701 +++ awk '{print $2}' 00:03:46.701 ++ MAKEFLAGS=-j10 00:03:46.701 ++ HUGEMEM=2048 00:03:46.701 ++ export HUGEMEM=2048 00:03:46.701 ++ HUGEMEM=2048 00:03:46.701 ++ NO_HUGE=() 00:03:46.701 ++ TEST_MODE= 00:03:46.701 ++ [[ -z '' ]] 00:03:46.701 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:46.701 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:46.701 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:46.701 ++ exec 00:03:46.701 ++ set_test_storage 2147483648 00:03:46.701 ++ [[ -v testdir ]] 00:03:46.701 ++ local requested_size=2147483648 00:03:46.701 ++ local mount target_dir 00:03:46.701 ++ local -A mounts fss sizes avails uses 00:03:46.701 ++ local source fs size avail mount use 00:03:46.701 ++ local storage_fallback storage_candidates 00:03:46.701 +++ mktemp -udt spdk.XXXXXX 00:03:46.701 ++ storage_fallback=/tmp/spdk.XXXXXX.EE94hwSRbr 00:03:46.701 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:46.701 ++ [[ -n '' ]] 00:03:46.701 ++ [[ -n '' ]] 00:03:46.701 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.EE94hwSRbr/tests/unit /tmp/spdk.XXXXXX.EE94hwSRbr 00:03:46.701 ++ requested_size=2214592512 00:03:46.701 ++ read -r source fs size use avail _ mount 00:03:46.701 +++ df -T 00:03:46.701 +++ grep -v Filesystem 00:03:46.701 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:46.701 ++ fss["$mount"]=ufs 00:03:46.701 ++ avails["$mount"]=17235755008 00:03:46.701 ++ sizes["$mount"]=31182712832 00:03:46.701 ++ uses["$mount"]=11452342272 00:03:46.701 ++ read -r source fs size use avail _ mount 00:03:46.701 ++ mounts["$mount"]=devfs 00:03:46.701 ++ fss["$mount"]=devfs 00:03:46.701 ++ avails["$mount"]=1024 00:03:46.701 ++ sizes["$mount"]=1024 00:03:46.701 ++ uses["$mount"]=0 00:03:46.701 ++ read -r source fs size use avail _ mount 00:03:46.701 ++ mounts["$mount"]=tmpfs 00:03:46.701 ++ fss["$mount"]=tmpfs 00:03:46.701 ++ avails["$mount"]=2147442688 00:03:46.701 ++ sizes["$mount"]=2147483648 00:03:46.701 ++ uses["$mount"]=40960 00:03:46.701 ++ read -r source fs size use avail _ mount 00:03:46.701 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output 00:03:46.701 ++ fss["$mount"]=fusefs.sshfs 00:03:46.701 ++ avails["$mount"]=93565288448 00:03:46.701 ++ sizes["$mount"]=105088212992 00:03:46.701 ++ uses["$mount"]=6137491456 00:03:46.701 ++ read -r source fs size use avail _ mount 00:03:46.701 ++ printf '* Looking for test storage...\n' 00:03:46.701 * Looking for test storage... 
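The platform setup traced in this stretch picks gmake and sizes the parallel build from hw.ncpu before the test-storage scan continues below; a condensed sketch of those assignments, substituting sysctl -n for the sysctl -a | grep | awk pipeline used in the trace:

    # FreeBSD build knobs as established above: gmake, -j<ncpu>, 2048 MB HUGEMEM.
    MAKE=gmake
    MAKEFLAGS="-j$(sysctl -n hw.ncpu)"   # the trace resolved this to -j10
    export HUGEMEM=2048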
00:03:46.701 ++ local target_space new_size 00:03:46.701 ++ for target_dir in "${storage_candidates[@]}" 00:03:46.701 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:46.701 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:46.701 ++ mount=/ 00:03:46.701 ++ target_space=17235755008 00:03:46.701 ++ (( target_space == 0 || target_space < requested_size )) 00:03:46.701 ++ (( target_space >= requested_size )) 00:03:46.702 ++ [[ ufs == tmpfs ]] 00:03:46.702 ++ [[ ufs == ramfs ]] 00:03:46.702 ++ [[ / == / ]] 00:03:46.702 ++ new_size=13666934784 00:03:46.702 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:46.702 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:46.702 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:46.702 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:46.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:46.702 ++ return 0 00:03:46.702 ++ set -o errtrace 00:03:46.702 ++ shopt -s extdebug 00:03:46.702 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:46.702 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1687 -- # true 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@29 -- # exec 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:46.702 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:46.702 ************************************ 00:03:46.702 START TEST unittest_pci_event 00:03:46.702 ************************************ 00:03:46.702 14:52:12 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:46.702 00:03:46.702 
00:03:46.702 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.702 http://cunit.sourceforge.net/ 00:03:46.702 00:03:46.702 00:03:46.702 Suite: pci_event 00:03:46.702 Test: test_pci_parse_event ...passed 00:03:46.702 00:03:46.702 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.702 suites 1 1 n/a 0 0 00:03:46.702 tests 1 1 1 0 0 00:03:46.702 asserts 1 1 1 0 n/a 00:03:46.702 00:03:46.702 Elapsed time = 0.000 seconds 00:03:46.702 00:03:46.702 real 0m0.024s 00:03:46.702 user 0m0.013s 00:03:46.702 sys 0m0.000s 00:03:46.702 14:52:12 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.702 14:52:12 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:46.702 ************************************ 00:03:46.702 END TEST unittest_pci_event 00:03:46.702 ************************************ 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:46.702 ************************************ 00:03:46.702 START TEST unittest_include 00:03:46.702 ************************************ 00:03:46.702 14:52:12 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:46.702 00:03:46.702 00:03:46.702 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.702 http://cunit.sourceforge.net/ 00:03:46.702 00:03:46.702 00:03:46.702 Suite: histogram 00:03:46.702 Test: histogram_test ...passed 00:03:46.702 Test: histogram_merge ...passed 00:03:46.702 00:03:46.702 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.702 suites 1 1 n/a 0 0 00:03:46.702 tests 2 2 2 0 0 00:03:46.702 asserts 50 50 50 0 n/a 00:03:46.702 00:03:46.702 Elapsed time = 0.000 seconds 00:03:46.702 00:03:46.702 real 0m0.007s 00:03:46.702 user 0m0.000s 00:03:46.702 sys 0m0.007s 00:03:46.702 14:52:12 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.702 14:52:12 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:46.702 ************************************ 00:03:46.702 END TEST unittest_include 00:03:46.702 ************************************ 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:46.702 14:52:12 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.702 14:52:12 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:46.702 ************************************ 00:03:46.702 START TEST unittest_bdev 00:03:46.702 ************************************ 00:03:46.702 14:52:12 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:03:46.702 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:46.702 00:03:46.702 00:03:46.702 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.702 http://cunit.sourceforge.net/ 
00:03:46.702 00:03:46.702 00:03:46.702 Suite: bdev 00:03:46.702 Test: bytes_to_blocks_test ...passed 00:03:46.702 Test: num_blocks_test ...passed 00:03:46.702 Test: io_valid_test ...passed 00:03:46.702 Test: open_write_test ...[2024-07-12 14:52:12.493017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:46.702 [2024-07-12 14:52:12.493199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:46.702 [2024-07-12 14:52:12.493223] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:46.702 passed 00:03:46.702 Test: claim_test ...passed 00:03:46.702 Test: alias_add_del_test ...[2024-07-12 14:52:12.496622] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:46.702 [2024-07-12 14:52:12.496658] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:46.702 [2024-07-12 14:52:12.496673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:46.702 passed 00:03:46.702 Test: get_device_stat_test ...passed 00:03:46.703 Test: bdev_io_types_test ...passed 00:03:46.703 Test: bdev_io_wait_test ...passed 00:03:46.703 Test: bdev_io_spans_split_test ...passed 00:03:46.703 Test: bdev_io_boundary_split_test ...passed 00:03:46.703 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-12 14:52:12.503577] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:46.703 passed 00:03:46.703 Test: bdev_io_mix_split_test ...passed 00:03:46.703 Test: bdev_io_split_with_io_wait ...passed 00:03:46.703 Test: bdev_io_write_unit_split_test ...[2024-07-12 14:52:12.508357] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:46.703 [2024-07-12 14:52:12.508396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:46.703 [2024-07-12 14:52:12.508412] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:46.703 [2024-07-12 14:52:12.508429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:46.703 passed 00:03:46.963 Test: bdev_io_alignment_with_boundary ...passed 00:03:46.963 Test: bdev_io_alignment ...passed 00:03:46.963 Test: bdev_histograms ...passed 00:03:46.963 Test: bdev_write_zeroes ...passed 00:03:46.963 Test: bdev_compare_and_write ...passed 00:03:46.963 Test: bdev_compare ...passed 00:03:46.963 Test: bdev_compare_emulated ...passed 00:03:46.963 Test: bdev_zcopy_write ...passed 00:03:46.963 Test: bdev_zcopy_read ...passed 00:03:46.963 Test: bdev_open_while_hotremove ...passed 00:03:46.963 Test: bdev_close_while_hotremove ...passed 00:03:46.963 Test: bdev_open_ext_test ...passed 00:03:46.963 Test: bdev_open_ext_unregister ...passed 00:03:46.963 Test: bdev_set_io_timeout ...[2024-07-12 14:52:12.525429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:46.963 [2024-07-12 14:52:12.525485] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:46.963 passed 00:03:46.963 Test: bdev_set_qd_sampling ...passed 00:03:46.963 Test: lba_range_overlap ...passed 00:03:46.963 Test: lock_lba_range_check_ranges ...passed 00:03:46.963 Test: lock_lba_range_with_io_outstanding ...passed 00:03:46.963 Test: lock_lba_range_overlapped ...passed 00:03:46.963 Test: bdev_quiesce ...[2024-07-12 14:52:12.534070] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10117:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:46.963 passed 00:03:46.963 Test: bdev_io_abort ...passed 00:03:46.963 Test: bdev_unmap ...passed 00:03:46.963 Test: bdev_write_zeroes_split_test ...passed 00:03:46.963 Test: bdev_set_options_test ...passed 00:03:46.963 Test: bdev_get_memory_domains ...passed 00:03:46.963 Test: bdev_io_ext ...[2024-07-12 14:52:12.539273] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:46.963 passed 00:03:46.963 Test: bdev_io_ext_no_opts ...passed 00:03:46.963 Test: bdev_io_ext_invalid_opts ...passed 00:03:46.963 Test: bdev_io_ext_split ...passed 00:03:46.963 Test: bdev_io_ext_bounce_buffer ...passed 00:03:46.963 Test: bdev_register_uuid_alias ...[2024-07-12 14:52:12.547326] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 5268b9a7-405e-11ef-b2a4-e9dca065e82e already exists 00:03:46.963 [2024-07-12 14:52:12.547364] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:5268b9a7-405e-11ef-b2a4-e9dca065e82e alias for bdev bdev0 00:03:46.963 passed 00:03:46.963 Test: bdev_unregister_by_name ...[2024-07-12 14:52:12.547743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:46.963 [2024-07-12 14:52:12.547766] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7983:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
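Every suite in this run is a self-contained CUnit binary, so a single suite such as the bdev one above can be repeated without going back through unittest.sh; a sketch using the path from this run's trace:

    # Re-run only the bdev suite from this log (path as built in this workspace).
    /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut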
00:03:46.963 passed 00:03:46.963 Test: for_each_bdev_test ...passed 00:03:46.963 Test: bdev_seek_test ...passed 00:03:46.963 Test: bdev_copy ...passed 00:03:46.963 Test: bdev_copy_split_test ...passed 00:03:46.963 Test: examine_locks ...passed 00:03:46.963 Test: claim_v2_rwo ...passed 00:03:46.963 Test: claim_v2_rom ...passed 00:03:46.963 Test: claim_v2_rwm ...[2024-07-12 14:52:12.552071] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8718:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552109] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552119] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8714:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:46.963 [2024-07-12 14:52:12.552167] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552177] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8756:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:46.963 [2024-07-12 14:52:12.552217] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8752:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:46.963 [2024-07-12 14:52:12.552255] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8787:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:46.963 passed 00:03:46.963 Test: claim_v2_existing_writer ...passed 00:03:46.963 Test: claim_v2_existing_v1 ...[2024-07-12 14:52:12.552278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552309] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552325] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by 
module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552339] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552354] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8806:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552372] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8787:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:46.963 [2024-07-12 14:52:12.552399] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8752:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:46.963 [2024-07-12 14:52:12.552422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8752:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:46.963 [2024-07-12 14:52:12.552447] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:46.963 passed 00:03:46.963 Test: claim_v1_existing_v2 ...passed 00:03:46.963 Test: examine_claimed ...passed 00:03:46.963 00:03:46.963 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.963 suites 1 1 n/a 0 0 00:03:46.963 tests 59 59 59 0 0 00:03:46.963 asserts 4599 4599 4599 0 n/a 00:03:46.963 00:03:46.963 Elapsed time = 0.062 seconds 00:03:46.963 [2024-07-12 14:52:12.552480] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552519] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552534] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8555:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:46.963 [2024-07-12 14:52:12.552612] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8883:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:46.963 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:46.963 00:03:46.963 00:03:46.963 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.963 http://cunit.sourceforge.net/ 00:03:46.963 00:03:46.963 00:03:46.963 Suite: nvme 00:03:46.963 Test: test_create_ctrlr ...passed 00:03:46.963 Test: test_reset_ctrlr ...passed 00:03:46.963 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:46.963 Test: test_failover_ctrlr ...passed 00:03:46.963 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-12 14:52:12.561627] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:46.963 passed 00:03:46.963 Test: test_pending_reset ...[2024-07-12 14:52:12.561931] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.561957] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.561974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 passed 00:03:46.963 Test: test_attach_ctrlr ...passed 00:03:46.963 Test: test_aer_cb ...passed 00:03:46.963 Test: test_submit_nvme_cmd ...passed 00:03:46.963 Test: test_add_remove_trid ...passed 00:03:46.963 Test: test_abort ...passed 00:03:46.963 Test: test_get_io_qpair ...[2024-07-12 14:52:12.562098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.562130] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.562192] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4325:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:46.963 [2024-07-12 14:52:12.562404] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7460:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:46.963 passed 00:03:46.963 Test: test_bdev_unregister ...passed 00:03:46.963 Test: test_compare_ns ...passed 00:03:46.963 Test: test_init_ana_log_page ...passed 00:03:46.963 Test: test_get_memory_domains ...passed 00:03:46.963 Test: test_reconnect_qpair ...[2024-07-12 14:52:12.562607] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 passed 00:03:46.963 Test: test_create_bdev_ctrlr ...passed 00:03:46.963 Test: test_add_multi_ns_to_bdev ...passed 00:03:46.963 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:46.963 Test: test_admin_path ...passed 00:03:46.963 Test: test_reset_bdev_ctrlr ...[2024-07-12 14:52:12.562653] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5390:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:46.963 [2024-07-12 14:52:12.562763] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4581:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:46.963 passed 00:03:46.963 Test: test_find_io_path ...passed 00:03:46.963 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:46.963 Test: test_retry_io_for_io_path_error ...passed 00:03:46.963 Test: test_retry_io_count ...passed 00:03:46.963 Test: test_concurrent_read_ana_log_page ...passed 00:03:46.963 Test: test_retry_io_for_ana_error ...passed 00:03:46.963 Test: test_check_io_error_resiliency_params ...passed 00:03:46.963 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:46.963 Test: test_reconnect_ctrlr ...[2024-07-12 14:52:12.563262] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:03:46.963 [2024-07-12 14:52:12.563279] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6088:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 
00:03:46.963 [2024-07-12 14:52:12.563289] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6097:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:46.963 [2024-07-12 14:52:12.563298] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6100:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:46.963 [2024-07-12 14:52:12.563306] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:46.963 [2024-07-12 14:52:12.563315] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:46.963 [2024-07-12 14:52:12.563330] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:46.963 [2024-07-12 14:52:12.563339] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6107:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:46.963 [2024-07-12 14:52:12.563347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:46.963 passed 00:03:46.963 Test: test_retry_failover_ctrlr ...passed 00:03:46.963 Test: test_fail_path ...[2024-07-12 14:52:12.563414] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563451] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563484] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563499] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563513] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563550] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563599] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563616] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
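The resiliency-parameter rules asserted above (ctrlr_loss_timeout_sec >= -1, a non-zero reconnect_delay_sec whenever ctrlr_loss_timeout_sec is non-zero, and both reconnect_delay_sec and fast_io_fail_timeout_sec bounded by ctrlr_loss_timeout_sec) correspond to bdev_nvme_set_options; a hedged sketch of one combination that satisfies them, with the long-option spellings assumed rather than taken from this log:

    # One valid combination per the checks above; flag names are assumed,
    # confirm with `scripts/rpc.py bdev_nvme_set_options -h` before relying on them.
    scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 60 \
        --reconnect-delay-sec 5 \
        --fast-io-fail-timeout-sec 30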
00:03:46.963 passed 00:03:46.963 Test: test_nvme_ns_cmp ...passed 00:03:46.963 Test: test_ana_transition ...passed 00:03:46.963 Test: test_set_preferred_path ...passed 00:03:46.963 Test: test_find_next_io_path ...passed 00:03:46.963 Test: test_find_io_path_min_qd ...passed 00:03:46.963 Test: test_disable_auto_failback ...passed 00:03:46.963 Test: test_set_multipath_policy ...passed 00:03:46.963 Test: test_uuid_generation ...[2024-07-12 14:52:12.563631] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563645] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563658] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 [2024-07-12 14:52:12.563789] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.963 passed 00:03:46.963 Test: test_retry_io_to_same_path ...passed 00:03:46.964 Test: test_race_between_reset_and_disconnected ...passed 00:03:46.964 Test: test_ctrlr_op_rpc ...passed 00:03:46.964 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:46.964 Test: test_disable_enable_ctrlr ...[2024-07-12 14:52:12.594316] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.964 [2024-07-12 14:52:12.594353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:46.964 passed 00:03:46.964 Test: test_delete_ctrlr_done ...passed 00:03:46.964 Test: test_ns_remove_during_reset ...passed 00:03:46.964 Test: test_io_path_is_current ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 49 49 49 0 0 00:03:46.964 asserts 3577 3577 3577 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.016 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 Test Options 00:03:46.964 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:46.964 00:03:46.964 Suite: raid 00:03:46.964 Test: test_create_raid ...passed 00:03:46.964 Test: test_create_raid_superblock ...passed 00:03:46.964 Test: test_delete_raid ...passed 00:03:46.964 Test: test_create_raid_invalid_args ...[2024-07-12 14:52:12.602641] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:46.964 [2024-07-12 14:52:12.602934] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:46.964 [2024-07-12 14:52:12.603089] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:46.964 [2024-07-12 14:52:12.603128] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:46.964 [2024-07-12 
14:52:12.603150] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:46.964 [2024-07-12 14:52:12.603362] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:46.964 [2024-07-12 14:52:12.603411] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:46.964 passed 00:03:46.964 Test: test_delete_raid_invalid_args ...passed 00:03:46.964 Test: test_io_channel ...passed 00:03:46.964 Test: test_reset_io ...passed 00:03:46.964 Test: test_multi_raid ...passed 00:03:46.964 Test: test_io_type_supported ...passed 00:03:46.964 Test: test_raid_json_dump_info ...passed 00:03:46.964 Test: test_context_size ...passed 00:03:46.964 Test: test_raid_level_conversions ...passed 00:03:46.964 Test: test_raid_io_split ...passed 00:03:46.964 Test: test_raid_process ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 14 14 14 0 0 00:03:46.964 asserts 6183 6183 6183 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: raid_sb 00:03:46.964 Test: test_raid_bdev_write_superblock ...passed 00:03:46.964 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:46.964 Test: test_raid_bdev_parse_superblock ...passed 00:03:46.964 Suite: raid_sb_md 00:03:46.964 Test: test_raid_bdev_write_superblock ...passed 00:03:46.964 Test: test_raid_bdev_load_base_bdev_superblock ...[2024-07-12 14:52:12.612022] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:46.964 passed 00:03:46.964 Test: test_raid_bdev_parse_superblock ...passed[2024-07-12 14:52:12.612333] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:46.964 00:03:46.964 Suite: raid_sb_md_interleaved 00:03:46.964 Test: test_raid_bdev_write_superblock ...passed 00:03:46.964 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:46.964 Test: test_raid_bdev_parse_superblock ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 3 3 n/a 0 0 00:03:46.964 tests 9 9 9 0 0 00:03:46.964 asserts 139 139 139 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 [2024-07-12 14:52:12.612461] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: concat 00:03:46.964 Test: test_concat_start ...passed 00:03:46.964 Test: 
test_concat_rw ...passed 00:03:46.964 Test: test_concat_null_payload ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 3 3 3 0 0 00:03:46.964 asserts 8460 8460 8460 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: raid0 00:03:46.964 Test: test_write_io ...passed 00:03:46.964 Test: test_read_io ...passed 00:03:46.964 Test: test_unmap_io ...passed 00:03:46.964 Test: test_io_failure ...passed 00:03:46.964 Suite: raid0_dif 00:03:46.964 Test: test_write_io ...passed 00:03:46.964 Test: test_read_io ...passed 00:03:46.964 Test: test_unmap_io ...passed 00:03:46.964 Test: test_io_failure ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 2 2 n/a 0 0 00:03:46.964 tests 8 8 8 0 0 00:03:46.964 asserts 368291 368291 368291 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.008 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: raid1 00:03:46.964 Test: test_raid1_start ...passed 00:03:46.964 Test: test_raid1_read_balancing ...passed 00:03:46.964 Test: test_raid1_write_error ...passed 00:03:46.964 Test: test_raid1_read_error ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 4 4 4 0 0 00:03:46.964 asserts 4374 4374 4374 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: zone 00:03:46.964 Test: test_zone_get_operation ...passed 00:03:46.964 Test: test_bdev_zone_get_info ...passed 00:03:46.964 Test: test_bdev_zone_management ...passed 00:03:46.964 Test: test_bdev_zone_append ...passed 00:03:46.964 Test: test_bdev_zone_append_with_md ...passed 00:03:46.964 Test: test_bdev_zone_appendv ...passed 00:03:46.964 Test: test_bdev_zone_appendv_with_md ...passed 00:03:46.964 Test: test_bdev_io_get_append_location ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 8 8 8 0 0 00:03:46.964 asserts 94 94 94 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: gpt_parse 00:03:46.964 Test: test_parse_mbr_and_primary ...[2024-07-12 14:52:12.656550] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related 
buffer should not be NULL 00:03:46.964 [2024-07-12 14:52:12.656799] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:46.964 [2024-07-12 14:52:12.656840] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:46.964 [2024-07-12 14:52:12.656857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:46.964 [2024-07-12 14:52:12.656876] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:46.964 [2024-07-12 14:52:12.656892] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:46.964 passed 00:03:46.964 Test: test_parse_secondary ...passed 00:03:46.964 Test: test_check_mbr ...[2024-07-12 14:52:12.657125] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:46.964 [2024-07-12 14:52:12.657141] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:46.964 [2024-07-12 14:52:12.657159] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:46.964 [2024-07-12 14:52:12.657174] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:46.964 [2024-07-12 14:52:12.657400] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:46.964 [2024-07-12 14:52:12.657416] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:46.964 passed 00:03:46.964 Test: test_read_header ...passed 00:03:46.964 Test: test_read_partitions ...[2024-07-12 14:52:12.657441] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:46.964 [2024-07-12 14:52:12.657459] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:46.964 [2024-07-12 14:52:12.657476] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:46.964 [2024-07-12 14:52:12.657493] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:46.964 [2024-07-12 14:52:12.657511] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:46.964 [2024-07-12 14:52:12.657525] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:46.964 [2024-07-12 14:52:12.657548] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:46.964 [2024-07-12 14:52:12.657564] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:46.964 [2024-07-12 14:52:12.657579] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:46.964 passed 00:03:46.964 00:03:46.964 
[2024-07-12 14:52:12.657594] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:46.964 [2024-07-12 14:52:12.657712] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 5 5 5 0 0 00:03:46.964 asserts 33 33 33 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: bdev_part 00:03:46.964 Test: part_test ...[2024-07-12 14:52:12.663865] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4edf0eb3-5eef-8c55-93c0-28c23c496b72 already exists 00:03:46.964 passed 00:03:46.964 Test: part_free_test ...passed 00:03:46.964 Test: part_get_io_channel_test ...[2024-07-12 14:52:12.664010] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4edf0eb3-5eef-8c55-93c0-28c23c496b72 alias for bdev test1 00:03:46.964 passed 00:03:46.964 Test: part_construct_ext ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 4 4 4 0 0 00:03:46.964 asserts 48 48 48 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: scsi_nvme_suite 00:03:46.964 Test: scsi_nvme_translate_test ...passed 00:03:46.964 00:03:46.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.964 suites 1 1 n/a 0 0 00:03:46.964 tests 1 1 1 0 0 00:03:46.964 asserts 104 104 104 0 n/a 00:03:46.964 00:03:46.964 Elapsed time = 0.000 seconds 00:03:46.964 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:46.964 00:03:46.964 00:03:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.964 http://cunit.sourceforge.net/ 00:03:46.964 00:03:46.964 00:03:46.964 Suite: lvol 00:03:46.964 Test: ut_lvs_init ...passed 00:03:46.964 Test: ut_lvol_init ...passed 00:03:46.964 Test: ut_lvol_snapshot ...passed 00:03:46.964 Test: ut_lvol_clone ...passed 00:03:46.964 Test: ut_lvs_destroy ...passed 00:03:46.964 Test: ut_lvs_unload ...passed 00:03:46.964 Test: ut_lvol_resize ...passed 00:03:46.964 Test: ut_lvol_set_read_only ...passed 00:03:46.964 Test: ut_lvol_hotremove ...passed 00:03:46.964 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:46.964 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:46.964 Test: ut_lvol_read_write ...passed 00:03:46.964 Test: ut_vbdev_lvol_submit_request ...passed 00:03:46.964 Test: ut_lvol_examine_config ...passed 00:03:46.964 Test: ut_lvol_examine_disk ...passed 00:03:46.964 Test: ut_lvol_rename ...passed 00:03:46.964 Test: ut_bdev_finish ...passed 00:03:46.964 Test: ut_lvs_rename ...passed 00:03:46.964 Test: ut_lvol_seek ...passed 00:03:46.964 Test: 
ut_esnap_dev_create ...passed 00:03:46.964 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:46.964 Test: ut_lvol_shallow_copy ...passed 00:03:46.964 Test: ut_lvol_set_external_parent ...passed 00:03:46.965 00:03:46.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.965 suites 1 1 n/a 0 0 00:03:46.965 tests 23 23 23 0 0 00:03:46.965 asserts 770 770 770 0 n/a 00:03:46.965 00:03:46.965 Elapsed time = 0.000 seconds 00:03:46.965 [2024-07-12 14:52:12.675412] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:46.965 [2024-07-12 14:52:12.675603] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:46.965 [2024-07-12 14:52:12.675693] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:46.965 [2024-07-12 14:52:12.675763] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:46.965 [2024-07-12 14:52:12.675804] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:46.965 [2024-07-12 14:52:12.675816] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:46.965 [2024-07-12 14:52:12.675854] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:46.965 [2024-07-12 14:52:12.675865] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:46.965 [2024-07-12 14:52:12.675876] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:46.965 [2024-07-12 14:52:12.675905] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:46.965 [2024-07-12 14:52:12.675917] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:46.965 [2024-07-12 14:52:12.675942] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:46.965 [2024-07-12 14:52:12.675952] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:46.965 [2024-07-12 14:52:12.675969] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:46.965 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:46.965 00:03:46.965 00:03:46.965 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.965 http://cunit.sourceforge.net/ 00:03:46.965 00:03:46.965 00:03:46.965 Suite: zone_block 00:03:46.965 Test: test_zone_block_create ...passed 00:03:46.965 Test: test_zone_block_create_invalid ...[2024-07-12 14:52:12.687547] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 
already claimed 00:03:46.965 [2024-07-12 14:52:12.687773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 14:52:12.687798] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:46.965 [2024-07-12 14:52:12.687812] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File existspassed 00:03:46.965 Test: test_get_zone_info ...[2024-07-12 14:52:12.687829] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:46.965 [2024-07-12 14:52:12.687842] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-12 14:52:12.687854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:46.965 [2024-07-12 14:52:12.687866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-12 14:52:12.687955] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_supported_io_types ...passed 00:03:46.965 Test: test_reset_zone ...[2024-07-12 14:52:12.687979] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.687995] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.688070] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.688088] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_open_zone ...[2024-07-12 14:52:12.688141] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_zone_write ...[2024-07-12 14:52:12.688486] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.688504] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.688557] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:46.965 [2024-07-12 14:52:12.688571] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:46.965 [2024-07-12 14:52:12.688588] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:46.965 [2024-07-12 14:52:12.688600] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.689342] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:46.965 [2024-07-12 14:52:12.689369] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.689386] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:46.965 [2024-07-12 14:52:12.689398] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690261] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:46.965 [2024-07-12 14:52:12.690282] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_zone_read ...passed 00:03:46.965 Test: test_close_zone ...[2024-07-12 14:52:12.690330] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:46.965 [2024-07-12 14:52:12.690344] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690361] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:46.965 [2024-07-12 14:52:12.690373] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690439] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:46.965 [2024-07-12 14:52:12.690451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690489] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_finish_zone ...[2024-07-12 14:52:12.690568] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690584] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:46.965 [2024-07-12 14:52:12.690665] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 Test: test_append_zone ...[2024-07-12 14:52:12.690683] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690738] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:46.965 [2024-07-12 14:52:12.690753] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.690769] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:46.965 [2024-07-12 14:52:12.690781] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 [2024-07-12 14:52:12.692440] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:46.965 [2024-07-12 14:52:12.692465] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:46.965 passed 00:03:46.965 00:03:46.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.965 suites 1 1 n/a 0 0 00:03:46.965 tests 11 11 11 0 0 00:03:46.965 asserts 3437 3437 3437 0 n/a 00:03:46.965 00:03:46.965 Elapsed time = 0.000 seconds 00:03:46.965 14:52:12 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:46.965 00:03:46.965 00:03:46.965 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.965 http://cunit.sourceforge.net/ 00:03:46.965 00:03:46.965 00:03:46.965 Suite: bdev 00:03:46.965 Test: basic ...[2024-07-12 14:52:12.700176] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b289): Operation not permitted (rc=-1) 00:03:46.965 [2024-07-12 14:52:12.700364] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x1aba9746a480 (0x24b280): Operation not permitted (rc=-1) 00:03:46.965 [2024-07-12 14:52:12.700380] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b289): Operation not permitted (rc=-1) 00:03:46.965 passed 00:03:46.965 Test: unregister_and_close ...passed 00:03:46.965 Test: unregister_and_close_different_threads ...passed 00:03:46.965 Test: basic_qos ...passed 00:03:46.965 Test: put_channel_during_reset ...passed 00:03:46.965 Test: aborted_reset ...passed 00:03:46.965 Test: aborted_reset_no_outstanding_io ...passed 00:03:46.965 Test: io_during_reset ...passed 00:03:46.965 Test: reset_completions ...passed 00:03:46.965 Test: io_during_qos_queue ...passed 00:03:46.965 Test: io_during_qos_reset ...passed 00:03:46.965 Test: enomem ...passed 00:03:46.965 Test: enomem_multi_bdev ...passed 00:03:46.965 Test: enomem_multi_bdev_unregister ...passed 00:03:46.965 Test: enomem_multi_io_target ...passed 00:03:46.965 Test: qos_dynamic_enable ...passed 00:03:46.965 Test: bdev_histograms_mt ...passed 00:03:46.965 Test: 
bdev_set_io_timeout_mt ...[2024-07-12 14:52:12.732211] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x1aba9746a600 not unregistered 00:03:46.965 passed 00:03:46.965 Test: lock_lba_range_then_submit_io ...[2024-07-12 14:52:12.733245] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b268 already registered (old:0x1aba9746a600 new:0x1aba9746a780) 00:03:46.965 passed 00:03:46.965 Test: unregister_during_reset ...passed 00:03:46.965 Test: event_notify_and_close ...passed 00:03:46.965 Test: unregister_and_qos_poller ...passed 00:03:46.965 Suite: bdev_wrong_thread 00:03:46.965 Test: spdk_bdev_register_wt ...passed 00:03:46.965 Test: spdk_bdev_examine_wt ...passed 00:03:46.965 00:03:46.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.965 suites 2 2 n/a 0 0 00:03:46.965 tests 24 24 24 0 0 00:03:46.965 asserts 621 621 621 0 n/a 00:03:46.965 00:03:46.965 Elapsed time = 0.039 seconds 00:03:46.965 [2024-07-12 14:52:12.739081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8513:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x1aba97433380 (0x1aba97433380) 00:03:46.965 [2024-07-12 14:52:12.739128] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x1aba97433380 (0x1aba97433380) 00:03:46.965 00:03:46.965 real 0m0.256s 00:03:46.965 user 0m0.160s 00:03:46.965 sys 0m0.083s 00:03:46.965 14:52:12 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.965 ************************************ 00:03:46.965 END TEST unittest_bdev 00:03:46.965 ************************************ 00:03:46.965 14:52:12 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:03:46.965 14:52:12 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:46.965 14:52:12 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:47.222 14:52:12 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:47.222 14:52:12 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:47.222 14:52:12 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:47.222 14:52:12 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:47.222 14:52:12 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.222 14:52:12 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.222 14:52:12 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:47.222 ************************************ 00:03:47.222 START TEST unittest_blob_blobfs 00:03:47.222 ************************************ 00:03:47.222 14:52:12 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:03:47.222 14:52:12 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:47.222 14:52:12 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:47.222 00:03:47.222 00:03:47.222 CUnit - A unit testing framework for C - Version 2.1-3 00:03:47.222 http://cunit.sourceforge.net/ 00:03:47.222 00:03:47.222 00:03:47.222 Suite: blob_nocopy_noextent 00:03:47.222 Test: blob_init 
...[2024-07-12 14:52:12.800406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:47.222 passed 00:03:47.222 Test: blob_thin_provision ...passed 00:03:47.222 Test: blob_read_only ...passed 00:03:47.222 Test: bs_load ...[2024-07-12 14:52:12.870666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:47.222 passed 00:03:47.222 Test: bs_load_custom_cluster_size ...passed 00:03:47.222 Test: bs_load_after_failed_grow ...passed 00:03:47.222 Test: bs_cluster_sz ...[2024-07-12 14:52:12.893658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:47.222 [2024-07-12 14:52:12.893766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:47.222 [2024-07-12 14:52:12.893781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:47.222 passed 00:03:47.222 Test: bs_resize_md ...passed 00:03:47.222 Test: bs_destroy ...passed 00:03:47.222 Test: bs_type ...passed 00:03:47.222 Test: bs_super_block ...passed 00:03:47.222 Test: bs_test_recover_cluster_count ...passed 00:03:47.222 Test: bs_grow_live ...passed 00:03:47.222 Test: bs_grow_live_no_space ...passed 00:03:47.222 Test: bs_test_grow ...passed 00:03:47.222 Test: blob_serialize_test ...passed 00:03:47.222 Test: super_block_crc ...passed 00:03:47.222 Test: blob_thin_prov_write_count_io ...passed 00:03:47.222 Test: blob_thin_prov_unmap_cluster ...passed 00:03:47.222 Test: bs_load_iter_test ...passed 00:03:47.479 Test: blob_relations ...[2024-07-12 14:52:13.038827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.038902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.039012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.039022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 passed 00:03:47.479 Test: blob_relations2 ...[2024-07-12 14:52:13.049218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.049253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.049263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.049270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.049390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.049412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 
[2024-07-12 14:52:13.049447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:47.479 [2024-07-12 14:52:13.049471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 passed 00:03:47.479 Test: blob_relations3 ...passed 00:03:47.479 Test: blobstore_clean_power_failure ...passed 00:03:47.479 Test: blob_delete_snapshot_power_failure ...[2024-07-12 14:52:13.202889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:47.479 [2024-07-12 14:52:13.213070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:47.479 [2024-07-12 14:52:13.213113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:47.479 [2024-07-12 14:52:13.213137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.223131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:47.479 [2024-07-12 14:52:13.223185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:47.479 [2024-07-12 14:52:13.223208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:47.479 [2024-07-12 14:52:13.223216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.233293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:47.479 [2024-07-12 14:52:13.233323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.244082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:47.479 [2024-07-12 14:52:13.244146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 [2024-07-12 14:52:13.254355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:47.479 [2024-07-12 14:52:13.254409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:47.479 passed 00:03:47.479 Test: blob_create_snapshot_power_failure ...[2024-07-12 14:52:13.290642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:47.736 [2024-07-12 14:52:13.311689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:47.736 [2024-07-12 14:52:13.322266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:47.736 passed 00:03:47.736 Test: blob_io_unit ...passed 00:03:47.736 Test: blob_io_unit_compatibility ...passed 00:03:47.736 Test: blob_ext_md_pages ...passed 00:03:47.736 Test: blob_esnap_io_4096_4096 ...passed 00:03:47.736 Test: 
blob_esnap_io_512_512 ...passed 00:03:47.736 Test: blob_esnap_io_4096_512 ...passed 00:03:47.736 Test: blob_esnap_io_512_4096 ...passed 00:03:47.736 Test: blob_esnap_clone_resize ...passed 00:03:47.736 Suite: blob_bs_nocopy_noextent 00:03:47.736 Test: blob_open ...passed 00:03:47.993 Test: blob_create ...[2024-07-12 14:52:13.567690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:47.993 passed 00:03:47.993 Test: blob_create_loop ...passed 00:03:47.993 Test: blob_create_fail ...[2024-07-12 14:52:13.645938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:47.993 passed 00:03:47.993 Test: blob_create_internal ...passed 00:03:47.993 Test: blob_create_zero_extent ...passed 00:03:47.993 Test: blob_snapshot ...passed 00:03:47.993 Test: blob_clone ...passed 00:03:48.308 Test: blob_inflate ...[2024-07-12 14:52:13.821545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:48.308 passed 00:03:48.308 Test: blob_delete ...passed 00:03:48.308 Test: blob_resize_test ...[2024-07-12 14:52:13.889580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:48.308 passed 00:03:48.308 Test: blob_resize_thin_test ...passed 00:03:48.308 Test: channel_ops ...passed 00:03:48.308 Test: blob_super ...passed 00:03:48.308 Test: blob_rw_verify_iov ...passed 00:03:48.308 Test: blob_unmap ...passed 00:03:48.308 Test: blob_iter ...passed 00:03:48.566 Test: blob_parse_md ...passed 00:03:48.566 Test: bs_load_pending_removal ...passed 00:03:48.566 Test: bs_unload ...[2024-07-12 14:52:14.197791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:48.566 passed 00:03:48.566 Test: bs_usable_clusters ...passed 00:03:48.566 Test: blob_crc ...[2024-07-12 14:52:14.264921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:48.566 [2024-07-12 14:52:14.264990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:48.566 passed 00:03:48.566 Test: blob_flags ...passed 00:03:48.566 Test: bs_version ...passed 00:03:48.566 Test: blob_set_xattrs_test ...[2024-07-12 14:52:14.368116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:48.566 [2024-07-12 14:52:14.368200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:48.825 passed 00:03:48.825 Test: blob_thin_prov_alloc ...passed 00:03:48.825 Test: blob_insert_cluster_msg_test ...passed 00:03:48.825 Test: blob_thin_prov_rw ...passed 00:03:48.825 Test: blob_thin_prov_rle ...passed 00:03:48.825 Test: blob_thin_prov_rw_iov ...passed 00:03:48.825 Test: blob_snapshot_rw ...passed 00:03:48.825 Test: blob_snapshot_rw_iov ...passed 00:03:49.084 Test: blob_inflate_rw ...passed 00:03:49.084 Test: blob_snapshot_freeze_io ...passed 00:03:49.084 Test: blob_operation_split_rw ...passed 00:03:49.084 Test: blob_operation_split_rw_iov ...passed 00:03:49.084 Test: 
blob_simultaneous_operations ...[2024-07-12 14:52:14.893636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:49.084 [2024-07-12 14:52:14.893701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:49.084 [2024-07-12 14:52:14.894034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:49.084 [2024-07-12 14:52:14.894044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:49.342 [2024-07-12 14:52:14.897494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:49.342 [2024-07-12 14:52:14.897517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:49.342 [2024-07-12 14:52:14.897535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:49.342 [2024-07-12 14:52:14.897543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:49.342 passed 00:03:49.342 Test: blob_persist_test ...passed 00:03:49.342 Test: blob_decouple_snapshot ...passed 00:03:49.342 Test: blob_seek_io_unit ...passed 00:03:49.342 Test: blob_nested_freezes ...passed 00:03:49.342 Test: blob_clone_resize ...passed 00:03:49.342 Test: blob_shallow_copy ...[2024-07-12 14:52:15.120639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:49.342 [2024-07-12 14:52:15.120704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:49.342 [2024-07-12 14:52:15.120716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:49.342 passed 00:03:49.342 Suite: blob_blob_nocopy_noextent 00:03:49.645 Test: blob_write ...passed 00:03:49.645 Test: blob_read ...passed 00:03:49.645 Test: blob_rw_verify ...passed 00:03:49.645 Test: blob_rw_verify_iov_nomem ...passed 00:03:49.645 Test: blob_rw_iov_read_only ...passed 00:03:49.645 Test: blob_xattr ...passed 00:03:49.645 Test: blob_dirty_shutdown ...passed 00:03:49.645 Test: blob_is_degraded ...passed 00:03:49.645 Suite: blob_esnap_bs_nocopy_noextent 00:03:49.904 Test: blob_esnap_create ...passed 00:03:49.904 Test: blob_esnap_thread_add_remove ...passed 00:03:49.904 Test: blob_esnap_clone_snapshot ...passed 00:03:49.904 Test: blob_esnap_clone_inflate ...passed 00:03:49.904 Test: blob_esnap_clone_decouple ...passed 00:03:49.904 Test: blob_esnap_clone_reload ...passed 00:03:49.904 Test: blob_esnap_hotplug ...passed 00:03:49.904 Test: blob_set_parent ...[2024-07-12 14:52:15.675755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:49.904 [2024-07-12 14:52:15.675832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:49.904 [2024-07-12 14:52:15.675853] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:49.904 [2024-07-12 14:52:15.675863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:49.904 [2024-07-12 14:52:15.675916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:49.904 passed 00:03:49.904 Test: blob_set_external_parent ...[2024-07-12 14:52:15.707267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:49.904 [2024-07-12 14:52:15.707334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:49.904 [2024-07-12 14:52:15.707344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:49.904 [2024-07-12 14:52:15.707400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:50.163 passed 00:03:50.163 Suite: blob_nocopy_extent 00:03:50.163 Test: blob_init ...[2024-07-12 14:52:15.718118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:50.163 passed 00:03:50.163 Test: blob_thin_provision ...passed 00:03:50.163 Test: blob_read_only ...passed 00:03:50.163 Test: bs_load ...[2024-07-12 14:52:15.763275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:50.163 passed 00:03:50.163 Test: bs_load_custom_cluster_size ...passed 00:03:50.163 Test: bs_load_after_failed_grow ...passed 00:03:50.163 Test: bs_cluster_sz ...[2024-07-12 14:52:15.785873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:50.163 [2024-07-12 14:52:15.785947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:50.163 [2024-07-12 14:52:15.785961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:50.163 passed 00:03:50.163 Test: bs_resize_md ...passed 00:03:50.163 Test: bs_destroy ...passed 00:03:50.163 Test: bs_type ...passed 00:03:50.163 Test: bs_super_block ...passed 00:03:50.163 Test: bs_test_recover_cluster_count ...passed 00:03:50.163 Test: bs_grow_live ...passed 00:03:50.163 Test: bs_grow_live_no_space ...passed 00:03:50.163 Test: bs_test_grow ...passed 00:03:50.163 Test: blob_serialize_test ...passed 00:03:50.163 Test: super_block_crc ...passed 00:03:50.163 Test: blob_thin_prov_write_count_io ...passed 00:03:50.163 Test: blob_thin_prov_unmap_cluster ...passed 00:03:50.163 Test: bs_load_iter_test ...passed 00:03:50.163 Test: blob_relations ...[2024-07-12 14:52:15.944843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.944899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 [2024-07-12 14:52:15.945022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.945033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 passed 00:03:50.163 Test: blob_relations2 ...[2024-07-12 14:52:15.957304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.957341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 [2024-07-12 14:52:15.957350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.957357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 [2024-07-12 14:52:15.957496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.957507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 [2024-07-12 14:52:15.957545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:50.163 [2024-07-12 14:52:15.957555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.163 passed 00:03:50.163 Test: blob_relations3 ...passed 00:03:50.421 Test: blobstore_clean_power_failure ...passed 00:03:50.422 Test: blob_delete_snapshot_power_failure ...[2024-07-12 14:52:16.125800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:50.422 [2024-07-12 14:52:16.137904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:50.422 [2024-07-12 14:52:16.149990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:50.422 [2024-07-12 14:52:16.150057] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:50.422 [2024-07-12 14:52:16.150066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.422 [2024-07-12 14:52:16.162078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:50.422 [2024-07-12 14:52:16.162131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:50.422 [2024-07-12 14:52:16.162139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:50.422 [2024-07-12 14:52:16.162148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.422 [2024-07-12 14:52:16.174056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:50.422 [2024-07-12 14:52:16.174106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:50.422 [2024-07-12 14:52:16.174114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:50.422 [2024-07-12 14:52:16.174121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.422 [2024-07-12 14:52:16.186113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:50.422 [2024-07-12 14:52:16.186152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.422 [2024-07-12 14:52:16.198136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:50.422 [2024-07-12 14:52:16.198217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.422 [2024-07-12 14:52:16.210331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:50.422 [2024-07-12 14:52:16.210396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:50.679 passed 00:03:50.679 Test: blob_create_snapshot_power_failure ...[2024-07-12 14:52:16.246201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:50.679 [2024-07-12 14:52:16.257882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:50.679 [2024-07-12 14:52:16.281458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:50.679 [2024-07-12 14:52:16.293362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:50.679 passed 00:03:50.679 Test: blob_io_unit ...passed 00:03:50.679 Test: blob_io_unit_compatibility ...passed 00:03:50.679 Test: blob_ext_md_pages ...passed 00:03:50.679 Test: blob_esnap_io_4096_4096 ...passed 00:03:50.679 Test: blob_esnap_io_512_512 ...passed 00:03:50.679 Test: blob_esnap_io_4096_512 ...passed 00:03:50.679 Test: 
blob_esnap_io_512_4096 ...passed 00:03:50.679 Test: blob_esnap_clone_resize ...passed 00:03:50.679 Suite: blob_bs_nocopy_extent 00:03:50.936 Test: blob_open ...passed 00:03:50.936 Test: blob_create ...[2024-07-12 14:52:16.551987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:50.936 passed 00:03:50.936 Test: blob_create_loop ...passed 00:03:50.936 Test: blob_create_fail ...[2024-07-12 14:52:16.638781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:50.936 passed 00:03:50.936 Test: blob_create_internal ...passed 00:03:50.936 Test: blob_create_zero_extent ...passed 00:03:51.193 Test: blob_snapshot ...passed 00:03:51.193 Test: blob_clone ...passed 00:03:51.193 Test: blob_inflate ...[2024-07-12 14:52:16.823419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:51.193 passed 00:03:51.193 Test: blob_delete ...passed 00:03:51.193 Test: blob_resize_test ...[2024-07-12 14:52:16.892262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:51.193 passed 00:03:51.193 Test: blob_resize_thin_test ...passed 00:03:51.193 Test: channel_ops ...passed 00:03:51.451 Test: blob_super ...passed 00:03:51.451 Test: blob_rw_verify_iov ...passed 00:03:51.451 Test: blob_unmap ...passed 00:03:51.451 Test: blob_iter ...passed 00:03:51.451 Test: blob_parse_md ...passed 00:03:51.451 Test: bs_load_pending_removal ...passed 00:03:51.451 Test: bs_unload ...[2024-07-12 14:52:17.219243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:51.451 passed 00:03:51.708 Test: bs_usable_clusters ...passed 00:03:51.708 Test: blob_crc ...[2024-07-12 14:52:17.291179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:51.708 [2024-07-12 14:52:17.291246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:51.708 passed 00:03:51.708 Test: blob_flags ...passed 00:03:51.708 Test: bs_version ...passed 00:03:51.708 Test: blob_set_xattrs_test ...[2024-07-12 14:52:17.396528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:51.708 [2024-07-12 14:52:17.396578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:51.708 passed 00:03:51.708 Test: blob_thin_prov_alloc ...passed 00:03:51.708 Test: blob_insert_cluster_msg_test ...passed 00:03:51.966 Test: blob_thin_prov_rw ...passed 00:03:51.966 Test: blob_thin_prov_rle ...passed 00:03:51.966 Test: blob_thin_prov_rw_iov ...passed 00:03:51.966 Test: blob_snapshot_rw ...passed 00:03:51.966 Test: blob_snapshot_rw_iov ...passed 00:03:51.966 Test: blob_inflate_rw ...passed 00:03:51.966 Test: blob_snapshot_freeze_io ...passed 00:03:52.226 Test: blob_operation_split_rw ...passed 00:03:52.226 Test: blob_operation_split_rw_iov ...passed 00:03:52.226 Test: blob_simultaneous_operations ...[2024-07-12 14:52:17.913957] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:52.226 [2024-07-12 14:52:17.914032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:52.226 [2024-07-12 14:52:17.914344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:52.226 [2024-07-12 14:52:17.914370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:52.226 [2024-07-12 14:52:17.917921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:52.226 [2024-07-12 14:52:17.917946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:52.226 [2024-07-12 14:52:17.917963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:52.226 [2024-07-12 14:52:17.917971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:52.226 passed 00:03:52.226 Test: blob_persist_test ...passed 00:03:52.226 Test: blob_decouple_snapshot ...passed 00:03:52.485 Test: blob_seek_io_unit ...passed 00:03:52.485 Test: blob_nested_freezes ...passed 00:03:52.485 Test: blob_clone_resize ...passed 00:03:52.485 Test: blob_shallow_copy ...[2024-07-12 14:52:18.145231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:52.485 [2024-07-12 14:52:18.145306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:52.485 [2024-07-12 14:52:18.145317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:52.485 passed 00:03:52.485 Suite: blob_blob_nocopy_extent 00:03:52.485 Test: blob_write ...passed 00:03:52.485 Test: blob_read ...passed 00:03:52.485 Test: blob_rw_verify ...passed 00:03:52.742 Test: blob_rw_verify_iov_nomem ...passed 00:03:52.742 Test: blob_rw_iov_read_only ...passed 00:03:52.742 Test: blob_xattr ...passed 00:03:52.742 Test: blob_dirty_shutdown ...passed 00:03:52.742 Test: blob_is_degraded ...passed 00:03:52.742 Suite: blob_esnap_bs_nocopy_extent 00:03:52.742 Test: blob_esnap_create ...passed 00:03:52.742 Test: blob_esnap_thread_add_remove ...passed 00:03:53.000 Test: blob_esnap_clone_snapshot ...passed 00:03:53.000 Test: blob_esnap_clone_inflate ...passed 00:03:53.000 Test: blob_esnap_clone_decouple ...passed 00:03:53.000 Test: blob_esnap_clone_reload ...passed 00:03:53.000 Test: blob_esnap_hotplug ...passed 00:03:53.000 Test: blob_set_parent ...[2024-07-12 14:52:18.715028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:53.000 [2024-07-12 14:52:18.715103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:53.000 [2024-07-12 14:52:18.715126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:53.000 
[2024-07-12 14:52:18.715136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:53.000 [2024-07-12 14:52:18.715193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:53.000 passed 00:03:53.000 Test: blob_set_external_parent ...[2024-07-12 14:52:18.747915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:53.000 [2024-07-12 14:52:18.747975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:53.000 [2024-07-12 14:52:18.747984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:53.000 [2024-07-12 14:52:18.748027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:53.000 passed 00:03:53.000 Suite: blob_copy_noextent 00:03:53.000 Test: blob_init ...[2024-07-12 14:52:18.758584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:53.000 passed 00:03:53.000 Test: blob_thin_provision ...passed 00:03:53.000 Test: blob_read_only ...passed 00:03:53.000 Test: bs_load ...[2024-07-12 14:52:18.803731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:53.000 passed 00:03:53.258 Test: bs_load_custom_cluster_size ...passed 00:03:53.258 Test: bs_load_after_failed_grow ...passed 00:03:53.258 Test: bs_cluster_sz ...[2024-07-12 14:52:18.827184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:53.258 [2024-07-12 14:52:18.827277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:53.258 [2024-07-12 14:52:18.827293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:53.258 passed 00:03:53.258 Test: bs_resize_md ...passed 00:03:53.258 Test: bs_destroy ...passed 00:03:53.258 Test: bs_type ...passed 00:03:53.258 Test: bs_super_block ...passed 00:03:53.258 Test: bs_test_recover_cluster_count ...passed 00:03:53.258 Test: bs_grow_live ...passed 00:03:53.258 Test: bs_grow_live_no_space ...passed 00:03:53.258 Test: bs_test_grow ...passed 00:03:53.258 Test: blob_serialize_test ...passed 00:03:53.258 Test: super_block_crc ...passed 00:03:53.258 Test: blob_thin_prov_write_count_io ...passed 00:03:53.258 Test: blob_thin_prov_unmap_cluster ...passed 00:03:53.258 Test: bs_load_iter_test ...passed 00:03:53.258 Test: blob_relations ...[2024-07-12 14:52:18.987442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:18.987498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 [2024-07-12 14:52:18.987606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:18.987617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 passed 00:03:53.258 Test: blob_relations2 ...[2024-07-12 14:52:19.000135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:19.000161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 [2024-07-12 14:52:19.000170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:19.000178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 [2024-07-12 14:52:19.000300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:19.000322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 [2024-07-12 14:52:19.000358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:53.258 [2024-07-12 14:52:19.000366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.258 passed 00:03:53.258 Test: blob_relations3 ...passed 00:03:53.516 Test: blobstore_clean_power_failure ...passed 00:03:53.516 Test: blob_delete_snapshot_power_failure ...[2024-07-12 14:52:19.162165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:53.516 [2024-07-12 14:52:19.173437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:53.516 [2024-07-12 14:52:19.173495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:53.516 [2024-07-12 14:52:19.173504] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.516 [2024-07-12 14:52:19.184333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:53.516 [2024-07-12 14:52:19.184364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:53.516 [2024-07-12 14:52:19.184372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:53.516 [2024-07-12 14:52:19.184380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.516 [2024-07-12 14:52:19.195073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:53.516 [2024-07-12 14:52:19.195116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.516 [2024-07-12 14:52:19.206815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:53.516 [2024-07-12 14:52:19.206878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.516 [2024-07-12 14:52:19.218663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:53.516 [2024-07-12 14:52:19.218707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:53.516 passed 00:03:53.516 Test: blob_create_snapshot_power_failure ...[2024-07-12 14:52:19.253186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:53.516 [2024-07-12 14:52:19.276277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:53.516 [2024-07-12 14:52:19.287192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:53.516 passed 00:03:53.775 Test: blob_io_unit ...passed 00:03:53.775 Test: blob_io_unit_compatibility ...passed 00:03:53.775 Test: blob_ext_md_pages ...passed 00:03:53.775 Test: blob_esnap_io_4096_4096 ...passed 00:03:53.775 Test: blob_esnap_io_512_512 ...passed 00:03:53.775 Test: blob_esnap_io_4096_512 ...passed 00:03:53.775 Test: blob_esnap_io_512_4096 ...passed 00:03:53.775 Test: blob_esnap_clone_resize ...passed 00:03:53.775 Suite: blob_bs_copy_noextent 00:03:53.775 Test: blob_open ...passed 00:03:53.775 Test: blob_create ...[2024-07-12 14:52:19.529235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:53.775 passed 00:03:54.032 Test: blob_create_loop ...passed 00:03:54.033 Test: blob_create_fail ...[2024-07-12 14:52:19.610492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:54.033 passed 00:03:54.033 Test: blob_create_internal ...passed 00:03:54.033 Test: blob_create_zero_extent ...passed 00:03:54.033 Test: blob_snapshot ...passed 00:03:54.033 Test: blob_clone ...passed 00:03:54.033 Test: blob_inflate 
...[2024-07-12 14:52:19.776282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:54.033 passed 00:03:54.033 Test: blob_delete ...passed 00:03:54.033 Test: blob_resize_test ...[2024-07-12 14:52:19.836056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:54.291 passed 00:03:54.291 Test: blob_resize_thin_test ...passed 00:03:54.291 Test: channel_ops ...passed 00:03:54.291 Test: blob_super ...passed 00:03:54.291 Test: blob_rw_verify_iov ...passed 00:03:54.291 Test: blob_unmap ...passed 00:03:54.291 Test: blob_iter ...passed 00:03:54.291 Test: blob_parse_md ...passed 00:03:54.548 Test: bs_load_pending_removal ...passed 00:03:54.548 Test: bs_unload ...[2024-07-12 14:52:20.134006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:54.548 passed 00:03:54.548 Test: bs_usable_clusters ...passed 00:03:54.548 Test: blob_crc ...[2024-07-12 14:52:20.196057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:54.548 [2024-07-12 14:52:20.196117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:54.548 passed 00:03:54.548 Test: blob_flags ...passed 00:03:54.548 Test: bs_version ...passed 00:03:54.548 Test: blob_set_xattrs_test ...[2024-07-12 14:52:20.294062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:54.548 [2024-07-12 14:52:20.294124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:54.548 passed 00:03:54.548 Test: blob_thin_prov_alloc ...passed 00:03:54.805 Test: blob_insert_cluster_msg_test ...passed 00:03:54.805 Test: blob_thin_prov_rw ...passed 00:03:54.805 Test: blob_thin_prov_rle ...passed 00:03:54.805 Test: blob_thin_prov_rw_iov ...passed 00:03:54.805 Test: blob_snapshot_rw ...passed 00:03:54.805 Test: blob_snapshot_rw_iov ...passed 00:03:55.064 Test: blob_inflate_rw ...passed 00:03:55.064 Test: blob_snapshot_freeze_io ...passed 00:03:55.064 Test: blob_operation_split_rw ...passed 00:03:55.064 Test: blob_operation_split_rw_iov ...passed 00:03:55.064 Test: blob_simultaneous_operations ...[2024-07-12 14:52:20.834143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:55.064 [2024-07-12 14:52:20.834214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.064 [2024-07-12 14:52:20.834481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:55.064 [2024-07-12 14:52:20.834491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.064 [2024-07-12 14:52:20.836624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:55.064 [2024-07-12 14:52:20.836645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.064 [2024-07-12 14:52:20.836662] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:55.064 [2024-07-12 14:52:20.836669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.064 passed 00:03:55.324 Test: blob_persist_test ...passed 00:03:55.324 Test: blob_decouple_snapshot ...passed 00:03:55.324 Test: blob_seek_io_unit ...passed 00:03:55.324 Test: blob_nested_freezes ...passed 00:03:55.324 Test: blob_clone_resize ...passed 00:03:55.324 Test: blob_shallow_copy ...[2024-07-12 14:52:21.053232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:55.324 [2024-07-12 14:52:21.053319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:55.324 [2024-07-12 14:52:21.053331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:55.324 passed 00:03:55.324 Suite: blob_blob_copy_noextent 00:03:55.324 Test: blob_write ...passed 00:03:55.324 Test: blob_read ...passed 00:03:55.581 Test: blob_rw_verify ...passed 00:03:55.581 Test: blob_rw_verify_iov_nomem ...passed 00:03:55.581 Test: blob_rw_iov_read_only ...passed 00:03:55.581 Test: blob_xattr ...passed 00:03:55.581 Test: blob_dirty_shutdown ...passed 00:03:55.581 Test: blob_is_degraded ...passed 00:03:55.581 Suite: blob_esnap_bs_copy_noextent 00:03:55.581 Test: blob_esnap_create ...passed 00:03:55.839 Test: blob_esnap_thread_add_remove ...passed 00:03:55.839 Test: blob_esnap_clone_snapshot ...passed 00:03:55.839 Test: blob_esnap_clone_inflate ...passed 00:03:55.839 Test: blob_esnap_clone_decouple ...passed 00:03:55.839 Test: blob_esnap_clone_reload ...passed 00:03:55.839 Test: blob_esnap_hotplug ...passed 00:03:55.839 Test: blob_set_parent ...[2024-07-12 14:52:21.609463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:55.839 [2024-07-12 14:52:21.609524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:55.839 [2024-07-12 14:52:21.609548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:55.839 [2024-07-12 14:52:21.609558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:55.839 [2024-07-12 14:52:21.609609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:55.839 passed 00:03:55.839 Test: blob_set_external_parent ...[2024-07-12 14:52:21.640900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:55.839 [2024-07-12 14:52:21.640960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:55.839 [2024-07-12 14:52:21.640984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:55.839 [2024-07-12 14:52:21.641033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:55.840 passed 00:03:55.840 Suite: blob_copy_extent 00:03:55.840 Test: blob_init ...[2024-07-12 14:52:21.651338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:56.097 passed 00:03:56.097 Test: blob_thin_provision ...passed 00:03:56.097 Test: blob_read_only ...passed 00:03:56.097 Test: bs_load ...[2024-07-12 14:52:21.691724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:56.097 passed 00:03:56.097 Test: bs_load_custom_cluster_size ...passed 00:03:56.097 Test: bs_load_after_failed_grow ...passed 00:03:56.097 Test: bs_cluster_sz ...[2024-07-12 14:52:21.712341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:56.097 [2024-07-12 14:52:21.712406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:56.097 [2024-07-12 14:52:21.712420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:56.097 passed 00:03:56.097 Test: bs_resize_md ...passed 00:03:56.097 Test: bs_destroy ...passed 00:03:56.097 Test: bs_type ...passed 00:03:56.097 Test: bs_super_block ...passed 00:03:56.097 Test: bs_test_recover_cluster_count ...passed 00:03:56.097 Test: bs_grow_live ...passed 00:03:56.097 Test: bs_grow_live_no_space ...passed 00:03:56.097 Test: bs_test_grow ...passed 00:03:56.097 Test: blob_serialize_test ...passed 00:03:56.097 Test: super_block_crc ...passed 00:03:56.097 Test: blob_thin_prov_write_count_io ...passed 00:03:56.097 Test: blob_thin_prov_unmap_cluster ...passed 00:03:56.097 Test: bs_load_iter_test ...passed 00:03:56.097 Test: blob_relations ...[2024-07-12 14:52:21.864470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.864544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 [2024-07-12 14:52:21.864683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.864694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 passed 00:03:56.097 Test: blob_relations2 ...[2024-07-12 14:52:21.875895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.875936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 [2024-07-12 14:52:21.875945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.875952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 [2024-07-12 
14:52:21.876088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.876099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 [2024-07-12 14:52:21.876137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:56.097 [2024-07-12 14:52:21.876146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.097 passed 00:03:56.097 Test: blob_relations3 ...passed 00:03:56.353 Test: blobstore_clean_power_failure ...passed 00:03:56.353 Test: blob_delete_snapshot_power_failure ...[2024-07-12 14:52:22.026184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:56.353 [2024-07-12 14:52:22.037104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:56.353 [2024-07-12 14:52:22.048202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:56.353 [2024-07-12 14:52:22.048260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:56.353 [2024-07-12 14:52:22.048269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 [2024-07-12 14:52:22.059377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:56.353 [2024-07-12 14:52:22.059408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:56.353 [2024-07-12 14:52:22.059417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:56.353 [2024-07-12 14:52:22.059424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 [2024-07-12 14:52:22.070634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:56.353 [2024-07-12 14:52:22.070669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:56.353 [2024-07-12 14:52:22.070677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:56.353 [2024-07-12 14:52:22.070685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 [2024-07-12 14:52:22.081735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:56.353 [2024-07-12 14:52:22.081780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 [2024-07-12 14:52:22.092724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:56.353 [2024-07-12 14:52:22.092776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 [2024-07-12 14:52:22.103416] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:56.353 [2024-07-12 14:52:22.103469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.353 passed 00:03:56.353 Test: blob_create_snapshot_power_failure ...[2024-07-12 14:52:22.136467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:56.353 [2024-07-12 14:52:22.147976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:56.610 [2024-07-12 14:52:22.169728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:56.610 [2024-07-12 14:52:22.180510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:56.610 passed 00:03:56.610 Test: blob_io_unit ...passed 00:03:56.610 Test: blob_io_unit_compatibility ...passed 00:03:56.610 Test: blob_ext_md_pages ...passed 00:03:56.610 Test: blob_esnap_io_4096_4096 ...passed 00:03:56.610 Test: blob_esnap_io_512_512 ...passed 00:03:56.610 Test: blob_esnap_io_4096_512 ...passed 00:03:56.610 Test: blob_esnap_io_512_4096 ...passed 00:03:56.610 Test: blob_esnap_clone_resize ...passed 00:03:56.610 Suite: blob_bs_copy_extent 00:03:56.610 Test: blob_open ...passed 00:03:56.610 Test: blob_create ...[2024-07-12 14:52:22.400749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:56.610 passed 00:03:56.867 Test: blob_create_loop ...passed 00:03:56.867 Test: blob_create_fail ...[2024-07-12 14:52:22.480441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:56.867 passed 00:03:56.867 Test: blob_create_internal ...passed 00:03:56.867 Test: blob_create_zero_extent ...passed 00:03:56.867 Test: blob_snapshot ...passed 00:03:56.867 Test: blob_clone ...passed 00:03:56.867 Test: blob_inflate ...[2024-07-12 14:52:22.638670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
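The blob_inflate error just above ("Cannot decouple parent of blob with no parent") marks the difference between the two flattening operations: inflating a clone allocates every cluster and drops the parent entirely, while decoupling only copies the clusters owned by the immediate parent, so a blob with no parent can still be inflated but cannot be decoupled. A hedged sketch of both calls, with the blobstore handle, channel, blob ID, and callback as placeholders rather than anything taken from the test:

    #include "spdk/blob.h"

    static void
    flatten_done(void *cb_arg, int bserrno)
    {
            /* -EINVAL is the expected result when decoupling a blob with no parent */
    }

    static void
    flatten_blob(struct spdk_blob_store *bs, struct spdk_io_channel *ch,
                 spdk_blob_id blobid)
    {
            /* Allocate every cluster and detach from any parent or external snapshot. */
            spdk_bs_inflate_blob(bs, ch, blobid, flatten_done, NULL);
    }

    static void
    decouple_blob(struct spdk_blob_store *bs, struct spdk_io_channel *ch,
                  spdk_blob_id blobid)
    {
            /* Copy only the clusters backed by the direct parent; requires a parent. */
            spdk_bs_blob_decouple_parent(bs, ch, blobid, flatten_done, NULL);
    }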
00:03:56.867 passed 00:03:56.867 Test: blob_delete ...passed 00:03:57.127 Test: blob_resize_test ...[2024-07-12 14:52:22.700347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:57.127 passed 00:03:57.127 Test: blob_resize_thin_test ...passed 00:03:57.127 Test: channel_ops ...passed 00:03:57.127 Test: blob_super ...passed 00:03:57.127 Test: blob_rw_verify_iov ...passed 00:03:57.127 Test: blob_unmap ...passed 00:03:57.127 Test: blob_iter ...passed 00:03:57.127 Test: blob_parse_md ...passed 00:03:57.385 Test: bs_load_pending_removal ...passed 00:03:57.385 Test: bs_unload ...[2024-07-12 14:52:22.984048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:57.385 passed 00:03:57.385 Test: bs_usable_clusters ...passed 00:03:57.385 Test: blob_crc ...[2024-07-12 14:52:23.049669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:57.385 [2024-07-12 14:52:23.049719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:57.385 passed 00:03:57.385 Test: blob_flags ...passed 00:03:57.385 Test: bs_version ...passed 00:03:57.385 Test: blob_set_xattrs_test ...[2024-07-12 14:52:23.145581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:57.385 [2024-07-12 14:52:23.145630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:57.385 passed 00:03:57.642 Test: blob_thin_prov_alloc ...passed 00:03:57.642 Test: blob_insert_cluster_msg_test ...passed 00:03:57.642 Test: blob_thin_prov_rw ...passed 00:03:57.642 Test: blob_thin_prov_rle ...passed 00:03:57.642 Test: blob_thin_prov_rw_iov ...passed 00:03:57.642 Test: blob_snapshot_rw ...passed 00:03:57.642 Test: blob_snapshot_rw_iov ...passed 00:03:57.899 Test: blob_inflate_rw ...passed 00:03:57.899 Test: blob_snapshot_freeze_io ...passed 00:03:57.899 Test: blob_operation_split_rw ...passed 00:03:57.899 Test: blob_operation_split_rw_iov ...passed 00:03:57.899 Test: blob_simultaneous_operations ...[2024-07-12 14:52:23.637934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:57.899 [2024-07-12 14:52:23.638015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.899 [2024-07-12 14:52:23.638249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:57.899 [2024-07-12 14:52:23.638260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.899 [2024-07-12 14:52:23.640326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:57.899 [2024-07-12 14:52:23.640342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.899 [2024-07-12 14:52:23.640359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:57.899 [2024-07-12 14:52:23.640367] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.899 passed 00:03:57.899 Test: blob_persist_test ...passed 00:03:58.202 Test: blob_decouple_snapshot ...passed 00:03:58.202 Test: blob_seek_io_unit ...passed 00:03:58.202 Test: blob_nested_freezes ...passed 00:03:58.202 Test: blob_clone_resize ...passed 00:03:58.202 Test: blob_shallow_copy ...[2024-07-12 14:52:23.840381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:58.202 [2024-07-12 14:52:23.840443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:58.202 [2024-07-12 14:52:23.840455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:58.202 passed 00:03:58.202 Suite: blob_blob_copy_extent 00:03:58.202 Test: blob_write ...passed 00:03:58.202 Test: blob_read ...passed 00:03:58.202 Test: blob_rw_verify ...passed 00:03:58.202 Test: blob_rw_verify_iov_nomem ...passed 00:03:58.477 Test: blob_rw_iov_read_only ...passed 00:03:58.477 Test: blob_xattr ...passed 00:03:58.477 Test: blob_dirty_shutdown ...passed 00:03:58.477 Test: blob_is_degraded ...passed 00:03:58.477 Suite: blob_esnap_bs_copy_extent 00:03:58.477 Test: blob_esnap_create ...passed 00:03:58.477 Test: blob_esnap_thread_add_remove ...passed 00:03:58.477 Test: blob_esnap_clone_snapshot ...passed 00:03:58.477 Test: blob_esnap_clone_inflate ...passed 00:03:58.477 Test: blob_esnap_clone_decouple ...passed 00:03:58.735 Test: blob_esnap_clone_reload ...passed 00:03:58.735 Test: blob_esnap_hotplug ...passed 00:03:58.735 Test: blob_set_parent ...[2024-07-12 14:52:24.353949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:58.735 [2024-07-12 14:52:24.354022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:58.735 [2024-07-12 14:52:24.354060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:58.735 [2024-07-12 14:52:24.354070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:58.735 [2024-07-12 14:52:24.354321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:58.735 passed 00:03:58.735 Test: blob_set_external_parent ...[2024-07-12 14:52:24.386551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:58.735 [2024-07-12 14:52:24.386624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:58.735 [2024-07-12 14:52:24.386650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:58.735 [2024-07-12 14:52:24.386697] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:58.735 passed 00:03:58.735 00:03:58.735 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.735 suites 16 16 n/a 0 0 00:03:58.735 tests 376 376 376 0 0 00:03:58.735 asserts 143965 143965 143965 0 n/a 00:03:58.735 00:03:58.735 Elapsed time = 11.586 seconds 00:03:58.735 14:52:24 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:58.735 00:03:58.735 00:03:58.735 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.735 http://cunit.sourceforge.net/ 00:03:58.735 00:03:58.735 00:03:58.735 Suite: blob_bdev 00:03:58.735 Test: create_bs_dev ...passed 00:03:58.735 Test: create_bs_dev_ro ...passed 00:03:58.735 Test: create_bs_dev_rw ...passed 00:03:58.735 Test: claim_bs_dev ...[2024-07-12 14:52:24.407666] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:58.735 passed 00:03:58.735 Test: claim_bs_dev_ro ...passed 00:03:58.735 Test: deferred_destroy_refs ...passed 00:03:58.735 Test: deferred_destroy_channels ...passed 00:03:58.735 Test: deferred_destroy_threads ...passed[2024-07-12 14:52:24.407937] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:58.735 00:03:58.735 00:03:58.735 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.735 suites 1 1 n/a 0 0 00:03:58.735 tests 8 8 8 0 0 00:03:58.735 asserts 119 119 119 0 n/a 00:03:58.735 00:03:58.735 Elapsed time = 0.000 seconds 00:03:58.735 14:52:24 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:58.735 00:03:58.735 00:03:58.735 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.735 http://cunit.sourceforge.net/ 00:03:58.735 00:03:58.735 00:03:58.735 Suite: tree 00:03:58.735 Test: blobfs_tree_op_test ...passed 00:03:58.735 00:03:58.735 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.735 suites 1 1 n/a 0 0 00:03:58.735 tests 1 1 1 0 0 00:03:58.735 asserts 27 27 27 0 n/a 00:03:58.735 00:03:58.735 Elapsed time = 0.000 seconds 00:03:58.735 14:52:24 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:58.735 00:03:58.735 00:03:58.735 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.735 http://cunit.sourceforge.net/ 00:03:58.735 00:03:58.735 00:03:58.735 Suite: blobfs_async_ut 00:03:58.735 Test: fs_init ...passed 00:03:58.735 Test: fs_open ...passed 00:03:58.735 Test: fs_create ...passed 00:03:58.735 Test: fs_truncate ...passed 00:03:58.735 Test: fs_rename ...[2024-07-12 14:52:24.506441] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:58.735 passed 00:03:58.735 Test: fs_rw_async ...passed 00:03:58.735 Test: fs_writev_readv_async ...passed 00:03:58.735 Test: tree_find_buffer_ut ...passed 00:03:58.735 Test: channel_ops ...passed 00:03:58.993 Test: channel_ops_sync ...passed 00:03:58.993 00:03:58.993 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.993 suites 1 1 n/a 0 0 00:03:58.993 tests 10 10 10 0 0 00:03:58.993 asserts 292 292 292 0 n/a 00:03:58.993 00:03:58.993 Elapsed time = 0.125 seconds 00:03:58.994 14:52:24 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:58.994 00:03:58.994 00:03:58.994 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.994 http://cunit.sourceforge.net/ 00:03:58.994 00:03:58.994 00:03:58.994 Suite: blobfs_sync_ut 00:03:58.994 Test: cache_read_after_write ...[2024-07-12 14:52:24.606477] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:58.994 passed 00:03:58.994 Test: file_length ...passed 00:03:58.994 Test: append_write_to_extend_blob ...passed 00:03:58.994 Test: partial_buffer ...passed 00:03:58.994 Test: cache_write_null_buffer ...passed 00:03:58.994 Test: fs_create_sync ...passed 00:03:58.994 Test: fs_rename_sync ...passed 00:03:58.994 Test: cache_append_no_cache ...passed 00:03:58.994 Test: fs_delete_file_without_close ...passed 00:03:58.994 00:03:58.994 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.994 suites 1 1 n/a 0 0 00:03:58.994 tests 9 9 9 0 0 00:03:58.994 asserts 345 345 345 0 n/a 00:03:58.994 00:03:58.994 Elapsed time = 0.266 seconds 00:03:58.994 14:52:24 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:58.994 00:03:58.994 00:03:58.994 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.994 http://cunit.sourceforge.net/ 00:03:58.994 00:03:58.994 00:03:58.994 Suite: blobfs_bdev_ut 00:03:58.994 Test: spdk_blobfs_bdev_detect_test ...[2024-07-12 14:52:24.712015] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:58.994 passed 00:03:58.994 Test: spdk_blobfs_bdev_create_test ...passed 00:03:58.994 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:58.994 00:03:58.994 [2024-07-12 14:52:24.712609] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:58.994 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.994 suites 1 1 n/a 0 0 00:03:58.994 tests 3 3 3 0 0 00:03:58.994 asserts 9 9 9 0 n/a 00:03:58.994 00:03:58.994 Elapsed time = 0.000 seconds 00:03:58.994 00:03:58.994 real 0m11.922s 00:03:58.994 user 0m11.848s 00:03:58.994 sys 0m0.212s 00:03:58.994 14:52:24 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.994 14:52:24 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:58.994 ************************************ 00:03:58.994 END TEST unittest_blob_blobfs 00:03:58.994 ************************************ 00:03:58.994 14:52:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:58.994 14:52:24 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:58.994 14:52:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.994 14:52:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.994 14:52:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:58.994 ************************************ 00:03:58.994 START TEST unittest_event 00:03:58.994 ************************************ 00:03:58.994 14:52:24 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:03:58.994 14:52:24 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:58.994 00:03:58.994 
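Earlier in this suite the blobfs async and sync tests log "Cannot find the file=... to deleted"; those are negative-path checks in which spdk_fs_delete_file_async() is asked to remove a name that does not exist and completes with an error instead of aborting the run. A short sketch of that call path, with the filesystem handle and callback assumed for illustration:

    #include "spdk/blobfs.h"

    static void
    delete_done(void *ctx, int fserrno)
    {
            /* fserrno is 0 on success, negative (e.g. -ENOENT) when the name is missing */
    }

    static void
    remove_file(struct spdk_filesystem *fs)
    {
            /* Asynchronous delete; must run on an SPDK thread with the fs loaded. */
            spdk_fs_delete_file_async(fs, "testfile", delete_done, NULL);
    }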
00:03:58.994 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.994 http://cunit.sourceforge.net/ 00:03:58.994 00:03:58.994 00:03:58.994 Suite: app_suite 00:03:58.994 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:58.994 00:03:58.994 CPU options: 00:03:58.994 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:58.994 (like [0,1,10]) 00:03:58.994 app_ut: invalid option -- z 00:03:58.994 --lcores lcore to CPU mapping list. The list is in the format: 00:03:58.994 [<,lcores[@CPUs]>...] 00:03:58.994 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:58.994 Within the group, '-' is used for range separator, 00:03:58.994 ',' is used for single number separator. 00:03:58.994 '( )' can be omitted for single element group, 00:03:58.994 '@' can be omitted if cpus and lcores have the same value 00:03:58.994 --disable-cpumask-locks Disable CPU core lock files. 00:03:58.994 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:58.994 pollers in the app support interrupt mode) 00:03:58.994 -p, --main-core main (primary) core for DPDK 00:03:58.994 00:03:58.994 Configuration options: 00:03:58.994 -c, --config, --json JSON config file 00:03:58.994 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:58.994 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:58.994 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:58.994 --rpcs-allowed comma-separated list of permitted RPCS 00:03:58.994 --json-ignore-init-errors don't exit on invalid config entry 00:03:58.994 00:03:58.994 Memory options: 00:03:58.994 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:58.994 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:58.994 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:58.994 -R, --huge-unlink unlink huge files after initialization 00:03:58.994 -n, --mem-channels number of memory channels used for DPDK 00:03:58.994 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:58.994 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:58.994 --no-huge run without using hugepages 00:03:58.994 --enforce-numa enforce NUMA allocations from the correct socket 00:03:58.994 -i, --shm-id shared memory ID (optional) 00:03:58.994 -g, --single-file-segments force creating just one hugetlbfs file 00:03:58.994 00:03:58.994 PCI options: 00:03:58.994 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:58.994 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:58.994 -u, --no-pci disable PCI access 00:03:58.994 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:58.994 00:03:58.994 Log options: 00:03:58.994 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:58.994 --silence-noticelog disable notice level logging to stderr 00:03:58.994 00:03:58.994 Trace options: 00:03:58.994 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:58.994 setting 0 to disable trace (default 32768) 00:03:58.994 Tracepoints vary in size and can use more than one trace entry. 00:03:58.994 -e, --tpoint-group [:] 00:03:58.994 group_name - tracepoint group name for spdk trace buffers (thread, all). 
00:03:58.994 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:58.994 a tracepoint group. First tpoint inside a group can be enabled by 00:03:58.994 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:58.994 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:58.994 in /include/spdk_internal/trace_defs.h 00:03:58.994 00:03:58.994 Other options: 00:03:58.994 -h, --help show this usage 00:03:58.994 -v, --version print SPDK version 00:03:58.994 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:58.994 --env-context Opaque context for use of the env implementation 00:03:58.994 app_ut [options] 00:03:58.994 00:03:58.994 CPU options: 00:03:58.994 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:58.994 (like [0,1,10]) 00:03:58.994 --lcores lcore to CPU mapping list. The list is in the format: 00:03:58.994 [<,lcores[@CPUs]>...] 00:03:58.994 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:58.994 Within the group, '-' is used for range separator, 00:03:58.994 ',' is used for single number separator. 00:03:58.994 '( )' can be omitted for single element group, 00:03:58.994 '@' can be omitted if cpus and lcores have the same value 00:03:58.994 --disable-cpumask-locks Disable CPU core lock files. 00:03:58.994 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:58.994 pollers in the app support interrupt mode) 00:03:58.994 -p, --main-core main (primary) core for DPDK 00:03:58.994 00:03:58.994 Configuration options: 00:03:58.994 -c, --config, --json JSON config file 00:03:58.994 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:58.994 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:58.994 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:58.994 --rpcs-allowed comma-separated list of permitted RPCS 00:03:58.994 --json-ignore-init-errors don't exit on invalid config entry 00:03:58.994 00:03:58.994 Memory options: 00:03:58.994 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:58.994 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:58.994 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:58.994 -R, --huge-unlink unlink huge files after initialization 00:03:58.994 -n, --mem-channels number of memory channels used for DPDK 00:03:58.994 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:58.994 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:58.994 --no-huge run without using hugepages 00:03:58.994 --enforce-numa enforce NUMA allocations from the correct socket 00:03:58.994 -i, --shm-id shared memory ID (optional) 00:03:58.995 -g, --single-file-segments force creating just one hugetlbfs file 00:03:58.995 00:03:58.995 PCI options: 00:03:58.995 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:58.995 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:58.995 -u, --no-pci disable PCI access 00:03:58.995 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:58.995 00:03:58.995 Log options: 00:03:58.995 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:58.995 --silence-noticelog disable notice level logging to stderr 00:03:58.995 00:03:58.995 Trace options: 00:03:58.995 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:58.995 setting 0 to disable trace (default 32768) 00:03:58.995 Tracepoints vary in size and can use more than one trace entry. 00:03:58.995 -e, --tpoint-group [:] 00:03:58.995 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:58.995 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:58.995 a tracepoint group. First tpoint inside a group can be enabled by 00:03:58.995 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:58.995 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:58.995 in /include/spdk_internal/trace_defs.h 00:03:58.995 00:03:58.995 Other options: 00:03:58.995 -h, --help show this usage 00:03:58.995 -v, --version print SPDK version 00:03:58.995 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:58.995 --env-context Opaque context for use of the env implementation 00:03:58.995 app_ut [options] 00:03:58.995 00:03:58.995 CPU options: 00:03:58.995 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:58.995 (like [0,1,10]) 00:03:58.995 --lcores lcore to CPU mapping list. The list is in the format: 00:03:58.995 [<,lcores[@CPUs]>...] 00:03:58.995 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:58.995 Within the group, '-' is used for range separator, 00:03:58.995 ',' is used for single number separator. 00:03:58.995 '( )' can be omitted for single element group, 00:03:58.995 '@' can be omitted if cpus and lcores have the same value 00:03:58.995 --disable-cpumask-locks Disable CPU core lock files. 
00:03:58.995 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:58.995 pollers in the app support interrupt mode) 00:03:58.995 -p, --main-core main (primary) core for DPDK 00:03:58.995 00:03:58.995 Configuration options: 00:03:58.995 -c, --config, --json JSON config file 00:03:58.995 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:58.995 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:58.995 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:58.995 --rpcs-allowed comma-separated list of permitted RPCS 00:03:58.995 --json-ignore-init-errors don't exit on invalid config entry 00:03:58.995 00:03:58.995 Memory options: 00:03:58.995 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:58.995 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:58.995 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:58.995 -R, --huge-unlink unlink huge files after initialization 00:03:58.995 -n, --mem-channels number of memory channels used for DPDK 00:03:58.995 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:58.995 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:58.995 --no-huge run without using hugepages 00:03:58.995 --enforce-numa enforce NUMA allocations from the correct socket 00:03:58.995 -i, --shm-id shared memory ID (optional) 00:03:58.995 -g, --single-file-segments force creating just one hugetlbfs file 00:03:58.995 00:03:58.995 PCI options: 00:03:58.995 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:58.995 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:58.995 -u, --no-pci disable PCI access 00:03:58.995 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:58.995 00:03:58.995 Log options: 00:03:58.995 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:58.995 --silence-noticelog disable notice level logging to stderr 00:03:58.995 00:03:58.995 Trace options: 00:03:58.995 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:58.995 setting 0 to disable trace (default 32768) 00:03:58.995 Tracepoints vary in size and can use more than one trace entry. 00:03:58.995 -e, --tpoint-group [:] 00:03:58.995 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:58.995 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:58.995 a tracepoint group. First tpoint inside a group can be enabled by 00:03:58.995 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:58.995 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:58.995 in /include/spdk_internal/trace_defs.h 00:03:58.995 00:03:58.995 Other options: 00:03:58.995 -h, --help show this usage 00:03:58.995 -v, --version print SPDK version 00:03:58.995 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:58.995 --env-context Opaque context for use of the env implementation 00:03:58.995 passed 00:03:58.995 00:03:58.995 app_ut: unrecognized option `--test-long-opt' 00:03:58.995 [2024-07-12 14:52:24.758835] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1198:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
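The parse errors recorded in this app_ut run are produced on purpose: one invocation registers an app-specific 'c' option that collides with the generic SPDK getopt string, another passes an unknown long option (--test-long-opt), and a third supplies an invalid main-core value. As a rough sketch of the intended usage rather than the test code itself, an application would register only non-conflicting options and then hand control to the framework:

    #include <errno.h>
    #include <stdio.h>

    #include "spdk/event.h"

    static void
    start_fn(void *ctx)
    {
            /* application work happens here once the reactors are running */
            spdk_app_stop(0);
    }

    /* 'x' is a hypothetical app-specific flag; it must not reuse any of the
     * generic option letters listed in the usage text above. */
    static int
    parse_arg(int ch, char *arg)
    {
            return ch == 'x' ? 0 : -EINVAL;
    }

    static void
    usage(void)
    {
            printf(" -x                        example app-specific flag\n");
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts;
            int rc;

            spdk_app_opts_init(&opts, sizeof(opts));
            opts.name = "app_example";

            if (spdk_app_parse_args(argc, argv, &opts, "x", NULL, parse_arg, usage) !=
                SPDK_APP_PARSE_ARGS_SUCCESS) {
                    return 1;
            }

            rc = spdk_app_start(&opts, start_fn, NULL);
            spdk_app_fini();
            return rc;
    }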
00:03:58.995 [2024-07-12 14:52:24.759089] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1381:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:58.995 [2024-07-12 14:52:24.759195] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1283:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:58.995 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.995 suites 1 1 n/a 0 0 00:03:58.995 tests 1 1 1 0 0 00:03:58.995 asserts 8 8 8 0 n/a 00:03:58.995 00:03:58.995 Elapsed time = 0.000 seconds 00:03:58.995 14:52:24 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:58.995 00:03:58.995 00:03:58.995 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.995 http://cunit.sourceforge.net/ 00:03:58.995 00:03:58.995 00:03:58.995 Suite: app_suite 00:03:58.995 Test: test_create_reactor ...passed 00:03:58.995 Test: test_init_reactors ...passed 00:03:58.995 Test: test_event_call ...passed 00:03:58.995 Test: test_schedule_thread ...passed 00:03:58.995 Test: test_reschedule_thread ...passed 00:03:58.995 Test: test_bind_thread ...passed 00:03:58.995 Test: test_for_each_reactor ...passed 00:03:58.995 Test: test_reactor_stats ...passed 00:03:58.995 Test: test_scheduler ...passed 00:03:58.995 Test: test_governor ...passed 00:03:58.995 00:03:58.995 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.995 suites 1 1 n/a 0 0 00:03:58.995 tests 10 10 10 0 0 00:03:58.995 asserts 336 336 336 0 n/a 00:03:58.995 00:03:58.995 Elapsed time = 0.000 seconds 00:03:58.995 00:03:58.995 real 0m0.015s 00:03:58.995 user 0m0.001s 00:03:58.995 sys 0m0.012s 00:03:58.995 14:52:24 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.995 ************************************ 00:03:58.995 END TEST unittest_event 00:03:58.995 ************************************ 00:03:58.995 14:52:24 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:58.995 14:52:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:58.995 14:52:24 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:58.995 14:52:24 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:58.995 14:52:24 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:58.995 14:52:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.995 14:52:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.995 14:52:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 START TEST unittest_accel 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:59.255 00:03:59.255 00:03:59.255 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.255 http://cunit.sourceforge.net/ 00:03:59.255 00:03:59.255 00:03:59.255 Suite: accel_sequence 00:03:59.255 Test: test_sequence_fill_copy ...passed 00:03:59.255 Test: test_sequence_abort ...passed 00:03:59.255 Test: test_sequence_append_error ...passed 00:03:59.255 Test: test_sequence_completion_error ...[2024-07-12 14:52:24.820651] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x290a2a2ced40 00:03:59.255 passed 00:03:59.255 Test: 
test_sequence_decompress ...[2024-07-12 14:52:24.820929] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x290a2a2ced40 00:03:59.255 [2024-07-12 14:52:24.820954] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x290a2a2ced40 00:03:59.255 [2024-07-12 14:52:24.820978] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x290a2a2ced40 00:03:59.255 passed 00:03:59.255 Test: test_sequence_reverse ...passed 00:03:59.255 Test: test_sequence_copy_elision ...passed 00:03:59.255 Test: test_sequence_accel_buffers ...passed 00:03:59.255 Test: test_sequence_memory_domain ...[2024-07-12 14:52:24.823274] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1748:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:59.255 [2024-07-12 14:52:24.823329] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1787:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:59.255 passed 00:03:59.255 Test: test_sequence_module_memory_domain ...passed 00:03:59.255 Test: test_sequence_crypto ...passed 00:03:59.255 Test: test_sequence_driver ...[2024-07-12 14:52:24.824620] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1895:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x290a2a2cec80 using driver: ut 00:03:59.255 [2024-07-12 14:52:24.824665] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x290a2a2cec80 through driver: ut 00:03:59.255 passed 00:03:59.255 Test: test_sequence_same_iovs ...passed 00:03:59.255 Test: test_sequence_crc32 ...passed 00:03:59.255 Suite: accel 00:03:59.255 Test: test_spdk_accel_task_complete ...passed 00:03:59.255 Test: test_get_task ...passed 00:03:59.255 Test: test_spdk_accel_submit_copy ...passed 00:03:59.255 Test: test_spdk_accel_submit_dualcast ...passed 00:03:59.255 Test: test_spdk_accel_submit_compare ...passed 00:03:59.255 Test: test_spdk_accel_submit_fill ...passed 00:03:59.255 Test: test_spdk_accel_submit_crc32c ...passed 00:03:59.255 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:59.255 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:59.255 Test: test_spdk_accel_submit_xor ...passed 00:03:59.255 Test: test_spdk_accel_module_find_by_name ...passed 00:03:59.255 Test: test_spdk_accel_module_register ...passed 00:03:59.255 00:03:59.255 [2024-07-12 14:52:24.825240] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:59.255 [2024-07-12 14:52:24.825251] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:59.255 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.255 suites 2 2 n/a 0 0 00:03:59.255 tests 26 26 26 0 0 00:03:59.255 asserts 830 830 830 0 n/a 00:03:59.255 00:03:59.255 Elapsed time = 0.000 seconds 00:03:59.255 00:03:59.255 real 0m0.014s 00:03:59.255 user 0m0.007s 00:03:59.255 sys 0m0.008s 00:03:59.255 14:52:24 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.255 14:52:24 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 END TEST unittest_accel 00:03:59.255 
************************************ 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.255 14:52:24 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 START TEST unittest_ioat 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:59.255 00:03:59.255 00:03:59.255 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.255 http://cunit.sourceforge.net/ 00:03:59.255 00:03:59.255 00:03:59.255 Suite: ioat 00:03:59.255 Test: ioat_state_check ...passed 00:03:59.255 00:03:59.255 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.255 suites 1 1 n/a 0 0 00:03:59.255 tests 1 1 1 0 0 00:03:59.255 asserts 32 32 32 0 n/a 00:03:59.255 00:03:59.255 Elapsed time = 0.000 seconds 00:03:59.255 00:03:59.255 real 0m0.005s 00:03:59.255 user 0m0.005s 00:03:59.255 sys 0m0.004s 00:03:59.255 14:52:24 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.255 ************************************ 00:03:59.255 END TEST unittest_ioat 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.255 14:52:24 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:59.255 14:52:24 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 START TEST unittest_idxd_user 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:59.255 00:03:59.255 00:03:59.255 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.255 http://cunit.sourceforge.net/ 00:03:59.255 00:03:59.255 00:03:59.255 Suite: idxd_user 00:03:59.255 Test: test_idxd_wait_cmd ...[2024-07-12 14:52:24.917632] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:59.255 passed 00:03:59.255 Test: test_idxd_reset_dev ...passed 00:03:59.255 Test: test_idxd_group_config ...passed 00:03:59.255 Test: test_idxd_wq_config ...passed 00:03:59.255 00:03:59.255 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.255 suites 1 1 n/a 0 0 00:03:59.255 tests 4 4 4 0 0 00:03:59.255 asserts 20 20 20 0 n/a 00:03:59.255 00:03:59.255 Elapsed time = 0.000 seconds 00:03:59.255 [2024-07-12 14:52:24.917855] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:59.255 
[2024-07-12 14:52:24.917900] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:59.255 [2024-07-12 14:52:24.917914] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:59.255 00:03:59.255 real 0m0.005s 00:03:59.255 user 0m0.005s 00:03:59.255 sys 0m0.004s 00:03:59.255 14:52:24 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.255 14:52:24 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 END TEST unittest_idxd_user 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.255 14:52:24 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.255 14:52:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 START TEST unittest_iscsi 00:03:59.255 ************************************ 00:03:59.255 14:52:24 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:03:59.255 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:59.255 00:03:59.255 00:03:59.255 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.255 http://cunit.sourceforge.net/ 00:03:59.255 00:03:59.255 00:03:59.255 Suite: conn_suite 00:03:59.255 Test: read_task_split_in_order_case ...passed 00:03:59.255 Test: read_task_split_reverse_order_case ...passed 00:03:59.255 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:59.255 Test: process_non_read_task_completion_test ...passed 00:03:59.255 Test: free_tasks_on_connection ...passed 00:03:59.255 Test: free_tasks_with_queued_datain ...passed 00:03:59.256 Test: abort_queued_datain_task_test ...passed 00:03:59.256 Test: abort_queued_datain_tasks_test ...passed 00:03:59.256 00:03:59.256 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.256 suites 1 1 n/a 0 0 00:03:59.256 tests 8 8 8 0 0 00:03:59.256 asserts 230 230 230 0 n/a 00:03:59.256 00:03:59.256 Elapsed time = 0.000 seconds 00:03:59.256 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:59.256 00:03:59.256 00:03:59.256 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.256 http://cunit.sourceforge.net/ 00:03:59.256 00:03:59.256 00:03:59.256 Suite: iscsi_suite 00:03:59.256 Test: param_negotiation_test ...passed 00:03:59.256 Test: list_negotiation_test ...passed 00:03:59.256 Test: parse_valid_test ...passed 00:03:59.256 Test: parse_invalid_test ...[2024-07-12 14:52:24.968607] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:59.256 [2024-07-12 14:52:24.968878] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:59.256 [2024-07-12 14:52:24.968903] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:59.256 [2024-07-12 14:52:24.968940] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:59.256 [2024-07-12 
14:52:24.968965] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:59.256 [2024-07-12 14:52:24.968982] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:59.256 [2024-07-12 14:52:24.968999] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:59.256 passed 00:03:59.256 00:03:59.256 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.256 suites 1 1 n/a 0 0 00:03:59.256 tests 4 4 4 0 0 00:03:59.256 asserts 161 161 161 0 n/a 00:03:59.256 00:03:59.256 Elapsed time = 0.000 seconds 00:03:59.256 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:59.256 00:03:59.256 00:03:59.256 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.256 http://cunit.sourceforge.net/ 00:03:59.256 00:03:59.256 00:03:59.256 Suite: iscsi_target_node_suite 00:03:59.256 Test: add_lun_test_cases ...[2024-07-12 14:52:24.974697] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:59.256 passed 00:03:59.256 Test: allow_any_allowed ...passed 00:03:59.256 Test: allow_ipv6_allowed ...[2024-07-12 14:52:24.974918] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:59.256 [2024-07-12 14:52:24.974938] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:59.256 [2024-07-12 14:52:24.974952] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:59.256 [2024-07-12 14:52:24.974965] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:59.256 passed 00:03:59.256 Test: allow_ipv6_denied ...passed 00:03:59.256 Test: allow_ipv6_invalid ...passed 00:03:59.256 Test: allow_ipv4_allowed ...passed 00:03:59.256 Test: allow_ipv4_denied ...passed 00:03:59.256 Test: allow_ipv4_invalid ...passed 00:03:59.256 Test: node_access_allowed ...passed 00:03:59.256 Test: node_access_denied_by_empty_netmask ...passed 00:03:59.256 Test: node_access_multi_initiator_groups_cases ...passed 00:03:59.256 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:59.256 Test: chap_param_test_cases ...[2024-07-12 14:52:24.975112] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:59.256 [2024-07-12 14:52:24.975136] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:59.256 [2024-07-12 14:52:24.975150] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:59.256 [2024-07-12 14:52:24.975164] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:59.256 [2024-07-12 14:52:24.975177] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:59.256 passed 00:03:59.256 00:03:59.256 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.256 suites 1 1 n/a 0 0 00:03:59.256 tests 13 13 13 0 0 00:03:59.256 asserts 50 50 50 0 n/a 
00:03:59.256 00:03:59.256 Elapsed time = 0.000 seconds 00:03:59.256 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:59.256 00:03:59.256 00:03:59.256 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.256 http://cunit.sourceforge.net/ 00:03:59.256 00:03:59.256 00:03:59.256 Suite: iscsi_suite 00:03:59.256 Test: op_login_check_target_test ...passed 00:03:59.256 Test: op_login_session_normal_test ...[2024-07-12 14:52:24.981803] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:03:59.256 [2024-07-12 14:52:24.981955] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:59.256 [2024-07-12 14:52:24.981967] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:59.256 [2024-07-12 14:52:24.982135] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:59.256 [2024-07-12 14:52:24.982159] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:59.256 [2024-07-12 14:52:24.982169] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:59.256 [2024-07-12 14:52:24.982193] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:59.256 [2024-07-12 14:52:24.982202] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:59.256 passed 00:03:59.256 Test: maxburstlength_test ...[2024-07-12 14:52:24.982378] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:59.256 passed 00:03:59.256 Test: underflow_for_read_transfer_test ...passed 00:03:59.256 Test: underflow_for_zero_read_transfer_test ...passed 00:03:59.256 Test: underflow_for_request_sense_test ...passed 00:03:59.256 Test: underflow_for_check_condition_test ...passed 00:03:59.256 Test: add_transfer_task_test ...passed 00:03:59.256 Test: get_transfer_task_test ...passed 00:03:59.256 Test: del_transfer_task_test ...passed 00:03:59.256 Test: clear_all_transfer_tasks_test ...[2024-07-12 14:52:24.982391] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:59.256 passed 00:03:59.256 Test: build_iovs_test ...passed 00:03:59.256 Test: build_iovs_with_md_test ...passed 00:03:59.256 Test: pdu_hdr_op_login_test ...passed 00:03:59.256 Test: pdu_hdr_op_text_test ...[2024-07-12 14:52:24.982692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:59.256 [2024-07-12 14:52:24.982708] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:59.256 [2024-07-12 14:52:24.982717] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:59.256 [2024-07-12 14:52:24.982729] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate 
data len(=68) 00:03:59.256 [2024-07-12 14:52:24.982738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:59.256 passed 00:03:59.256 Test: pdu_hdr_op_logout_test ...passed 00:03:59.256 Test: pdu_hdr_op_scsi_test ...passed 00:03:59.256 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-12 14:52:24.982849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:59.256 [2024-07-12 14:52:24.982863] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:59.256 [2024-07-12 14:52:24.982875] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:59.256 [2024-07-12 14:52:24.982884] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:59.256 [2024-07-12 14:52:24.982892] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:59.256 [2024-07-12 14:52:24.982902] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:59.256 [2024-07-12 14:52:24.982910] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:59.256 [2024-07-12 14:52:24.982920] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:59.256 [2024-07-12 14:52:24.982931] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:59.256 [2024-07-12 14:52:24.983072] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:59.256 passed 00:03:59.256 Test: pdu_hdr_op_nopout_test ...[2024-07-12 14:52:24.983216] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:59.256 [2024-07-12 14:52:24.983226] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:59.256 [2024-07-12 14:52:24.983339] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:59.256 [2024-07-12 14:52:24.983348] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:59.256 passed 00:03:59.256 Test: pdu_hdr_op_data_test ...[2024-07-12 14:52:24.983358] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:59.256 [2024-07-12 14:52:24.983368] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:59.256 [2024-07-12 14:52:24.983376] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:59.256 passed 00:03:59.256 Test: empty_text_with_cbit_test ...passed 00:03:59.257 Test: pdu_payload_read_test ...[2024-07-12 14:52:24.983384] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:59.257 [2024-07-12 14:52:24.983539] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:59.257 [2024-07-12 14:52:24.983548] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:59.257 [2024-07-12 14:52:24.983556] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:59.257 [2024-07-12 14:52:24.984209] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:59.257 passed 00:03:59.257 Test: data_out_pdu_sequence_test ...passed 00:03:59.257 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 24 24 24 0 0 00:03:59.257 asserts 150253 150253 150253 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.008 seconds 00:03:59.257 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: init_grp_suite 00:03:59.257 Test: create_initiator_group_success_case ...passed 00:03:59.257 Test: find_initiator_group_success_case ...passed 00:03:59.257 Test: register_initiator_group_twice_case ...passed 00:03:59.257 Test: add_initiator_name_success_case ...passed 00:03:59.257 Test: add_initiator_name_fail_case ...[2024-07-12 14:52:24.991146] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:59.257 passed 00:03:59.257 Test: delete_all_initiator_names_success_case ...passed 00:03:59.257 Test: add_netmask_success_case ...passed 00:03:59.257 Test: add_netmask_fail_case ...[2024-07-12 14:52:24.991274] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:59.257 passed 00:03:59.257 Test: delete_all_netmasks_success_case ...passed 00:03:59.257 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:59.257 Test: netmask_overwrite_all_to_any_case ...passed 00:03:59.257 Test: add_delete_initiator_names_case ...passed 00:03:59.257 Test: add_duplicated_initiator_names_case ...passed 00:03:59.257 Test: delete_nonexisting_initiator_names_case ...passed 00:03:59.257 Test: add_delete_netmasks_case ...passed 00:03:59.257 Test: add_duplicated_netmasks_case ...passed 00:03:59.257 Test: delete_nonexisting_netmasks_case ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 17 17 17 0 0 00:03:59.257 asserts 108 108 108 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.000 seconds 00:03:59.257 14:52:24 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: portal_grp_suite 00:03:59.257 Test: portal_create_ipv4_normal_case ...passed 00:03:59.257 Test: 
portal_create_ipv6_normal_case ...passed 00:03:59.257 Test: portal_create_ipv4_wildcard_case ...passed 00:03:59.257 Test: portal_create_ipv6_wildcard_case ...passed 00:03:59.257 Test: portal_create_twice_case ...[2024-07-12 14:52:24.995869] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:59.257 passed 00:03:59.257 Test: portal_grp_register_unregister_case ...passed 00:03:59.257 Test: portal_grp_register_twice_case ...passed 00:03:59.257 Test: portal_grp_add_delete_case ...passed 00:03:59.257 Test: portal_grp_add_delete_twice_case ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 9 9 9 0 0 00:03:59.257 asserts 44 44 44 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.000 seconds 00:03:59.257 00:03:59.257 real 0m0.041s 00:03:59.257 user 0m0.022s 00:03:59.257 sys 0m0.022s 00:03:59.257 14:52:24 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.257 14:52:24 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:59.257 ************************************ 00:03:59.257 END TEST unittest_iscsi 00:03:59.257 ************************************ 00:03:59.257 14:52:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.257 14:52:25 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:59.257 14:52:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.257 14:52:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.257 14:52:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.257 ************************************ 00:03:59.257 START TEST unittest_json 00:03:59.257 ************************************ 00:03:59.257 14:52:25 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:03:59.257 14:52:25 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: json 00:03:59.257 Test: test_parse_literal ...passed 00:03:59.257 Test: test_parse_string_simple ...passed 00:03:59.257 Test: test_parse_string_control_chars ...passed 00:03:59.257 Test: test_parse_string_utf8 ...passed 00:03:59.257 Test: test_parse_string_escapes_twochar ...passed 00:03:59.257 Test: test_parse_string_escapes_unicode ...passed 00:03:59.257 Test: test_parse_number ...passed 00:03:59.257 Test: test_parse_array ...passed 00:03:59.257 Test: test_parse_object ...passed 00:03:59.257 Test: test_parse_nesting ...passed 00:03:59.257 Test: test_parse_comment ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 11 11 11 0 0 00:03:59.257 asserts 1516 1516 1516 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.000 seconds 00:03:59.257 14:52:25 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: json 00:03:59.257 Test: test_strequal ...passed 00:03:59.257 Test: test_num_to_uint16 ...passed 00:03:59.257 Test: test_num_to_int32 
...passed 00:03:59.257 Test: test_num_to_uint64 ...passed 00:03:59.257 Test: test_decode_object ...passed 00:03:59.257 Test: test_decode_array ...passed 00:03:59.257 Test: test_decode_bool ...passed 00:03:59.257 Test: test_decode_uint16 ...passed 00:03:59.257 Test: test_decode_int32 ...passed 00:03:59.257 Test: test_decode_uint32 ...passed 00:03:59.257 Test: test_decode_uint64 ...passed 00:03:59.257 Test: test_decode_string ...passed 00:03:59.257 Test: test_decode_uuid ...passed 00:03:59.257 Test: test_find ...passed 00:03:59.257 Test: test_find_array ...passed 00:03:59.257 Test: test_iterating ...passed 00:03:59.257 Test: test_free_object ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 17 17 17 0 0 00:03:59.257 asserts 236 236 236 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.000 seconds 00:03:59.257 14:52:25 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: json 00:03:59.257 Test: test_write_literal ...passed 00:03:59.257 Test: test_write_string_simple ...passed 00:03:59.257 Test: test_write_string_escapes ...passed 00:03:59.257 Test: test_write_string_utf16le ...passed 00:03:59.257 Test: test_write_number_int32 ...passed 00:03:59.257 Test: test_write_number_uint32 ...passed 00:03:59.257 Test: test_write_number_uint128 ...passed 00:03:59.257 Test: test_write_string_number_uint128 ...passed 00:03:59.257 Test: test_write_number_int64 ...passed 00:03:59.257 Test: test_write_number_uint64 ...passed 00:03:59.257 Test: test_write_number_double ...passed 00:03:59.257 Test: test_write_uuid ...passed 00:03:59.257 Test: test_write_array ...passed 00:03:59.257 Test: test_write_object ...passed 00:03:59.257 Test: test_write_nesting ...passed 00:03:59.257 Test: test_write_val ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.257 tests 16 16 16 0 0 00:03:59.257 asserts 918 918 918 0 n/a 00:03:59.257 00:03:59.257 Elapsed time = 0.000 seconds 00:03:59.257 14:52:25 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:59.257 00:03:59.257 00:03:59.257 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.257 http://cunit.sourceforge.net/ 00:03:59.257 00:03:59.257 00:03:59.257 Suite: jsonrpc 00:03:59.257 Test: test_parse_request ...passed 00:03:59.257 Test: test_parse_request_streaming ...passed 00:03:59.257 00:03:59.257 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.257 suites 1 1 n/a 0 0 00:03:59.258 tests 2 2 2 0 0 00:03:59.258 asserts 289 289 289 0 n/a 00:03:59.258 00:03:59.258 Elapsed time = 0.000 seconds 00:03:59.258 00:03:59.258 real 0m0.026s 00:03:59.258 user 0m0.007s 00:03:59.258 sys 0m0.018s 00:03:59.258 14:52:25 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.258 14:52:25 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.258 ************************************ 00:03:59.258 END TEST unittest_json 00:03:59.258 ************************************ 00:03:59.515 14:52:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.515 14:52:25 unittest -- unit/unittest.sh@246 -- # run_test 
unittest_rpc unittest_rpc 00:03:59.515 14:52:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.515 14:52:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.515 14:52:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.515 ************************************ 00:03:59.515 START TEST unittest_rpc 00:03:59.515 ************************************ 00:03:59.515 14:52:25 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:03:59.515 14:52:25 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:59.515 00:03:59.515 00:03:59.516 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.516 http://cunit.sourceforge.net/ 00:03:59.516 00:03:59.516 00:03:59.516 Suite: rpc 00:03:59.516 Test: test_jsonrpc_handler ...passed 00:03:59.516 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:59.516 Test: test_rpc_get_methods ...[2024-07-12 14:52:25.110299] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:59.516 passed 00:03:59.516 Test: test_rpc_spdk_get_version ...passed 00:03:59.516 Test: test_spdk_rpc_listen_close ...passed 00:03:59.516 Test: test_rpc_run_multiple_servers ...passed 00:03:59.516 00:03:59.516 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.516 suites 1 1 n/a 0 0 00:03:59.516 tests 6 6 6 0 0 00:03:59.516 asserts 23 23 23 0 n/a 00:03:59.516 00:03:59.516 Elapsed time = 0.000 seconds 00:03:59.516 00:03:59.516 real 0m0.006s 00:03:59.516 user 0m0.006s 00:03:59.516 sys 0m0.000s 00:03:59.516 14:52:25 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.516 14:52:25 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.516 ************************************ 00:03:59.516 END TEST unittest_rpc 00:03:59.516 ************************************ 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.516 14:52:25 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.516 ************************************ 00:03:59.516 START TEST unittest_notify 00:03:59.516 ************************************ 00:03:59.516 14:52:25 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:59.516 00:03:59.516 00:03:59.516 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.516 http://cunit.sourceforge.net/ 00:03:59.516 00:03:59.516 00:03:59.516 Suite: app_suite 00:03:59.516 Test: notify ...passed 00:03:59.516 00:03:59.516 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.516 suites 1 1 n/a 0 0 00:03:59.516 tests 1 1 1 0 0 00:03:59.516 asserts 13 13 13 0 n/a 00:03:59.516 00:03:59.516 Elapsed time = 0.000 seconds 00:03:59.516 00:03:59.516 real 0m0.006s 00:03:59.516 user 0m0.000s 00:03:59.516 sys 0m0.004s 00:03:59.516 14:52:25 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.516 14:52:25 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:59.516 ************************************ 00:03:59.516 END TEST unittest_notify 
00:03:59.516 ************************************ 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:59.516 14:52:25 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.516 14:52:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:59.516 ************************************ 00:03:59.516 START TEST unittest_nvme 00:03:59.516 ************************************ 00:03:59.516 14:52:25 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:03:59.516 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:59.516 00:03:59.516 00:03:59.516 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.516 http://cunit.sourceforge.net/ 00:03:59.516 00:03:59.516 00:03:59.516 Suite: nvme 00:03:59.516 Test: test_opc_data_transfer ...passed 00:03:59.516 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:59.516 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:59.516 Test: test_trid_parse_and_compare ...[2024-07-12 14:52:25.209147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:59.516 [2024-07-12 14:52:25.209426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:59.516 [2024-07-12 14:52:25.209460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:59.516 [2024-07-12 14:52:25.209476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:59.516 passed 00:03:59.516 Test: test_trid_trtype_str ...passed 00:03:59.516 Test: test_trid_adrfam_str ...passed 00:03:59.516 Test: test_nvme_ctrlr_probe ...passed 00:03:59.516 Test: test_spdk_nvme_probe ...[2024-07-12 14:52:25.209490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:03:59.516 [2024-07-12 14:52:25.209505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:59.516 [2024-07-12 14:52:25.209657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:59.516 [2024-07-12 14:52:25.209689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:59.516 passed 00:03:59.516 Test: test_spdk_nvme_connect ...[2024-07-12 14:52:25.209711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:59.516 [2024-07-12 14:52:25.209729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:59.516 [2024-07-12 14:52:25.209743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:59.516 [2024-07-12 14:52:25.209774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:59.516 passed 00:03:59.516 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:59.516 Test: test_nvme_init_controllers ...passed 00:03:59.516 Test: test_nvme_driver_init ...[2024-07-12 
14:52:25.209894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:59.516 [2024-07-12 14:52:25.209934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:59.516 [2024-07-12 14:52:25.209950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:59.516 [2024-07-12 14:52:25.209970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:59.516 [2024-07-12 14:52:25.209995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:59.516 [2024-07-12 14:52:25.210010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:59.516 [2024-07-12 14:52:25.325031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:59.516 passed 00:03:59.516 Test: test_spdk_nvme_detach ...passed 00:03:59.516 Test: test_nvme_completion_poll_cb ...passed 00:03:59.516 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:59.516 Test: test_nvme_allocate_request_null ...passed 00:03:59.516 Test: test_nvme_allocate_request ...passed 00:03:59.516 Test: test_nvme_free_request ...passed 00:03:59.516 Test: test_nvme_allocate_request_user_copy ...passed 00:03:59.516 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:59.516 Test: test_nvme_request_check_timeout ...passed 00:03:59.516 Test: test_nvme_wait_for_completion ...passed 00:03:59.516 Test: test_spdk_nvme_parse_func ...passed 00:03:59.516 Test: test_spdk_nvme_detach_async ...passed 00:03:59.516 Test: test_nvme_parse_addr ...passed 00:03:59.516 00:03:59.516 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.516 suites 1 1 n/a 0 0 00:03:59.516 tests 25 25 25 0 0 00:03:59.516 asserts 326 326 326 0 n/a 00:03:59.516 00:03:59.516 Elapsed time = 0.000 seconds[2024-07-12 14:52:25.325299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:59.516 00:03:59.776 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:59.776 00:03:59.776 00:03:59.776 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.776 http://cunit.sourceforge.net/ 00:03:59.776 00:03:59.776 00:03:59.776 Suite: nvme_ctrlr 00:03:59.776 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-12 14:52:25.332042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-12 14:52:25.333457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-12 14:52:25.334615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-12 14:52:25.335781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] 
admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-12 14:52:25.336967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.338108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 14:52:25.339272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 14:52:25.340410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-12 14:52:25.342693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.344933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 14:52:25.346087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:59.776 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-12 14:52:25.348366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.349497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 14:52:25.351746] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:59.776 Test: test_nvme_ctrlr_init_delay ...[2024-07-12 14:52:25.354013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_alloc_io_qpair_rr_1 ...[2024-07-12 14:52:25.355169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.355214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:59.776 [2024-07-12 14:52:25.355235] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:59.776 [2024-07-12 14:52:25.355250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:59.776 passed 00:03:59.776 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:59.776 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:59.776 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-12 14:52:25.355262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:59.776 passed 00:03:59.776 Test: test_alloc_io_qpair_wrr_2 
...passed 00:03:59.776 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:03:59.776 Test: test_nvme_ctrlr_fail ...passed 00:03:59.776 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-12 14:52:25.355307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.355336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.355355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:59.776 [2024-07-12 14:52:25.355384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5003:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:59.776 [2024-07-12 14:52:25.355399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5040:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:59.776 [2024-07-12 14:52:25.355412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5080:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:59.776 [2024-07-12 14:52:25.355426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5040:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:59.776 [2024-07-12 14:52:25.355443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:03:59.776 [2024-07-12 14:52:25.355467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:59.776 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-12 14:52:25.356638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:59.776 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:59.776 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:59.776 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-12 14:52:25.389606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-12 14:52:25.396260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-12 14:52:25.397416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 [2024-07-12 14:52:25.397467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 
00:03:59.776 passed 00:03:59.776 Test: test_alloc_io_qpair_fail ...[2024-07-12 14:52:25.398592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_state ...[2024-07-12 14:52:25.398633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-12 14:52:25.398671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:03:59.776 [2024-07-12 14:52:25.398689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-12 14:52:25.401298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-12 14:52:25.407442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_reset ...[2024-07-12 14:52:25.408594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_aer_callback ...[2024-07-12 14:52:25.408653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-12 14:52:25.409795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.776 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:59.776 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-12 14:52:25.411004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.776 passed 00:03:59.777 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:59.777 Test: test_nvme_ctrlr_ana_resize ...[2024-07-12 14:52:25.412151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.777 passed 00:03:59.777 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:59.777 Test: test_nvme_transport_ctrlr_ready ...passed 00:03:59.777 Test: test_nvme_ctrlr_disable ...[2024-07-12 14:52:25.413301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step 
failed: rc -1 00:03:59.777 [2024-07-12 14:52:25.413319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:03:59.777 [2024-07-12 14:52:25.413332] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:59.777 passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 44 44 44 0 0 00:03:59.777 asserts 10434 10434 10434 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.031 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:59.777 00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme_ctrlr_cmd 00:03:59.777 Test: test_get_log_pages ...passed 00:03:59.777 Test: test_set_feature_cmd ...passed 00:03:59.777 Test: test_set_feature_ns_cmd ...passed 00:03:59.777 Test: test_get_feature_cmd ...passed 00:03:59.777 Test: test_get_feature_ns_cmd ...passed 00:03:59.777 Test: test_abort_cmd ...passed 00:03:59.777 Test: test_set_host_id_cmds ...[2024-07-12 14:52:25.423334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:59.777 passed 00:03:59.777 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:59.777 Test: test_io_raw_cmd ...passed 00:03:59.777 Test: test_io_raw_cmd_with_md ...passed 00:03:59.777 Test: test_namespace_attach ...passed 00:03:59.777 Test: test_namespace_detach ...passed 00:03:59.777 Test: test_namespace_create ...passed 00:03:59.777 Test: test_namespace_delete ...passed 00:03:59.777 Test: test_doorbell_buffer_config ...passed 00:03:59.777 Test: test_format_nvme ...passed 00:03:59.777 Test: test_fw_commit ...passed 00:03:59.777 Test: test_fw_image_download ...passed 00:03:59.777 Test: test_sanitize ...passed 00:03:59.777 Test: test_directive ...passed 00:03:59.777 Test: test_nvme_request_add_abort ...passed 00:03:59.777 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:59.777 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:59.777 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 24 24 24 0 0 00:03:59.777 asserts 198 198 198 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.000 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:59.777 00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme_ctrlr_cmd 00:03:59.777 Test: test_geometry_cmd ...passed 00:03:59.777 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 2 2 2 0 0 00:03:59.777 asserts 7 7 7 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.000 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:59.777 
00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme 00:03:59.777 Test: test_nvme_ns_construct ...passed 00:03:59.777 Test: test_nvme_ns_uuid ...passed 00:03:59.777 Test: test_nvme_ns_csi ...passed 00:03:59.777 Test: test_nvme_ns_data ...passed 00:03:59.777 Test: test_nvme_ns_set_identify_data ...passed 00:03:59.777 Test: test_spdk_nvme_ns_get_values ...passed 00:03:59.777 Test: test_spdk_nvme_ns_is_active ...passed 00:03:59.777 Test: spdk_nvme_ns_supports ...passed 00:03:59.777 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:59.777 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:59.777 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:59.777 Test: test_nvme_ns_find_id_desc ...passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 12 12 12 0 0 00:03:59.777 asserts 95 95 95 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.000 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:59.777 00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme_ns_cmd 00:03:59.777 Test: split_test ...passed 00:03:59.777 Test: split_test2 ...passed 00:03:59.777 Test: split_test3 ...passed 00:03:59.777 Test: split_test4 ...passed 00:03:59.777 Test: test_nvme_ns_cmd_flush ...passed 00:03:59.777 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:59.777 Test: test_nvme_ns_cmd_copy ...passed 00:03:59.777 Test: test_io_flags ...[2024-07-12 14:52:25.439634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:59.777 passed 00:03:59.777 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:59.777 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:59.777 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:59.777 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:59.777 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:59.777 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:59.777 Test: test_cmd_child_request ...passed 00:03:59.777 Test: test_nvme_ns_cmd_readv ...passed 00:03:59.777 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:59.777 Test: test_nvme_ns_cmd_writev ...passed 00:03:59.777 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:59.777 Test: test_nvme_ns_cmd_zone_append_with_md ...[2024-07-12 14:52:25.440001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:59.777 passed 00:03:59.777 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:59.777 Test: test_nvme_ns_cmd_comparev ...passed 00:03:59.777 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:59.777 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:59.777 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:59.777 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:59.777 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:59.777 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:59.777 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-12 14:52:25.440155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: 
Invalid io_flags 0xffff000f 00:03:59.777 [2024-07-12 14:52:25.440183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:59.777 passed 00:03:59.777 Test: test_nvme_ns_cmd_verify ...passed 00:03:59.777 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:59.777 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 32 32 32 0 0 00:03:59.777 asserts 550 550 550 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.000 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:59.777 00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme_ns_cmd 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:59.777 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:59.777 00:03:59.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.777 suites 1 1 n/a 0 0 00:03:59.777 tests 12 12 12 0 0 00:03:59.777 asserts 123 123 123 0 n/a 00:03:59.777 00:03:59.777 Elapsed time = 0.000 seconds 00:03:59.777 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:59.777 00:03:59.777 00:03:59.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.777 http://cunit.sourceforge.net/ 00:03:59.777 00:03:59.777 00:03:59.777 Suite: nvme_qpair 00:03:59.777 Test: test3 ...passed 00:03:59.777 Test: test_ctrlr_failed ...passed 00:03:59.777 Test: struct_packing ...passed 00:03:59.777 Test: test_nvme_qpair_process_completions ...[2024-07-12 14:52:25.453166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:59.778 [2024-07-12 14:52:25.453355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:59.778 passed 00:03:59.778 Test: test_nvme_completion_is_retry ...passed 00:03:59.778 Test: test_get_status_string ...passed 00:03:59.778 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:59.778 Test: test_nvme_qpair_submit_request ...passed 00:03:59.778 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:59.778 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:59.778 Test: test_nvme_qpair_init_deinit ...passed 00:03:59.778 Test: test_nvme_get_sgl_print_info ...passed 00:03:59.778 00:03:59.778 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.778 suites 1 1 n/a 0 0 
00:03:59.778 tests 12 12 12 0 0 00:03:59.778 asserts 154 154 154 0 n/a 00:03:59.778 00:03:59.778 Elapsed time = 0.000 seconds 00:03:59.778 [2024-07-12 14:52:25.453416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:59.778 [2024-07-12 14:52:25.453433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:59.778 [2024-07-12 14:52:25.453485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:59.778 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:59.778 00:03:59.778 00:03:59.778 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.778 http://cunit.sourceforge.net/ 00:03:59.778 00:03:59.778 00:03:59.778 Suite: nvme_pcie 00:03:59.778 Test: test_prp_list_append ...[2024-07-12 14:52:25.457805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1207:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:59.778 passed 00:03:59.778 Test: test_nvme_pcie_hotplug_monitor ...passed 00:03:59.778 Test: test_shadow_doorbell_update ...passed 00:03:59.778 Test: test_build_contig_hw_sgl_request ...passed 00:03:59.778 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:59.778 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:59.778 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:59.778 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:59.778 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:03:59.778 00:03:59.778 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.778 suites 1 1 n/a 0 0 00:03:59.778 tests 14 14 14 0 0 00:03:59.778 asserts 235 235 235 0 n/a 00:03:59.778 00:03:59.778 Elapsed time = 0.000 seconds 00:03:59.778 [2024-07-12 14:52:25.457935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1236:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:59.778 [2024-07-12 14:52:25.457946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:59.778 [2024-07-12 14:52:25.457976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1220:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:59.778 [2024-07-12 14:52:25.457990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1220:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:59.778 [2024-07-12 14:52:25.458049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1207:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:59.778 [2024-07-12 14:52:25.458067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:03:59.778 [2024-07-12 14:52:25.458080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:59.778 [2024-07-12 14:52:25.458091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:59.778 [2024-07-12 14:52:25.458100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:59.778 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:59.778 00:03:59.778 00:03:59.778 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.778 http://cunit.sourceforge.net/ 00:03:59.778 00:03:59.778 00:03:59.778 Suite: nvme_ns_cmd 00:03:59.778 Test: nvme_poll_group_create_test ...passed 00:03:59.778 Test: nvme_poll_group_add_remove_test ...passed 00:03:59.778 Test: nvme_poll_group_process_completions ...passed 00:03:59.778 Test: nvme_poll_group_destroy_test ...passed 00:03:59.778 Test: nvme_poll_group_get_free_stats ...passed 00:03:59.778 00:03:59.778 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.778 suites 1 1 n/a 0 0 00:03:59.778 tests 5 5 5 0 0 00:03:59.778 asserts 75 75 75 0 n/a 00:03:59.778 00:03:59.778 Elapsed time = 0.000 seconds 00:03:59.778 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:59.778 00:03:59.778 00:03:59.778 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.778 http://cunit.sourceforge.net/ 00:03:59.778 00:03:59.778 00:03:59.778 Suite: nvme_quirks 00:03:59.778 Test: test_nvme_quirks_striping ...passed 00:03:59.778 00:03:59.778 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.778 suites 1 1 n/a 0 0 00:03:59.778 tests 1 1 1 0 0 00:03:59.778 asserts 5 5 5 0 n/a 00:03:59.778 00:03:59.778 Elapsed time = 0.000 seconds 00:03:59.778 14:52:25 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:59.778 00:03:59.778 00:03:59.778 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.778 http://cunit.sourceforge.net/ 00:03:59.778 00:03:59.778 00:03:59.778 Suite: nvme_tcp 00:03:59.778 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:59.778 Test: test_nvme_tcp_build_iovs ...passed 00:03:59.778 Test: test_nvme_tcp_build_sgl_request ...[2024-07-12 14:52:25.469681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8209e0f40, and the iovcnt=16, remaining_size=28672 00:03:59.778 passed 00:03:59.778 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:59.778 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:59.778 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:59.778 Test: test_nvme_tcp_req_get ...passed 00:03:59.778 Test: test_nvme_tcp_req_init ...passed 00:03:59.778 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:59.778 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:59.778 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:03:59.778 Test: test_nvme_tcp_alloc_reqs ...passed 00:03:59.778 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:03:59.778 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:03:59.778 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-12 14:52:25.469907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 
328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(6) to be set 00:03:59.778 [2024-07-12 14:52:25.469951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.469965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8209e2288 00:03:59.778 [2024-07-12 14:52:25.469976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:59.778 [2024-07-12 14:52:25.469985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.469995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:59.778 [2024-07-12 14:52:25.470004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:59.778 [2024-07-12 14:52:25.470023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:03:59.778 [2024-07-12 14:52:25.470105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:59.778 [2024-07-12 14:52:25.470116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:17.861 passed 00:04:17.861 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:04:17.861 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:04:17.861 Test: test_nvme_tcp_icresp_handle ...[2024-07-12 14:52:41.130983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:17.861 [2024-07-12 14:52:41.131112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8209e26c0): PDU Sequence Error 
00:04:17.861 [2024-07-12 14:52:41.131140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:04:17.861 [2024-07-12 14:52:41.131159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:04:17.861 [2024-07-12 14:52:41.131175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:04:17.861 [2024-07-12 14:52:41.131192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:04:17.861 [2024-07-12 14:52:41.131207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(5) to be set 00:04:17.861 passed 00:04:17.861 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:04:17.861 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:04:17.861 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:04:17.861 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:04:17.862 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-12 14:52:41.131224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e2af8 is same with the state(0) to be set 00:04:17.862 [2024-07-12 14:52:41.131245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8209e26c0): PDU Sequence Error 00:04:17.862 [2024-07-12 14:52:41.131293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8209e2af8 00:04:17.862 [2024-07-12 14:52:41.131362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8209e0850, errno=0, rc=0 00:04:17.862 [2024-07-12 14:52:41.131381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e0850 is same with the state(5) to be set 00:04:17.862 [2024-07-12 14:52:41.131398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209e0850 is same with the state(5) to be set 00:04:17.862 [2024-07-12 14:52:41.131479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8209e0850 (0): No error: 0 00:04:17.862 [2024-07-12 14:52:41.131497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8209e0850 (0): No error: 0 00:04:17.862 passed 00:04:17.862 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:04:17.862 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-12 14:52:41.213776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:04:17.862 [2024-07-12 14:52:41.213855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:04:17.862 [2024-07-12 14:52:41.213894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2969:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.862 [2024-07-12 14:52:41.213906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2969:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.862 passed 00:04:17.862 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-12 14:52:41.213949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:17.862 [2024-07-12 14:52:41.213959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:17.862 [2024-07-12 14:52:41.213973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:04:17.862 passed 00:04:17.862 Test: test_nvme_tcp_qpair_submit_request ...passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 27 27 27 0 0 00:04:17.862 asserts 624 624 624 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.078 seconds 00:04:17.862 [2024-07-12 14:52:41.213983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:17.862 [2024-07-12 14:52:41.213999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc55606b000 with addr=192.168.1.78, port=23 00:04:17.862 [2024-07-12 14:52:41.214013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:17.862 [2024-07-12 14:52:41.214032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x1bc556039180, and the iovcnt=1, remaining_size=1024 00:04:17.862 [2024-07-12 14:52:41.214042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:04:17.862 14:52:41 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: nvme_transport 00:04:17.862 Test: test_nvme_get_transport ...passed 00:04:17.862 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:04:17.862 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:04:17.862 Test: test_nvme_transport_poll_group_add_remove ...passed 00:04:17.862 Test: test_ctrlr_get_memory_domains ...passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 5 5 5 0 0 00:04:17.862 asserts 28 28 28 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.000 seconds 00:04:17.862 14:52:41 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: nvme_io_msg 00:04:17.862 Test: test_nvme_io_msg_send ...passed 00:04:17.862 Test: test_nvme_io_msg_process ...passed 00:04:17.862 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 3 3 3 0 0 00:04:17.862 asserts 56 56 56 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.000 seconds 00:04:17.862 14:52:41 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: nvme_pcie_common 00:04:17.862 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-12 14:52:41.234837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:04:17.862 passed 00:04:17.862 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:04:17.862 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:04:17.862 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:04:17.862 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-12 14:52:41.235076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 506:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:04:17.862 [2024-07-12 14:52:41.235102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 459:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:04:17.862 [2024-07-12 14:52:41.235113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 553:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:04:17.862 passed 00:04:17.862 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-12 14:52:41.235219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1799:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.862 [2024-07-12 14:52:41.235229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1799:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.862 passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 6 6 6 0 0 00:04:17.862 asserts 148 148 148 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.000 seconds 00:04:17.862 14:52:41 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: nvme_fabric 00:04:17.862 Test: test_nvme_fabric_prop_set_cmd ...passed 00:04:17.862 Test: test_nvme_fabric_prop_get_cmd ...passed 00:04:17.862 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:04:17.862 Test: test_nvme_fabric_discover_probe ...passed 00:04:17.862 Test: test_nvme_fabric_qpair_connect ...[2024-07-12 14:52:41.239347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:04:17.862 passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 5 5 5 0 0 00:04:17.862 asserts 60 60 60 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.000 seconds 00:04:17.862 14:52:41 unittest.unittest_nvme -- 
unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: nvme_opal 00:04:17.862 Test: test_opal_nvme_security_recv_send_done ...passed 00:04:17.862 Test: test_opal_add_short_atom_header ...passed 00:04:17.862 00:04:17.862 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.862 suites 1 1 n/a 0 0 00:04:17.862 tests 2 2 2 0 0 00:04:17.862 asserts 22 22 22 0 n/a 00:04:17.862 00:04:17.862 Elapsed time = 0.000 seconds 00:04:17.862 [2024-07-12 14:52:41.245263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:04:17.862 00:04:17.862 real 0m16.042s 00:04:17.862 user 0m0.079s 00:04:17.862 sys 0m0.147s 00:04:17.862 ************************************ 00:04:17.862 END TEST unittest_nvme 00:04:17.862 ************************************ 00:04:17.862 14:52:41 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.862 14:52:41 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:04:17.862 14:52:41 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.862 14:52:41 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:17.862 14:52:41 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.862 14:52:41 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.862 14:52:41 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.862 ************************************ 00:04:17.862 START TEST unittest_log 00:04:17.862 ************************************ 00:04:17.862 14:52:41 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:17.862 00:04:17.862 00:04:17.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.862 http://cunit.sourceforge.net/ 00:04:17.862 00:04:17.862 00:04:17.862 Suite: log 00:04:17.862 Test: log_test ...passed 00:04:17.862 Test: deprecation ...[2024-07-12 14:52:41.298919] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:04:17.862 [2024-07-12 14:52:41.299155] log_ut.c: 57:log_test: *DEBUG*: log test 00:04:17.863 log dump test: 00:04:17.863 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:04:17.863 spdk dump test: 00:04:17.863 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:04:17.863 spdk dump test: 00:04:17.863 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:04:17.863 00000010 65 20 63 68 61 72 73 e chars 00:04:17.863 passed 00:04:17.863 00:04:17.863 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.863 suites 1 1 n/a 0 0 00:04:17.863 tests 2 2 2 0 0 00:04:17.863 asserts 73 73 73 0 n/a 00:04:17.863 00:04:17.863 Elapsed time = 0.000 seconds 00:04:17.863 00:04:17.863 real 0m1.075s 00:04:17.863 user 0m0.000s 00:04:17.863 sys 0m0.008s 00:04:17.863 14:52:42 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.863 ************************************ 00:04:17.863 END TEST unittest_log 00:04:17.863 ************************************ 00:04:17.863 14:52:42 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:04:17.863 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.863 14:52:42 unittest -- unit/unittest.sh@250 -- # 
run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:17.863 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.863 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.863 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.863 ************************************ 00:04:17.863 START TEST unittest_lvol 00:04:17.863 ************************************ 00:04:17.863 14:52:42 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:17.863 00:04:17.863 00:04:17.863 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.863 http://cunit.sourceforge.net/ 00:04:17.863 00:04:17.863 00:04:17.863 Suite: lvol 00:04:17.863 Test: lvs_init_unload_success ...[2024-07-12 14:52:42.425508] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:04:17.863 passed 00:04:17.863 Test: lvs_init_destroy_success ...[2024-07-12 14:52:42.425764] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:04:17.863 passed 00:04:17.863 Test: lvs_init_opts_success ...passed 00:04:17.863 Test: lvs_unload_lvs_is_null_fail ...passed 00:04:17.863 Test: lvs_names ...passed 00:04:17.863 Test: lvol_create_destroy_success ...passed 00:04:17.863 Test: lvol_create_fail ...[2024-07-12 14:52:42.425799] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:04:17.863 [2024-07-12 14:52:42.425822] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:04:17.863 [2024-07-12 14:52:42.425835] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:04:17.863 [2024-07-12 14:52:42.425856] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:04:17.863 [2024-07-12 14:52:42.425914] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:04:17.863 [2024-07-12 14:52:42.425930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:04:17.863 passed 00:04:17.863 Test: lvol_destroy_fail ...passed 00:04:17.863 Test: lvol_close ...passed 00:04:17.863 Test: lvol_resize ...passed 00:04:17.863 Test: lvol_set_read_only ...passed 00:04:17.863 Test: test_lvs_load ...[2024-07-12 14:52:42.425962] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:04:17.863 [2024-07-12 14:52:42.425985] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:04:17.863 [2024-07-12 14:52:42.425997] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:04:17.863 passed 00:04:17.863 Test: lvols_load ...passed 00:04:17.863 Test: lvol_open ...passed 00:04:17.863 Test: lvol_snapshot ...passed 00:04:17.863 Test: lvol_snapshot_fail ...[2024-07-12 14:52:42.426084] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:04:17.863 [2024-07-12 14:52:42.426101] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:04:17.863 [2024-07-12 14:52:42.426128] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:17.863 [2024-07-12 14:52:42.426159] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:17.863 [2024-07-12 14:52:42.426258] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:04:17.863 passed 00:04:17.863 Test: lvol_clone ...passed 00:04:17.863 Test: lvol_clone_fail ...passed 00:04:17.863 Test: lvol_iter_clones ...passed 00:04:17.863 Test: lvol_refcnt ...passed 00:04:17.863 Test: lvol_names ...[2024-07-12 14:52:42.426331] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:04:17.863 [2024-07-12 14:52:42.426380] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 6437e852-405e-11ef-b2a4-e9dca065e82e because it is still open 00:04:17.863 [2024-07-12 14:52:42.426401] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:04:17.863 [2024-07-12 14:52:42.426417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:17.863 [2024-07-12 14:52:42.426440] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:04:17.863 passed 00:04:17.863 Test: lvol_create_thin_provisioned ...passed 00:04:17.863 Test: lvol_rename ...passed 00:04:17.863 Test: lvs_rename ...passed 00:04:17.863 Test: lvol_inflate ...passed 00:04:17.863 Test: lvol_decouple_parent ...passed 00:04:17.863 Test: lvol_get_xattr ...passed 00:04:17.863 Test: lvol_esnap_reload ...passed 00:04:17.863 Test: lvol_esnap_create_bad_args ...passed 00:04:17.863 Test: lvol_esnap_create_delete ...[2024-07-12 14:52:42.426483] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:17.863 [2024-07-12 14:52:42.426502] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:04:17.863 [2024-07-12 14:52:42.426531] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:04:17.863 [2024-07-12 14:52:42.426555] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:17.863 [2024-07-12 14:52:42.426579] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:17.863 [2024-07-12 14:52:42.426629] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:04:17.863 [2024-07-12 14:52:42.426641] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:04:17.863 [2024-07-12 14:52:42.426654] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:04:17.863 [2024-07-12 14:52:42.426671] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:17.863 [2024-07-12 14:52:42.426690] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:04:17.863 passed 00:04:17.863 Test: lvol_esnap_load_esnaps ...passed 00:04:17.863 Test: lvol_esnap_missing ...passed 00:04:17.863 Test: lvol_esnap_hotplug ... 
00:04:17.863 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:04:17.863 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:04:17.863 [2024-07-12 14:52:42.426725] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:04:17.863 [2024-07-12 14:52:42.426752] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:17.863 [2024-07-12 14:52:42.426764] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:17.863 [2024-07-12 14:52:42.426876] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6437fb78-405e-11ef-b2a4-e9dca065e82e: failed to create esnap bs_dev: error -12 00:04:17.863 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:04:17.863 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:04:17.863 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:04:17.863 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:04:17.863 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:04:17.863 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:04:17.863 [2024-07-12 14:52:42.426967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6437febe-405e-11ef-b2a4-e9dca065e82e: failed to create esnap bs_dev: error -12 00:04:17.863 [2024-07-12 14:52:42.427023] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 64380142-405e-11ef-b2a4-e9dca065e82e: failed to create esnap bs_dev: error -12 00:04:17.863 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:04:17.863 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:04:17.863 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:04:17.863 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:04:17.863 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:04:17.863 passed 00:04:17.863 Test: lvol_get_by ...passed 00:04:17.863 Test: lvol_shallow_copy ...passed 00:04:17.863 Test: lvol_set_parent ...passed 00:04:17.863 Test: lvol_set_external_parent ...passed 00:04:17.863 00:04:17.863 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.863 suites 1 1 n/a 0 0 00:04:17.863 tests 37 37 37 0 0 00:04:17.863 asserts 1505 1505 1505 0 n/a 00:04:17.863 00:04:17.863 Elapsed time = 0.000 seconds 00:04:17.863 [2024-07-12 14:52:42.427384] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427404] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 64380f5a-405e-11ef-b2a4-e9dca065e82e shallow copy, ext_dev must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427438] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427450] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427474] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427486] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:04:17.863 [2024-07-12 14:52:42.427498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:04:17.863 00:04:17.863 real 0m0.010s 00:04:17.863 user 0m0.002s 00:04:17.863 sys 0m0.008s 00:04:17.863 14:52:42 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.863 14:52:42 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:04:17.863 ************************************ 00:04:17.863 END TEST unittest_lvol 00:04:17.863 ************************************ 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.864 14:52:42 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.864 14:52:42 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 START TEST unittest_nvme_rdma 00:04:17.864 ************************************ 00:04:17.864 14:52:42 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:17.864 00:04:17.864 00:04:17.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.864 http://cunit.sourceforge.net/ 00:04:17.864 00:04:17.864 00:04:17.864 Suite: nvme_rdma 00:04:17.864 Test: test_nvme_rdma_build_sgl_request ...[2024-07-12 14:52:42.477034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1405:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:04:17.864 passed 00:04:17.864 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:04:17.864 Test: test_nvme_rdma_build_contig_request ...passed 00:04:17.864 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:04:17.864 Test: test_nvme_rdma_create_reqs ...passed 00:04:17.864 Test: test_nvme_rdma_create_rsps ...passed 00:04:17.864 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:04:17.864 Test: test_nvme_rdma_poller_create ...passed 00:04:17.864 Test: test_nvme_rdma_qpair_process_cm_event ...passed[2024-07-12 14:52:42.477246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1579:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:17.864 [2024-07-12 14:52:42.477275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1635:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:04:17.864 [2024-07-12 14:52:42.477302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1516:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:17.864 [2024-07-12 14:52:42.477331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 933:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:04:17.864 [2024-07-12 14:52:42.477373] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 
851:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:04:17.864 [2024-07-12 14:52:42.477398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1773:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:04:17.864 [2024-07-12 14:52:42.477411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1773:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:17.864 [2024-07-12 14:52:42.477443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 452:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:04:17.864 00:04:17.864 Test: test_nvme_rdma_ctrlr_construct ...passed 00:04:17.864 Test: test_nvme_rdma_req_put_and_get ...passed 00:04:17.864 Test: test_nvme_rdma_req_init ...passed 00:04:17.864 Test: test_nvme_rdma_validate_cm_event ...passed 00:04:17.864 Test: test_nvme_rdma_qpair_init ...passed 00:04:17.864 Test: test_nvme_rdma_qpair_submit_request ...passed 00:04:17.864 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:04:17.864 Test: test_rdma_get_memory_translation ...passed 00:04:17.864 Test: test_get_rdma_qpair_from_wc ...passed 00:04:17.864 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:04:17.864 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:04:17.864 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-12 14:52:42.477509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 546:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:04:17.864 [2024-07-12 14:52:42.477523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 546:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:04:17.864 [2024-07-12 14:52:42.477551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1394:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:04:17.864 [2024-07-12 14:52:42.477563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1405:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:04:17.864 [2024-07-12 14:52:42.477585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3230:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.864 [2024-07-12 14:52:42.477595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3230:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:17.864 [2024-07-12 14:52:42.477622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2942:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:04:17.864 [2024-07-12 14:52:42.477634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2988:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:04:17.864 [2024-07-12 14:52:42.477645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 649:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b12908 on poll group 0x208ffec72000 00:04:17.864 [2024-07-12 14:52:42.477657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2942:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:04:17.864 [2024-07-12 14:52:42.477667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2988:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:04:17.864 [2024-07-12 14:52:42.477678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 649:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b12908 on poll group 0x208ffec72000 00:04:17.864 [2024-07-12 14:52:42.477732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 627:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:04:17.864 passed 00:04:17.864 00:04:17.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.864 suites 1 1 n/a 0 0 00:04:17.864 tests 21 21 21 0 0 00:04:17.864 asserts 397 397 397 0 n/a 00:04:17.864 00:04:17.864 Elapsed time = 0.000 seconds 00:04:17.864 00:04:17.864 real 0m0.007s 00:04:17.864 user 0m0.007s 00:04:17.864 sys 0m0.005s 00:04:17.864 14:52:42 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.864 14:52:42 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 END TEST unittest_nvme_rdma 00:04:17.864 ************************************ 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.864 14:52:42 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 START TEST unittest_nvmf_transport 00:04:17.864 ************************************ 00:04:17.864 14:52:42 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:17.864 00:04:17.864 00:04:17.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.864 http://cunit.sourceforge.net/ 00:04:17.864 00:04:17.864 00:04:17.864 Suite: nvmf 00:04:17.864 Test: test_spdk_nvmf_transport_create ...[2024-07-12 14:52:42.521699] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:04:17.864 [2024-07-12 14:52:42.521986] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:04:17.864 [2024-07-12 14:52:42.522027] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:04:17.864 passed 00:04:17.864 Test: test_nvmf_transport_poll_group_create ...passed 00:04:17.864 Test: test_spdk_nvmf_transport_opts_init ...passed 00:04:17.864 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:04:17.864 00:04:17.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.864 suites 1 1 n/a 0 0 00:04:17.864 tests 4 4 4 0 0 00:04:17.864 asserts 49 49 49 0 n/a 00:04:17.864 00:04:17.864 Elapsed time = 0.000 seconds[2024-07-12 14:52:42.522068] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:04:17.864 [2024-07-12 14:52:42.522107] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:04:17.864 [2024-07-12 14:52:42.522133] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:04:17.864 [2024-07-12 14:52:42.522148] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:04:17.864 00:04:17.864 00:04:17.864 real 0m0.007s 00:04:17.864 user 0m0.000s 00:04:17.864 sys 0m0.008s 00:04:17.864 14:52:42 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.864 14:52:42 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 END TEST unittest_nvmf_transport 00:04:17.864 ************************************ 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.864 14:52:42 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.864 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 START TEST unittest_rdma 00:04:17.864 ************************************ 00:04:17.864 14:52:42 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:17.864 00:04:17.864 00:04:17.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.864 http://cunit.sourceforge.net/ 00:04:17.864 00:04:17.864 00:04:17.864 Suite: rdma_common 00:04:17.864 Test: test_spdk_rdma_pd ...[2024-07-12 14:52:42.568167] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:04:17.864 [2024-07-12 14:52:42.568445] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:04:17.864 passed 00:04:17.864 00:04:17.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.864 suites 1 1 n/a 0 0 00:04:17.864 tests 1 1 1 0 0 00:04:17.864 asserts 31 31 31 0 n/a 00:04:17.864 00:04:17.864 Elapsed time = 0.000 seconds 00:04:17.864 00:04:17.864 real 0m0.006s 
00:04:17.864 user 0m0.000s 00:04:17.864 sys 0m0.008s 00:04:17.864 14:52:42 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.864 14:52:42 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:17.864 ************************************ 00:04:17.864 END TEST unittest_rdma 00:04:17.865 ************************************ 00:04:17.865 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.865 14:52:42 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.865 14:52:42 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:04:17.865 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.865 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.865 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.865 ************************************ 00:04:17.865 START TEST unittest_nvmf 00:04:17.865 ************************************ 00:04:17.865 14:52:42 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:04:17.865 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:04:17.865 00:04:17.865 00:04:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.865 http://cunit.sourceforge.net/ 00:04:17.865 00:04:17.865 00:04:17.865 Suite: nvmf 00:04:17.865 Test: test_get_log_page ...[2024-07-12 14:52:42.620883] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:04:17.865 passed 00:04:17.865 Test: test_process_fabrics_cmd ...passed 00:04:17.865 Test: test_connect ...[2024-07-12 14:52:42.621163] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:04:17.865 [2024-07-12 14:52:42.621269] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:04:17.865 [2024-07-12 14:52:42.621292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:04:17.865 [2024-07-12 14:52:42.621310] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:04:17.865 [2024-07-12 14:52:42.621326] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:04:17.865 [2024-07-12 14:52:42.621342] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:04:17.865 [2024-07-12 14:52:42.621359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:04:17.865 [2024-07-12 14:52:42.621374] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:04:17.865 [2024-07-12 14:52:42.621390] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:04:17.865 [2024-07-12 14:52:42.621412] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:04:17.865 [2024-07-12 14:52:42.621432] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:04:17.865 passed 00:04:17.865 Test: test_get_ns_id_desc_list ...passed 00:04:17.865 Test: test_identify_ns ...[2024-07-12 14:52:42.621462] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:04:17.865 [2024-07-12 14:52:42.621481] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:04:17.865 [2024-07-12 14:52:42.621499] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:04:17.865 [2024-07-12 14:52:42.621517] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:04:17.865 [2024-07-12 14:52:42.621551] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:04:17.865 [2024-07-12 14:52:42.621577] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:04:17.865 [2024-07-12 14:52:42.621595] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:04:17.865 [2024-07-12 14:52:42.621667] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:17.865 [2024-07-12 14:52:42.621751] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:04:17.865 [2024-07-12 14:52:42.621795] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:04:17.865 passed 00:04:17.865 Test: test_identify_ns_iocs_specific ...passed 00:04:17.865 Test: test_reservation_write_exclusive ...passed 00:04:17.865 Test: test_reservation_exclusive_access ...passed 00:04:17.865 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:04:17.865 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:04:17.865 Test: test_reservation_notification_log_page ...[2024-07-12 14:52:42.621839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:17.865 [2024-07-12 14:52:42.621919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:17.865 passed 00:04:17.865 Test: test_get_dif_ctx ...passed 00:04:17.865 Test: test_set_get_features ...passed 00:04:17.865 Test: test_identify_ctrlr ...passed 00:04:17.865 Test: test_identify_ctrlr_iocs_specific ...[2024-07-12 14:52:42.622049] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:17.865 [2024-07-12 14:52:42.622072] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:17.865 [2024-07-12 14:52:42.622088] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:04:17.865 [2024-07-12 14:52:42.622102] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:04:17.865 passed 00:04:17.865 Test: test_custom_admin_cmd ...passed 00:04:17.865 Test: test_fused_compare_and_write ...passed 00:04:17.865 Test: test_multi_async_event_reqs ...passed 00:04:17.865 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:04:17.865 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:04:17.865 Test: test_multi_async_events ...passed 00:04:17.865 Test: test_rae ...passed 00:04:17.865 Test: test_nvmf_ctrlr_create_destruct ...passed 00:04:17.865 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:04:17.865 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:04:17.865 Test: test_zcopy_read ...passed 00:04:17.865 Test: test_zcopy_write ...[2024-07-12 14:52:42.622231] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:04:17.865 [2024-07-12 14:52:42.622248] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:17.865 [2024-07-12 14:52:42.622264] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:17.865 [2024-07-12 14:52:42.622376] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:04:17.865 [2024-07-12 14:52:42.622396] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:04:17.865 passed 00:04:17.865 Test: test_nvmf_property_set ...passed 00:04:17.865 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:04:17.865 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:04:17.865 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:04:17.865 Test: test_nvmf_check_qpair_active ...passed 00:04:17.865 00:04:17.865 [2024-07-12 14:52:42.622453] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:17.865 [2024-07-12 14:52:42.622470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:17.865 [2024-07-12 14:52:42.622490] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:04:17.865 [2024-07-12 14:52:42.622507] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:04:17.865 [2024-07-12 14:52:42.622522] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:04:17.865 [2024-07-12 14:52:42.622561] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:04:17.865 [2024-07-12 14:52:42.622578] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:04:17.865 [2024-07-12 14:52:42.622593] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:04:17.865 [2024-07-12 14:52:42.622608] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:04:17.865 [2024-07-12 14:52:42.622623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:04:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.865 suites 1 1 n/a 0 0 00:04:17.865 tests 32 32 32 0 0 00:04:17.865 asserts 977 977 977 0 n/a 00:04:17.865 00:04:17.865 Elapsed time = 0.000 seconds 00:04:17.865 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:04:17.865 00:04:17.865 00:04:17.865 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.865 http://cunit.sourceforge.net/ 00:04:17.865 00:04:17.865 00:04:17.865 Suite: nvmf 00:04:17.865 Test: test_get_rw_params ...passed 00:04:17.865 Test: test_get_rw_ext_params ...passed 00:04:17.865 Test: test_lba_in_range ...passed 00:04:17.865 Test: test_get_dif_ctx ...passed 00:04:17.865 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:04:17.865 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-12 14:52:42.629526] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:04:17.865 passed 00:04:17.865 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:04:17.865 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:04:17.865 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:04:17.865 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:04:17.865 00:04:17.865 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.865 suites 1 1 n/a 0 0 00:04:17.865 tests 10 10 10 0 0 00:04:17.865 asserts 159 159 159 0 n/a 00:04:17.865 00:04:17.865 Elapsed time = 0.000 seconds 00:04:17.865 [2024-07-12 14:52:42.629749] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:04:17.865 [2024-07-12 14:52:42.629771] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:04:17.865 [2024-07-12 14:52:42.629793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:04:17.865 [2024-07-12 14:52:42.629810] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:04:17.865 [2024-07-12 14:52:42.629829] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:04:17.865 [2024-07-12 14:52:42.629854] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:04:17.866 [2024-07-12 14:52:42.629873] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:04:17.866 [2024-07-12 14:52:42.629888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:04:17.866 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:04:17.866 00:04:17.866 00:04:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.866 http://cunit.sourceforge.net/ 00:04:17.866 00:04:17.866 00:04:17.866 
Suite: nvmf 00:04:17.866 Test: test_discovery_log ...passed 00:04:17.866 Test: test_discovery_log_with_filters ...passed 00:04:17.866 00:04:17.866 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.866 suites 1 1 n/a 0 0 00:04:17.866 tests 2 2 2 0 0 00:04:17.866 asserts 238 238 238 0 n/a 00:04:17.866 00:04:17.866 Elapsed time = 0.000 seconds 00:04:17.866 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:04:17.866 00:04:17.866 00:04:17.866 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.866 http://cunit.sourceforge.net/ 00:04:17.866 00:04:17.866 00:04:17.866 Suite: nvmf 00:04:17.866 Test: nvmf_test_create_subsystem ...[2024-07-12 14:52:42.641968] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:04:17.866 [2024-07-12 14:52:42.642180] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:04:17.866 [2024-07-12 14:52:42.642209] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:04:17.866 [2024-07-12 14:52:42.642224] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:04:17.866 [2024-07-12 14:52:42.642239] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:04:17.866 [2024-07-12 14:52:42.642251] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:04:17.866 [2024-07-12 14:52:42.642265] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:04:17.866 [2024-07-12 14:52:42.642277] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:04:17.866 [2024-07-12 14:52:42.642291] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:04:17.866 [2024-07-12 14:52:42.642303] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:04:17.866 [2024-07-12 14:52:42.642316] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:04:17.866 [2024-07-12 14:52:42.642328] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:04:17.866 [2024-07-12 14:52:42.642349] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:04:17.866 [2024-07-12 14:52:42.642363] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:04:17.866 [2024-07-12 14:52:42.642403] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:04:17.866 passed 00:04:17.866 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:04:17.866 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-12 14:52:42.642419] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:04:17.866 [2024-07-12 14:52:42.642437] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:04:17.866 [2024-07-12 14:52:42.642449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:04:17.866 [2024-07-12 14:52:42.642463] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:17.866 [2024-07-12 14:52:42.642476] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:04:17.866 [2024-07-12 14:52:42.642490] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:17.866 [2024-07-12 14:52:42.642502] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:04:17.866 [2024-07-12 14:52:42.642564] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:04:17.866 [2024-07-12 14:52:42.642580] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:04:17.866 passed 00:04:17.866 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:04:17.866 Test: test_spdk_nvmf_ns_visible ...passed 00:04:17.866 Test: test_reservation_register ...passed 00:04:17.866 Test: test_reservation_register_with_ptpl ...[2024-07-12 14:52:42.642608] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2158:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:04:17.866 [2024-07-12 14:52:42.642641] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:04:17.866 [2024-07-12 14:52:42.642735] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.642757] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:04:17.866 passed 00:04:17.866 Test: test_reservation_acquire_preempt_1 ...passed 00:04:17.866 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-12 14:52:42.642964] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 passed 00:04:17.866 Test: test_reservation_release ...passed 00:04:17.866 Test: test_reservation_unregister_notification ...[2024-07-12 14:52:42.643153] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.643182] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 passed 00:04:17.866 Test: test_reservation_release_notification ...passed 00:04:17.866 Test: test_reservation_release_notification_write_exclusive ...passed 00:04:17.866 Test: test_reservation_clear_notification ...passed 00:04:17.866 Test: test_reservation_preempt_notification ...passed 00:04:17.866 Test: test_spdk_nvmf_ns_event ...passed 00:04:17.866 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:04:17.866 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:04:17.866 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-12 14:52:42.643205] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.643227] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.643250] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.643272] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:17.866 [2024-07-12 14:52:42.643380] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:04:17.866 passed 00:04:17.866 Test: test_nvmf_ns_reservation_report ...passed 00:04:17.866 Test: test_nvmf_nqn_is_valid ...passed 00:04:17.866 Test: test_nvmf_ns_reservation_restore ...passed 00:04:17.866 Test: test_nvmf_subsystem_state_change ...[2024-07-12 14:52:42.643409] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:04:17.866 [2024-07-12 14:52:42.643434] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3466:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try 
again 00:04:17.866 [2024-07-12 14:52:42.643466] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:04:17.867 [2024-07-12 14:52:42.643495] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:645907fa-405e-11ef-b2a4-e9dca065e82": uuid is not the correct length 00:04:17.867 [2024-07-12 14:52:42.643510] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:04:17.867 [2024-07-12 14:52:42.643548] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_reservation_custom_ops ...passed 00:04:17.867 00:04:17.867 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.867 suites 1 1 n/a 0 0 00:04:17.867 tests 24 24 24 0 0 00:04:17.867 asserts 499 499 499 0 n/a 00:04:17.867 00:04:17.867 Elapsed time = 0.000 seconds 00:04:17.867 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:04:17.867 00:04:17.867 00:04:17.867 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.867 http://cunit.sourceforge.net/ 00:04:17.867 00:04:17.867 00:04:17.867 Suite: nvmf 00:04:17.867 Test: test_nvmf_tcp_create ...[2024-07-12 14:52:42.655325] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_destroy ...passed 00:04:17.867 Test: test_nvmf_tcp_poll_group_create ...passed 00:04:17.867 Test: test_nvmf_tcp_send_c2h_data ...passed 00:04:17.867 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:04:17.867 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:04:17.867 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:04:17.867 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:04:17.867 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-07-12 14:52:42.667133] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667158] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667169] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667178] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667187] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_icreq_handle ...[2024-07-12 14:52:42.667218] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:17.867 [2024-07-12 14:52:42.667228] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667237] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef2300 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667246] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_check_xfer_type ...passed 00:04:17.867 Test: test_nvmf_tcp_invalid_sgl ...passed 00:04:17.867 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-12 14:52:42.667255] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef2300 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667264] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667273] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef2300 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667282] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667291] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef2300 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667307] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2518:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:04:17.867 [2024-07-12 14:52:42.667317] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667325] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef2300 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667342] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820ef1b88 00:04:17.867 [2024-07-12 14:52:42.667351] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667360] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2308:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820ef23f8 00:04:17.867 [2024-07-12 14:52:42.667386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667395] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667412] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:04:17.867 [2024-07-12 14:52:42.667430] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667446] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667458] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:04:17.867 [2024-07-12 14:52:42.667467] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667481] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667504] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667517] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-12 14:52:42.667532] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667550] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667560] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667569] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667582] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667598] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667607] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 [2024-07-12 14:52:42.667616] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:17.867 [2024-07-12 14:52:42.667624] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820ef23f8 is same with the state(5) to be set 00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:04:17.867 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-12 14:52:42.672716] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:04:17.867 [2024-07-12 14:52:42.672738] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:04:17.867 passed 00:04:17.867 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:04:17.867 00:04:17.867 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.867 suites 1 1 n/a 0 0 00:04:17.867 tests 17 17 17 0 0 00:04:17.867 asserts 222 222 222 0 n/a 00:04:17.867 00:04:17.867 Elapsed time = 0.023 seconds 00:04:17.867 [2024-07-12 14:52:42.672848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:04:17.867 [2024-07-12 14:52:42.672863] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:04:17.867 [2024-07-12 14:52:42.672928] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:04:17.867 [2024-07-12 14:52:42.672939] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:04:17.867 14:52:42 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:04:17.867 00:04:17.867 00:04:17.867 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.867 http://cunit.sourceforge.net/ 00:04:17.867 00:04:17.867 00:04:17.867 Suite: nvmf 00:04:17.867 Test: test_nvmf_tgt_create_poll_group ...passed 00:04:17.867 00:04:17.867 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.867 suites 1 1 n/a 0 0 00:04:17.867 tests 1 1 1 0 0 00:04:17.867 asserts 17 17 17 0 n/a 00:04:17.867 00:04:17.867 Elapsed time = 0.008 seconds 00:04:17.867 00:04:17.867 real 0m0.070s 00:04:17.867 user 0m0.024s 00:04:17.867 sys 0m0.049s 00:04:17.867 14:52:42 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.867 ************************************ 00:04:17.867 END TEST unittest_nvmf 00:04:17.867 ************************************ 00:04:17.867 14:52:42 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:04:17.867 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.867 14:52:42 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.867 14:52:42 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.867 14:52:42 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:17.867 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.867 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.867 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.867 ************************************ 00:04:17.868 START TEST unittest_nvmf_rdma 00:04:17.868 ************************************ 00:04:17.868 14:52:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:17.868 00:04:17.868 00:04:17.868 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.868 http://cunit.sourceforge.net/ 00:04:17.868 00:04:17.868 00:04:17.868 Suite: nvmf 00:04:17.868 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-12 14:52:42.736436] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
00:04:17.868 [2024-07-12 14:52:42.736736] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:04:17.868 [2024-07-12 14:52:42.736752] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:04:17.868 passed 00:04:17.868 Test: test_spdk_nvmf_rdma_request_process ...passed 00:04:17.868 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:04:17.868 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:04:17.868 Test: test_nvmf_rdma_opts_init ...passed 00:04:17.868 Test: test_nvmf_rdma_request_free_data ...passed 00:04:17.868 Test: test_nvmf_rdma_resources_create ...passed 00:04:17.868 Test: test_nvmf_rdma_qpair_compare ...passed 00:04:17.868 Test: test_nvmf_rdma_resize_cq ...[2024-07-12 14:52:42.737369] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:04:17.868 Using CQ of insufficient size may lead to CQ overrun 00:04:17.868 [2024-07-12 14:52:42.737385] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:04:17.868 passed 00:04:17.868 00:04:17.868 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.868 suites 1 1 n/a 0 0 00:04:17.868 tests 9 9 9 0 0 00:04:17.868 asserts 579 579 579 0 n/a 00:04:17.868 00:04:17.868 Elapsed time = 0.000 seconds 00:04:17.868 [2024-07-12 14:52:42.737424] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:04:17.868 00:04:17.868 real 0m0.007s 00:04:17.868 user 0m0.007s 00:04:17.868 sys 0m0.006s 00:04:17.868 14:52:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.868 ************************************ 00:04:17.868 END TEST unittest_nvmf_rdma 00:04:17.868 ************************************ 00:04:17.868 14:52:42 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:17.868 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.868 14:52:42 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.868 14:52:42 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:04:17.868 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.868 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.868 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.868 ************************************ 00:04:17.868 START TEST unittest_scsi 00:04:17.868 ************************************ 00:04:17.868 14:52:42 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:04:17.868 14:52:42 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:04:17.868 00:04:17.868 00:04:17.868 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.868 http://cunit.sourceforge.net/ 00:04:17.868 00:04:17.868 00:04:17.868 Suite: dev_suite 00:04:17.868 Test: dev_destruct_null_dev ...passed 00:04:17.868 Test: dev_destruct_zero_luns ...passed 00:04:17.868 Test: dev_destruct_null_lun ...passed 00:04:17.868 Test: dev_destruct_success ...passed 00:04:17.868 Test: dev_construct_num_luns_zero 
...[2024-07-12 14:52:42.787890] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:04:17.868 passed 00:04:17.868 Test: dev_construct_no_lun_zero ...passed 00:04:17.868 Test: dev_construct_null_lun ...passed 00:04:17.868 Test: dev_construct_name_too_long ...[2024-07-12 14:52:42.788075] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:04:17.868 [2024-07-12 14:52:42.788092] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:04:17.868 passed 00:04:17.868 Test: dev_construct_success ...passed 00:04:17.868 Test: dev_construct_success_lun_zero_not_first ...passed 00:04:17.868 Test: dev_queue_mgmt_task_success ...passed 00:04:17.868 Test: dev_queue_task_success ...passed 00:04:17.868 Test: dev_stop_success ...passed 00:04:17.868 Test: dev_add_port_max_ports ...passed 00:04:17.868 Test: dev_add_port_construct_failure1 ...passed 00:04:17.868 Test: dev_add_port_construct_failure2 ...passed 00:04:17.868 Test: dev_add_port_success1 ...passed 00:04:17.868 Test: dev_add_port_success2 ...passed 00:04:17.868 Test: dev_add_port_success3 ...passed 00:04:17.868 Test: dev_find_port_by_id_num_ports_zero ...passed 00:04:17.868 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:04:17.868 Test: dev_find_port_by_id_success ...passed 00:04:17.868 Test: dev_add_lun_bdev_not_found ...passed 00:04:17.868 Test: dev_add_lun_no_free_lun_id ...[2024-07-12 14:52:42.788106] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:04:17.868 [2024-07-12 14:52:42.788148] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:04:17.868 [2024-07-12 14:52:42.788161] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:04:17.868 [2024-07-12 14:52:42.788173] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:04:17.868 passed 00:04:17.868 Test: dev_add_lun_success1 ...passed 00:04:17.868 Test: dev_add_lun_success2 ...passed 00:04:17.868 Test: dev_check_pending_tasks ...passed 00:04:17.868 Test: dev_iterate_luns ...passed 00:04:17.868 Test: dev_find_free_lun ...[2024-07-12 14:52:42.788416] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:04:17.868 passed 00:04:17.868 00:04:17.868 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.868 suites 1 1 n/a 0 0 00:04:17.868 tests 29 29 29 0 0 00:04:17.868 asserts 97 97 97 0 n/a 00:04:17.868 00:04:17.868 Elapsed time = 0.000 seconds 00:04:17.868 14:52:42 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:04:17.868 00:04:17.868 00:04:17.868 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.868 http://cunit.sourceforge.net/ 00:04:17.868 00:04:17.868 00:04:17.868 Suite: lun_suite 00:04:17.868 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-12 14:52:42.795254] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:04:17.868 passed 00:04:17.868 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:04:17.868 Test: lun_task_mgmt_execute_lun_reset ...passed 00:04:17.868 Test: lun_task_mgmt_execute_target_reset ...passed 00:04:17.868 Test: lun_task_mgmt_execute_invalid_case ...passed 00:04:17.868 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:04:17.868 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:04:17.868 Test: lun_append_task_null_lun_not_supported ...passed 00:04:17.868 Test: lun_execute_scsi_task_pending ...passed 00:04:17.868 Test: lun_execute_scsi_task_complete ...passed 00:04:17.868 Test: lun_execute_scsi_task_resize ...passed 00:04:17.868 Test: lun_destruct_success ...passed 00:04:17.868 Test: lun_construct_null_ctx ...[2024-07-12 14:52:42.795508] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:04:17.868 [2024-07-12 14:52:42.795585] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:04:17.868 passed 00:04:17.868 Test: lun_construct_success ...passed 00:04:17.868 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-12 14:52:42.795636] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:04:17.868 passed 00:04:17.868 Test: lun_reset_task_suspend_scsi_task ...passed 00:04:17.868 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:04:17.868 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:04:17.868 00:04:17.868 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.868 suites 1 1 n/a 0 0 00:04:17.868 tests 18 18 18 0 0 00:04:17.868 asserts 153 153 153 0 n/a 00:04:17.868 00:04:17.868 Elapsed time = 0.000 seconds 00:04:17.868 14:52:42 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:04:17.868 00:04:17.868 00:04:17.868 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.868 http://cunit.sourceforge.net/ 00:04:17.868 00:04:17.868 00:04:17.868 Suite: scsi_suite 00:04:17.868 Test: scsi_init ...passed 00:04:17.868 00:04:17.868 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.868 suites 1 1 n/a 0 0 00:04:17.869 tests 1 1 1 0 0 00:04:17.869 asserts 1 1 1 0 n/a 00:04:17.869 00:04:17.869 Elapsed time = 0.000 seconds 00:04:17.869 14:52:42 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:04:17.869 00:04:17.869 00:04:17.869 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.869 http://cunit.sourceforge.net/ 00:04:17.869 00:04:17.869 00:04:17.869 Suite: translation_suite 00:04:17.869 Test: mode_select_6_test ...passed 00:04:17.869 Test: mode_select_6_test2 ...passed 00:04:17.869 Test: mode_sense_6_test ...passed 00:04:17.869 Test: mode_sense_10_test ...passed 00:04:17.869 Test: inquiry_evpd_test ...passed 00:04:17.869 Test: inquiry_standard_test ...passed 00:04:17.869 Test: inquiry_overflow_test ...passed 00:04:17.869 Test: task_complete_test ...passed 00:04:17.869 Test: lba_range_test ...passed 00:04:17.869 Test: xfer_len_test ...passed 00:04:17.869 Test: xfer_test ...passed 00:04:17.869 Test: scsi_name_padding_test ...passed 00:04:17.869 Test: get_dif_ctx_test ...passed 00:04:17.869 Test: unmap_split_test ...passed 00:04:17.869 00:04:17.869 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:17.869 suites 1 1 n/a 0 0 00:04:17.869 tests 14 14 14 0 0 00:04:17.869 asserts 1205 1205 1205 0 n/a 00:04:17.869 00:04:17.869 Elapsed time = 0.000 seconds 00:04:17.869 [2024-07-12 14:52:42.809485] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:04:17.869 14:52:42 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:04:17.869 00:04:17.869 00:04:17.869 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.869 http://cunit.sourceforge.net/ 00:04:17.869 00:04:17.869 00:04:17.869 Suite: reservation_suite 00:04:17.869 Test: test_reservation_register ...passed 00:04:17.869 Test: test_reservation_reserve ...passed 00:04:17.869 Test: test_all_registrant_reservation_reserve ...passed 00:04:17.869 Test: test_all_registrant_reservation_access ...passed 00:04:17.869 Test: test_reservation_preempt_non_all_regs ...passed 00:04:17.869 Test: test_reservation_preempt_all_regs ...passed 00:04:17.869 Test: test_reservation_cmds_conflict ...passed 00:04:17.869 Test: test_scsi2_reserve_release ...passed 00:04:17.869 Test: test_pr_with_scsi2_reserve_release ...passed 00:04:17.869 00:04:17.869 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.869 suites 1 1 n/a 0 0 00:04:17.869 tests 9 9 9 0 0 00:04:17.869 asserts 344 344 344 0 n/a 00:04:17.869 00:04:17.869 Elapsed time = 0.000 seconds 00:04:17.869 [2024-07-12 14:52:42.814635] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814801] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814817] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:04:17.869 [2024-07-12 14:52:42.814827] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:04:17.869 [2024-07-12 14:52:42.814840] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814856] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814867] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:04:17.869 [2024-07-12 14:52:42.814876] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:04:17.869 [2024-07-12 14:52:42.814888] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814898] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:04:17.869 [2024-07-12 14:52:42.814912] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 [2024-07-12 14:52:42.814931] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 
00:04:17.869 [2024-07-12 14:52:42.814941] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:04:17.869 [2024-07-12 14:52:42.814950] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:17.869 [2024-07-12 14:52:42.814958] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:17.869 [2024-07-12 14:52:42.814966] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:17.869 [2024-07-12 14:52:42.814974] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:17.869 [2024-07-12 14:52:42.814992] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:17.869 00:04:17.869 real 0m0.032s 00:04:17.869 user 0m0.014s 00:04:17.869 sys 0m0.019s 00:04:17.869 14:52:42 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.869 14:52:42 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:04:17.869 ************************************ 00:04:17.869 END TEST unittest_scsi 00:04:17.869 ************************************ 00:04:17.869 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.869 14:52:42 unittest -- unit/unittest.sh@278 -- # uname -s 00:04:17.869 14:52:42 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:04:17.869 14:52:42 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:17.869 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.869 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.869 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.869 ************************************ 00:04:17.869 START TEST unittest_thread 00:04:17.869 ************************************ 00:04:17.869 14:52:42 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:17.869 00:04:17.869 00:04:17.869 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.869 http://cunit.sourceforge.net/ 00:04:17.869 00:04:17.869 00:04:17.869 Suite: io_channel 00:04:17.869 Test: thread_alloc ...passed 00:04:17.869 Test: thread_send_msg ...passed 00:04:17.869 Test: thread_poller ...passed 00:04:17.869 Test: poller_pause ...passed 00:04:17.869 Test: thread_for_each ...passed 00:04:17.869 Test: for_each_channel_remove ...passed 00:04:17.869 Test: for_each_channel_unreg ...[2024-07-12 14:52:42.868728] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x820266f24 already registered (old:0x143f27867000 new:0x143f27867180) 00:04:17.869 passed 00:04:17.869 Test: thread_name ...passed 00:04:17.869 Test: channel ...passed 00:04:17.869 Test: channel_destroy_races ...[2024-07-12 14:52:42.869363] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228848 00:04:17.869 passed 00:04:17.869 Test: thread_exit_test ...passed 00:04:17.869 Test: thread_update_stats_test ...[2024-07-12 
14:52:42.869935] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 640:thread_exit: *ERROR*: thread 0x143f2782ca80 got timeout, and move it to the exited state forcefully 00:04:17.869 passed 00:04:17.869 Test: nested_channel ...passed 00:04:17.869 Test: device_unregister_and_thread_exit_race ...passed 00:04:17.869 Test: cache_closest_timed_poller ...passed 00:04:17.869 Test: multi_timed_pollers_have_same_expiration ...passed 00:04:17.869 Test: io_device_lookup ...passed 00:04:17.869 Test: spdk_spin ...[2024-07-12 14:52:42.871224] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:17.869 [2024-07-12 14:52:42.871249] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820266f20 00:04:17.869 [2024-07-12 14:52:42.871266] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:17.869 [2024-07-12 14:52:42.871444] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:17.869 [2024-07-12 14:52:42.871462] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820266f20 00:04:17.869 [2024-07-12 14:52:42.871477] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:17.869 [2024-07-12 14:52:42.871491] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820266f20 00:04:17.869 [2024-07-12 14:52:42.871505] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:17.869 [2024-07-12 14:52:42.871517] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820266f20 00:04:17.869 [2024-07-12 14:52:42.871531] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:04:17.869 [2024-07-12 14:52:42.871544] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820266f20 00:04:17.869 passed 00:04:17.869 Test: for_each_channel_and_thread_exit_race ...passed 00:04:17.869 Test: for_each_thread_and_thread_exit_race ...passed 00:04:17.869 00:04:17.869 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.869 suites 1 1 n/a 0 0 00:04:17.869 tests 20 20 20 0 0 00:04:17.869 asserts 409 409 409 0 n/a 00:04:17.869 00:04:17.869 Elapsed time = 0.008 seconds 00:04:17.869 00:04:17.869 real 0m0.012s 00:04:17.869 user 0m0.010s 00:04:17.869 sys 0m0.000s 00:04:17.869 14:52:42 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.869 14:52:42 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.869 ************************************ 00:04:17.869 END TEST unittest_thread 00:04:17.869 ************************************ 00:04:17.869 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.870 14:52:42 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 START TEST unittest_iobuf 00:04:17.870 ************************************ 00:04:17.870 14:52:42 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: io_channel 00:04:17.870 Test: iobuf ...passed 00:04:17.870 Test: iobuf_cache ...[2024-07-12 14:52:42.918954] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:17.870 [2024-07-12 14:52:42.919212] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:17.870 [2024-07-12 14:52:42.919681] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:04:17.870 [2024-07-12 14:52:42.919713] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:17.870 [2024-07-12 14:52:42.919731] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:17.870 [2024-07-12 14:52:42.919759] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:04:17.870 passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 2 2 2 0 0 00:04:17.870 asserts 107 107 107 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 00:04:17.870 real 0m0.007s 00:04:17.870 user 0m0.006s 00:04:17.870 sys 0m0.004s 00:04:17.870 14:52:42 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.870 14:52:42 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 END TEST unittest_iobuf 00:04:17.870 ************************************ 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.870 14:52:42 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.870 14:52:42 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 START TEST unittest_util 00:04:17.870 ************************************ 00:04:17.870 14:52:42 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: base64 00:04:17.870 Test: test_base64_get_encoded_strlen ...passed 00:04:17.870 Test: test_base64_get_decoded_len ...passed 00:04:17.870 Test: test_base64_encode ...passed 00:04:17.870 Test: test_base64_decode ...passed 00:04:17.870 Test: test_base64_urlsafe_encode ...passed 00:04:17.870 Test: test_base64_urlsafe_decode ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 6 6 6 0 0 00:04:17.870 asserts 112 112 112 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: bit_array 00:04:17.870 Test: test_1bit ...passed 00:04:17.870 Test: test_64bit ...passed 00:04:17.870 Test: test_find ...passed 00:04:17.870 Test: test_resize ...passed 00:04:17.870 Test: test_errors ...passed 00:04:17.870 Test: test_count ...passed 00:04:17.870 Test: test_mask_store_load ...passed 00:04:17.870 Test: test_mask_clear ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 8 8 8 0 0 00:04:17.870 asserts 5075 5075 5075 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: cpuset 00:04:17.870 Test: test_cpuset ...passed 00:04:17.870 Test: test_cpuset_parse ...[2024-07-12 
14:52:42.977057] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:04:17.870 [2024-07-12 14:52:42.977266] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:04:17.870 [2024-07-12 14:52:42.977286] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:04:17.870 [2024-07-12 14:52:42.977299] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:04:17.870 [2024-07-12 14:52:42.977311] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:04:17.870 [2024-07-12 14:52:42.977322] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:04:17.870 [2024-07-12 14:52:42.977335] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:04:17.870 [2024-07-12 14:52:42.977346] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:04:17.870 passed 00:04:17.870 Test: test_cpuset_fmt ...passed 00:04:17.870 Test: test_cpuset_foreach ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 4 4 4 0 0 00:04:17.870 asserts 90 90 90 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: crc16 00:04:17.870 Test: test_crc16_t10dif ...passed 00:04:17.870 Test: test_crc16_t10dif_seed ...passed 00:04:17.870 Test: test_crc16_t10dif_copy ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 3 3 3 0 0 00:04:17.870 asserts 5 5 5 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: crc32_ieee 00:04:17.870 Test: test_crc32_ieee ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 1 1 1 0 0 00:04:17.870 asserts 1 1 1 0 n/a 00:04:17.870 00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: crc32c 00:04:17.870 Test: test_crc32c ...passed 00:04:17.870 Test: test_crc32c_nvme ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 2 2 2 0 0 00:04:17.870 asserts 16 16 16 0 n/a 00:04:17.870 
00:04:17.870 Elapsed time = 0.000 seconds 00:04:17.870 14:52:42 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:04:17.870 00:04:17.870 00:04:17.870 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.870 http://cunit.sourceforge.net/ 00:04:17.870 00:04:17.870 00:04:17.870 Suite: crc64 00:04:17.870 Test: test_crc64_nvme ...passed 00:04:17.870 00:04:17.870 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.870 suites 1 1 n/a 0 0 00:04:17.870 tests 1 1 1 0 0 00:04:17.870 asserts 4 4 4 0 n/a 00:04:17.870 00:04:17.871 Elapsed time = 0.000 seconds 00:04:17.871 14:52:42 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:04:17.871 00:04:17.871 00:04:17.871 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.871 http://cunit.sourceforge.net/ 00:04:17.871 00:04:17.871 00:04:17.871 Suite: string 00:04:17.871 Test: test_parse_ip_addr ...passed 00:04:17.871 Test: test_str_chomp ...passed 00:04:17.871 Test: test_parse_capacity ...passed 00:04:17.871 Test: test_sprintf_append_realloc ...passed 00:04:17.871 Test: test_strtol ...passed 00:04:17.871 Test: test_strtoll ...passed 00:04:17.871 Test: test_strarray ...passed 00:04:17.871 Test: test_strcpy_replace ...passed 00:04:17.871 00:04:17.871 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.871 suites 1 1 n/a 0 0 00:04:17.871 tests 8 8 8 0 0 00:04:17.871 asserts 161 161 161 0 n/a 00:04:17.871 00:04:17.871 Elapsed time = 0.000 seconds 00:04:17.871 14:52:43 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:04:17.871 00:04:17.871 00:04:17.871 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.871 http://cunit.sourceforge.net/ 00:04:17.871 00:04:17.871 00:04:17.871 Suite: dif 00:04:17.871 Test: dif_generate_and_verify_test ...[2024-07-12 14:52:43.010593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:17.871 [2024-07-12 14:52:43.010877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:17.871 [2024-07-12 14:52:43.010940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:17.871 passed 00:04:17.871 Test: dif_disable_check_test ...[2024-07-12 14:52:43.010998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:17.871 [2024-07-12 14:52:43.011055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:17.871 [2024-07-12 14:52:43.011111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:17.871 [2024-07-12 14:52:43.011304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:17.871 [2024-07-12 14:52:43.011361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:17.871 [2024-07-12 14:52:43.011416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
00:04:17.871 passed 00:04:17.871 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-12 14:52:43.011609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:04:17.871 [2024-07-12 14:52:43.011667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:04:17.871 [2024-07-12 14:52:43.011724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:04:17.871 [2024-07-12 14:52:43.011781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:04:17.871 [2024-07-12 14:52:43.011842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:17.871 [2024-07-12 14:52:43.011899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:17.871 [2024-07-12 14:52:43.011954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:17.871 passed 00:04:17.871 Test: dif_apptag_mask_test ...[2024-07-12 14:52:43.012010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:17.871 [2024-07-12 14:52:43.012066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:17.871 [2024-07-12 14:52:43.012123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:17.871 [2024-07-12 14:52:43.012178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:17.871 [2024-07-12 14:52:43.012237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:17.871 passed 00:04:17.871 Test: dif_sec_512_md_0_error_test ...passed 00:04:17.871 Test: dif_sec_4096_md_0_error_test ...passed 00:04:17.871 Test: dif_sec_4100_md_128_error_test ...passed 00:04:17.871 Test: dif_guard_seed_test ...passed 00:04:17.871 Test: dif_guard_value_test ...[2024-07-12 14:52:43.012294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:17.871 [2024-07-12 14:52:43.012330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:17.871 [2024-07-12 14:52:43.012364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:17.871 [2024-07-12 14:52:43.012391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
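
Note: the dif_ut failures logged above (Guard, App Tag and Ref Tag mismatches, plus the spdk_dif_ctx_init() complaints about metadata and block sizes) are expected output from negative tests of SPDK's end-to-end data protection helpers. The sketch below illustrates the flow those tests exercise; the function and flag names are real SPDK APIs from spdk/dif.h, but the spdk_dif_ctx_init() argument list shown matches older SPDK releases and may differ in the tree built by this run, so treat it as an illustration rather than the exact call under test.

    /*
     * Sketch only: one 4 KiB block with 8 bytes of interleaved metadata,
     * DIF Type 1, with guard/app/ref tag checking enabled.
     */
    #include "spdk/dif.h"

    static int dif_roundtrip(void)
    {
        uint8_t buf[4096 + 8] = {0};   /* data block + interleaved metadata */
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
        struct spdk_dif_ctx ctx;
        struct spdk_dif_error err;
        uint32_t flags = SPDK_DIF_FLAGS_GUARD_CHECK | SPDK_DIF_FLAGS_APPTAG_CHECK |
                         SPDK_DIF_FLAGS_REFTAG_CHECK;
        int rc;

        /* block_size includes metadata; md_interleave=true, DIF in last 8 bytes of md.
         * Parameter order follows older SPDK releases (see note above). */
        rc = spdk_dif_ctx_init(&ctx, 4096 + 8, 8, true, false, SPDK_DIF_TYPE1, flags,
                               0x10 /* init ref tag */, 0xFFFF, 0x88, 0, 0);
        if (rc != 0) {
            return rc;   /* e.g. "Metadata size is smaller than DIF size" */
        }

        rc = spdk_dif_generate(&iov, 1, 1, &ctx);
        if (rc != 0) {
            return rc;
        }

        buf[100] ^= 0xFF;   /* corrupt one data byte so verify reports a Guard mismatch */
        return spdk_dif_verify(&iov, 1, 1, &ctx, &err);
    }
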
00:04:17.871 [2024-07-12 14:52:43.012407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:17.871 [2024-07-12 14:52:43.012418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:17.871 passed 00:04:17.871 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:17.871 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:17.871 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 14:52:43.020524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4e, Actual=fd4c 00:04:17.871 [2024-07-12 14:52:43.020778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe23, Actual=fe21 00:04:17.871 [2024-07-12 14:52:43.021027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.871 [2024-07-12 14:52:43.021275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.871 [2024-07-12 14:52:43.021531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.871 [2024-07-12 14:52:43.021779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.871 [2024-07-12 14:52:43.022024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=2879 00:04:17.871 [2024-07-12 14:52:43.022188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=6d93 00:04:17.871 [2024-07-12 14:52:43.022351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ef, Actual=1ab753ed 00:04:17.871 [2024-07-12 14:52:43.022601] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574662, Actual=38574660 00:04:17.871 [2024-07-12 14:52:43.022849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.871 [2024-07-12 14:52:43.023098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.871 [2024-07-12 14:52:43.023343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.871 [2024-07-12 14:52:43.023590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.871 [2024-07-12 14:52:43.023835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=bb531573 00:04:17.871 [2024-07-12 14:52:43.023996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=f00f85fd 00:04:17.871 [2024-07-12 14:52:43.024157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.871 [2024-07-12 14:52:43.024416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.871 [2024-07-12 14:52:43.024671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.871 [2024-07-12 14:52:43.024923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.025170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.872 [2024-07-12 14:52:43.025420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.872 [2024-07-12 14:52:43.025685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.872 [2024-07-12 14:52:43.025847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.872 passed 00:04:17.872 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-12 14:52:43.025905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.872 [2024-07-12 14:52:43.025939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe23, Actual=fe21 00:04:17.872 [2024-07-12 14:52:43.025971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026036] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.026068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.026100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.872 [2024-07-12 14:52:43.026126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6d93 00:04:17.872 [2024-07-12 14:52:43.026153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.872 [2024-07-12 14:52:43.026185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574662, Actual=38574660 00:04:17.872 [2024-07-12 14:52:43.026218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.026315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.026346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.872 [2024-07-12 14:52:43.026372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f00f85fd 00:04:17.872 [2024-07-12 14:52:43.026399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.872 [2024-07-12 14:52:43.026431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.872 [2024-07-12 14:52:43.026463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 passed 00:04:17.872 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-12 14:52:43.026527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.026559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.026594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.872 [2024-07-12 14:52:43.026625] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.872 [2024-07-12 14:52:43.026655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.872 [2024-07-12 14:52:43.026687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe23, Actual=fe21 00:04:17.872 [2024-07-12 14:52:43.026719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.026791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.026823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.026856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.872 [2024-07-12 14:52:43.026882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6d93 00:04:17.872 [2024-07-12 14:52:43.026909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.872 [2024-07-12 14:52:43.026942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574662, Actual=38574660 00:04:17.872 [2024-07-12 14:52:43.026973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.027069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.027101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.872 [2024-07-12 14:52:43.027127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f00f85fd 00:04:17.872 [2024-07-12 14:52:43.027153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.872 [2024-07-12 14:52:43.027185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.872 [2024-07-12 14:52:43.027217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 
[2024-07-12 14:52:43.027249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.027313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.027345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.872 passed 00:04:17.872 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-12 14:52:43.027371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.872 [2024-07-12 14:52:43.027400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.872 [2024-07-12 14:52:43.027432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe23, Actual=fe21 00:04:17.872 [2024-07-12 14:52:43.027464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.027561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.872 [2024-07-12 14:52:43.027593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.872 [2024-07-12 14:52:43.027619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6d93 00:04:17.872 [2024-07-12 14:52:43.027646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.872 [2024-07-12 14:52:43.027678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574662, Actual=38574660 00:04:17.872 [2024-07-12 14:52:43.027710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.027806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.872 [2024-07-12 14:52:43.027838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.872 [2024-07-12 14:52:43.027864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f00f85fd 00:04:17.872 [2024-07-12 14:52:43.027890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.872 [2024-07-12 14:52:43.027922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.872 [2024-07-12 14:52:43.027954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.027986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.872 [2024-07-12 14:52:43.028018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.028050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.872 [2024-07-12 14:52:43.028082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.872 [2024-07-12 14:52:43.028108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.872 passed 00:04:17.872 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-12 14:52:43.028137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.872 [2024-07-12 14:52:43.028169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe23, Actual=fe21 00:04:17.873 [2024-07-12 14:52:43.028201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.028236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 passed 00:04:17.873 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-12 14:52:43.028277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.873 [2024-07-12 14:52:43.028311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.873 [2024-07-12 14:52:43.028353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.873 [2024-07-12 14:52:43.028381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6d93 00:04:17.873 [2024-07-12 14:52:43.028410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.873 [2024-07-12 
14:52:43.028452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574662, Actual=38574660 00:04:17.873 [2024-07-12 14:52:43.028484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.028516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.028548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.873 [2024-07-12 14:52:43.028580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.873 [2024-07-12 14:52:43.028612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.873 [2024-07-12 14:52:43.028638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f00f85fd 00:04:17.873 [2024-07-12 14:52:43.028665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.873 [2024-07-12 14:52:43.028696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.873 [2024-07-12 14:52:43.028728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.028760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.028792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.873 [2024-07-12 14:52:43.028824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.873 [2024-07-12 14:52:43.028857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.873 passed 00:04:17.873 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-12 14:52:43.028883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.873 [2024-07-12 14:52:43.028911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.873 [2024-07-12 14:52:43.028943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe23, Actual=fe21 00:04:17.873 [2024-07-12 14:52:43.028975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029039] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.873 [2024-07-12 14:52:43.029072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.873 [2024-07-12 14:52:43.029105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.873 [2024-07-12 14:52:43.029135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6d93 00:04:17.873 passed 00:04:17.873 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-12 14:52:43.029164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.873 [2024-07-12 14:52:43.029197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574662, Actual=38574660 00:04:17.873 [2024-07-12 14:52:43.029233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.873 [2024-07-12 14:52:43.029340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.873 [2024-07-12 14:52:43.029376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.873 [2024-07-12 14:52:43.029404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f00f85fd 00:04:17.873 [2024-07-12 14:52:43.029432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.873 [2024-07-12 14:52:43.029464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2f4837a266, Actual=88010a2d4837a266 00:04:17.873 [2024-07-12 14:52:43.029500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.029567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.873 [2024-07-12 14:52:43.029600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.873 [2024-07-12 14:52:43.029635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.873 [2024-07-12 14:52:43.029663] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=5df8344711684bcf 00:04:17.873 passed 00:04:17.873 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:04:17.873 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:17.873 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:17.873 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 14:52:43.034095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4e, Actual=fd4c 00:04:17.873 [2024-07-12 14:52:43.034243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=282c, Actual=282e 00:04:17.873 [2024-07-12 14:52:43.034387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.034520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.034651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.873 [2024-07-12 14:52:43.034783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.873 [2024-07-12 14:52:43.034919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=2879 00:04:17.873 [2024-07-12 14:52:43.035054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4abf 00:04:17.873 [2024-07-12 14:52:43.035191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ef, Actual=1ab753ed 00:04:17.873 [2024-07-12 14:52:43.035324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f0f6c9c0, Actual=f0f6c9c2 00:04:17.873 [2024-07-12 14:52:43.035460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.035603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.035739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.873 [2024-07-12 14:52:43.035875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.873 [2024-07-12 14:52:43.036008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=90, Expected=1ab753ed, Actual=bb531573 00:04:17.873 [2024-07-12 14:52:43.036144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=c0c8a2b3 00:04:17.873 [2024-07-12 14:52:43.036287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.873 [2024-07-12 14:52:43.036432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=b4b2486481aef814, Actual=b4b2486681aef814 00:04:17.873 [2024-07-12 14:52:43.036570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.036707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.036843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.873 [2024-07-12 14:52:43.036980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.873 [2024-07-12 14:52:43.037116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.873 [2024-07-12 14:52:43.037261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=d02809130b9d291 00:04:17.873 passed 00:04:17.873 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 14:52:43.037307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.873 [2024-07-12 14:52:43.037342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dc37, Actual=dc35 00:04:17.873 [2024-07-12 14:52:43.037375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.037408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.873 [2024-07-12 14:52:43.037442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.873 [2024-07-12 14:52:43.037485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.874 [2024-07-12 14:52:43.037518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.874 [2024-07-12 14:52:43.037552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=bea4 00:04:17.874 [2024-07-12 14:52:43.037585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.874 [2024-07-12 14:52:43.037619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12c0e842, 
Actual=12c0e840 00:04:17.874 [2024-07-12 14:52:43.037652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.037685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.037719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.874 [2024-07-12 14:52:43.037752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.874 [2024-07-12 14:52:43.037785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.874 [2024-07-12 14:52:43.037818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=22fe8331 00:04:17.874 [2024-07-12 14:52:43.037852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.874 [2024-07-12 14:52:43.037885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=542fdc84be92f38e, Actual=542fdc86be92f38e 00:04:17.874 [2024-07-12 14:52:43.037919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.037952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.037995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.874 [2024-07-12 14:52:43.038028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.874 passed 00:04:17.874 Test: dix_sec_512_md_0_error ...passed 00:04:17.874 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-12 14:52:43.038062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.874 [2024-07-12 14:52:43.038095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=ed9f14710f85d90b 00:04:17.874 [2024-07-12 14:52:43.038104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:04:17.874 passed 00:04:17.874 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:17.874 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:17.874 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:17.874 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 14:52:43.042357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4e, Actual=fd4c 00:04:17.874 [2024-07-12 14:52:43.042497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=282c, Actual=282e 00:04:17.874 [2024-07-12 14:52:43.042633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.042768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.042901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.874 [2024-07-12 14:52:43.043032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=58 00:04:17.874 [2024-07-12 14:52:43.043166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=2879 00:04:17.874 [2024-07-12 14:52:43.043305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4abf 00:04:17.874 [2024-07-12 14:52:43.043444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ef, Actual=1ab753ed 00:04:17.874 [2024-07-12 14:52:43.043578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f0f6c9c0, Actual=f0f6c9c2 00:04:17.874 [2024-07-12 14:52:43.043711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.043844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.043977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.874 [2024-07-12 14:52:43.044111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20000005a 00:04:17.874 [2024-07-12 14:52:43.044241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=bb531573 00:04:17.874 [2024-07-12 14:52:43.044380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=c0c8a2b3 00:04:17.874 [2024-07-12 14:52:43.044518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.874 [2024-07-12 14:52:43.044649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=b4b2486481aef814, Actual=b4b2486681aef814 00:04:17.874 [2024-07-12 14:52:43.044783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.044916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.045056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.874 [2024-07-12 14:52:43.045190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=2005a 00:04:17.874 [2024-07-12 14:52:43.045323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.874 passed 00:04:17.874 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 14:52:43.045457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=d02809130b9d291 00:04:17.874 [2024-07-12 14:52:43.045498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4e, Actual=fd4c 00:04:17.874 [2024-07-12 14:52:43.045540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dc37, Actual=dc35 00:04:17.874 [2024-07-12 14:52:43.045574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.045608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.045641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.874 [2024-07-12 14:52:43.045674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=5a 00:04:17.874 [2024-07-12 14:52:43.045708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=2879 00:04:17.874 [2024-07-12 14:52:43.045741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=bea4 00:04:17.874 [2024-07-12 14:52:43.045774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ef, Actual=1ab753ed 00:04:17.874 [2024-07-12 14:52:43.045807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12c0e842, Actual=12c0e840 00:04:17.874 [2024-07-12 14:52:43.045840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.045873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=8a 00:04:17.874 [2024-07-12 14:52:43.045905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.874 [2024-07-12 14:52:43.045938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000058 00:04:17.874 [2024-07-12 14:52:43.045971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=bb531573 00:04:17.874 [2024-07-12 14:52:43.046004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=22fe8331 00:04:17.874 [2024-07-12 14:52:43.046038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7708ecc20d3, Actual=a576a7728ecc20d3 00:04:17.874 [2024-07-12 14:52:43.046071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=542fdc84be92f38e, Actual=542fdc86be92f38e 00:04:17.874 [2024-07-12 14:52:43.046104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.046137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8a 00:04:17.874 [2024-07-12 14:52:43.046176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.874 passed 00:04:17.874 Test: set_md_interleave_iovs_test ...[2024-07-12 14:52:43.046210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20058 00:04:17.874 [2024-07-12 14:52:43.046248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1350106b35202e11 00:04:17.874 [2024-07-12 14:52:43.046288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=ed9f14710f85d90b 00:04:17.874 passed 00:04:17.874 Test: set_md_interleave_iovs_split_test ...passed 00:04:17.874 Test: dif_generate_stream_pi_16_test ...passed 00:04:17.874 Test: dif_generate_stream_test ...passed 00:04:17.874 Test: set_md_interleave_iovs_alignment_test ...passed 00:04:17.874 Test: dif_generate_split_test ...[2024-07-12 14:52:43.046971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:04:17.874 passed 00:04:17.874 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:04:17.874 Test: dif_verify_split_test ...passed 00:04:17.874 Test: dif_verify_stream_multi_segments_test ...passed 00:04:17.874 Test: update_crc32c_pi_16_test ...passed 00:04:17.874 Test: update_crc32c_test ...passed 00:04:17.874 Test: dif_update_crc32c_split_test ...passed 00:04:17.874 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:04:17.874 Test: get_range_with_md_test ...passed 00:04:17.874 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:04:17.874 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:04:17.874 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:17.874 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:04:17.874 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:04:17.875 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:17.875 Test: dif_generate_and_verify_unmap_test ...passed 00:04:17.875 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 1 1 n/a 0 0 00:04:17.875 tests 79 79 79 0 0 00:04:17.875 asserts 3584 3584 3584 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.031 seconds 00:04:17.875 14:52:43 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: iov 00:04:17.875 Test: test_single_iov ...passed 00:04:17.875 Test: test_simple_iov ...passed 00:04:17.875 Test: test_complex_iov ...passed 00:04:17.875 Test: test_iovs_to_buf ...passed 00:04:17.875 Test: test_buf_to_iovs ...passed 00:04:17.875 Test: test_memset ...passed 00:04:17.875 Test: test_iov_one ...passed 00:04:17.875 Test: test_iov_xfer ...passed 00:04:17.875 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 1 1 n/a 0 0 00:04:17.875 tests 8 8 8 0 0 00:04:17.875 asserts 156 156 156 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.000 seconds 00:04:17.875 14:52:43 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: math 00:04:17.875 Test: test_serial_number_arithmetic ...passed 00:04:17.875 Suite: erase 00:04:17.875 Test: test_memset_s ...passed 00:04:17.875 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 2 2 n/a 0 0 00:04:17.875 tests 2 2 2 0 0 00:04:17.875 asserts 18 18 18 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.000 seconds 00:04:17.875 14:52:43 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: pipe 00:04:17.875 Test: test_create_destroy ...passed 00:04:17.875 Test: test_write_get_buffer ...passed 00:04:17.875 Test: test_write_advance ...passed 00:04:17.875 Test: test_read_get_buffer ...passed 00:04:17.875 Test: test_read_advance ...passed 00:04:17.875 Test: test_data ...passed 00:04:17.875 00:04:17.875 Run 
Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 1 1 n/a 0 0 00:04:17.875 tests 6 6 6 0 0 00:04:17.875 asserts 251 251 251 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.000 seconds 00:04:17.875 14:52:43 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: xor 00:04:17.875 Test: test_xor_gen ...passed 00:04:17.875 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 1 1 n/a 0 0 00:04:17.875 tests 1 1 1 0 0 00:04:17.875 asserts 17 17 17 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.000 seconds 00:04:17.875 00:04:17.875 real 0m0.111s 00:04:17.875 user 0m0.043s 00:04:17.875 sys 0m0.067s 00:04:17.875 14:52:43 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.875 ************************************ 00:04:17.875 END TEST unittest_util 00:04:17.875 14:52:43 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:04:17.875 ************************************ 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.875 14:52:43 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:17.875 14:52:43 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.875 ************************************ 00:04:17.875 START TEST unittest_dma 00:04:17.875 ************************************ 00:04:17.875 14:52:43 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: dma_suite 00:04:17.875 Test: test_dma ...passed 00:04:17.875 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.875 suites 1 1 n/a 0 0 00:04:17.875 tests 1 1 1 0 0 00:04:17.875 asserts 54 54 54 0 n/a 00:04:17.875 00:04:17.875 Elapsed time = 0.000 seconds 00:04:17.875 [2024-07-12 14:52:43.122103] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:04:17.875 00:04:17.875 real 0m0.006s 00:04:17.875 user 0m0.005s 00:04:17.875 sys 0m0.005s 00:04:17.875 14:52:43 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.875 ************************************ 00:04:17.875 END TEST unittest_dma 00:04:17.875 ************************************ 00:04:17.875 14:52:43 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.875 14:52:43 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.875 14:52:43 unittest -- common/autotest_common.sh@10 -- # set +x 
00:04:17.875 ************************************ 00:04:17.875 START TEST unittest_init 00:04:17.875 ************************************ 00:04:17.875 14:52:43 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:04:17.875 14:52:43 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:04:17.875 00:04:17.875 00:04:17.875 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.875 http://cunit.sourceforge.net/ 00:04:17.875 00:04:17.875 00:04:17.875 Suite: subsystem_suite 00:04:17.875 Test: subsystem_sort_test_depends_on_single ...passed 00:04:17.875 Test: subsystem_sort_test_depends_on_multiple ...passed 00:04:17.875 Test: subsystem_sort_test_missing_dependency ...[2024-07-12 14:52:43.166730] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:04:17.875 passed 00:04:17.875 00:04:17.875 [2024-07-12 14:52:43.167276] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:04:17.875 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.876 suites 1 1 n/a 0 0 00:04:17.876 tests 3 3 3 0 0 00:04:17.876 asserts 20 20 20 0 n/a 00:04:17.876 00:04:17.876 Elapsed time = 0.000 seconds 00:04:17.876 00:04:17.876 real 0m0.005s 00:04:17.876 user 0m0.008s 00:04:17.876 sys 0m0.004s 00:04:17.876 14:52:43 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.876 14:52:43 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 END TEST unittest_init 00:04:17.876 ************************************ 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.876 14:52:43 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 START TEST unittest_keyring 00:04:17.876 ************************************ 00:04:17.876 14:52:43 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:04:17.876 00:04:17.876 00:04:17.876 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.876 http://cunit.sourceforge.net/ 00:04:17.876 00:04:17.876 00:04:17.876 Suite: keyring 00:04:17.876 Test: test_keyring_add_remove ...[2024-07-12 14:52:43.215873] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:04:17.876 [2024-07-12 14:52:43.216103] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:04:17.876 [2024-07-12 14:52:43.216138] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:04:17.876 passed 00:04:17.876 Test: test_keyring_get_put ...passed 00:04:17.876 00:04:17.876 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.876 suites 1 1 n/a 0 0 00:04:17.876 tests 2 2 2 0 0 00:04:17.876 asserts 44 44 44 0 n/a 00:04:17.876 00:04:17.876 Elapsed time = 0.000 seconds 00:04:17.876 
00:04:17.876 real 0m0.006s 00:04:17.876 user 0m0.000s 00:04:17.876 sys 0m0.004s 00:04:17.876 14:52:43 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.876 14:52:43 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 END TEST unittest_keyring 00:04:17.876 ************************************ 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:17.876 14:52:43 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:04:17.876 14:52:43 unittest -- unit/unittest.sh@305 -- # set +x 00:04:17.876 00:04:17.876 00:04:17.876 ===================== 00:04:17.876 All unit tests passed 00:04:17.876 ===================== 00:04:17.876 WARN: lcov not installed or SPDK built without coverage! 00:04:17.876 WARN: neither valgrind nor ASAN is enabled! 00:04:17.876 00:04:17.876 00:04:17.876 00:04:17.876 real 0m31.045s 00:04:17.876 user 0m12.623s 00:04:17.876 sys 0m1.491s 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.876 14:52:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 END TEST unittest 00:04:17.876 ************************************ 00:04:17.876 14:52:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.876 14:52:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:17.876 14:52:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:17.876 14:52:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:17.876 14:52:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:17.876 14:52:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.876 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 14:52:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:17.876 14:52:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.876 14:52:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.876 14:52:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.876 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 START TEST env 00:04:17.876 ************************************ 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:17.876 * Looking for test storage... 
00:04:17.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:17.876 14:52:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.876 14:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 START TEST env_memory 00:04:17.876 ************************************ 00:04:17.876 14:52:43 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:17.876 00:04:17.876 00:04:17.876 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.876 http://cunit.sourceforge.net/ 00:04:17.876 00:04:17.876 00:04:17.876 Suite: memory 00:04:17.876 Test: alloc and free memory map ...[2024-07-12 14:52:43.463458] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:17.876 passed 00:04:17.876 Test: mem map translation ...[2024-07-12 14:52:43.470554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:17.876 [2024-07-12 14:52:43.470595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:17.876 [2024-07-12 14:52:43.470622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:17.876 [2024-07-12 14:52:43.470632] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:17.876 passed 00:04:17.876 Test: mem map registration ...[2024-07-12 14:52:43.479346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:17.876 [2024-07-12 14:52:43.479375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:17.876 passed 00:04:17.876 Test: mem map adjacent registrations ...passed 00:04:17.876 00:04:17.876 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.876 suites 1 1 n/a 0 0 00:04:17.876 tests 4 4 4 0 0 00:04:17.876 asserts 152 152 152 0 n/a 00:04:17.876 00:04:17.876 Elapsed time = 0.023 seconds 00:04:17.876 00:04:17.876 real 0m0.041s 00:04:17.876 user 0m0.032s 00:04:17.876 sys 0m0.008s 00:04:17.876 14:52:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.876 ************************************ 00:04:17.876 END TEST env_memory 00:04:17.876 ************************************ 00:04:17.876 14:52:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:17.876 14:52:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.876 14:52:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.876 14:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.876 ************************************ 00:04:17.876 START TEST env_vtophys 
00:04:17.876 ************************************ 00:04:17.876 14:52:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:17.876 EAL: lib.eal log level changed from notice to debug 00:04:17.876 EAL: Sysctl reports 10 cpus 00:04:17.876 EAL: Detected lcore 0 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 1 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 2 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 3 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 4 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 5 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 6 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 7 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 8 as core 0 on socket 0 00:04:17.876 EAL: Detected lcore 9 as core 0 on socket 0 00:04:17.876 EAL: Maximum logical cores by configuration: 128 00:04:17.876 EAL: Detected CPU lcores: 10 00:04:17.876 EAL: Detected NUMA nodes: 1 00:04:17.876 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:17.876 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:17.876 EAL: Checking presence of .so 'librte_eal.so' 00:04:17.876 EAL: Detected static linkage of DPDK 00:04:17.876 EAL: No shared files mode enabled, IPC will be disabled 00:04:17.876 EAL: PCI scan found 10 devices 00:04:17.876 EAL: Specific IOVA mode is not requested, autodetecting 00:04:17.876 EAL: Selecting IOVA mode according to bus requests 00:04:17.876 EAL: Bus pci wants IOVA as 'PA' 00:04:17.876 EAL: Selected IOVA mode 'PA' 00:04:17.876 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:04:17.877 EAL: Ask a virtual area of 0x2e000 bytes 00:04:17.877 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000760000) not respected! 00:04:17.877 EAL: This may cause issues with mapping memory into secondary processes 00:04:17.877 EAL: Virtual area found at 0x1000760000 (size = 0x2e000) 00:04:17.877 EAL: Setting up physically contiguous memory... 00:04:17.877 EAL: Ask a virtual area of 0x1000 bytes 00:04:17.877 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000d93000) not respected! 00:04:17.877 EAL: This may cause issues with mapping memory into secondary processes 00:04:17.877 EAL: Virtual area found at 0x1000d93000 (size = 0x1000) 00:04:17.877 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:04:17.877 EAL: Ask a virtual area of 0xf0000000 bytes 00:04:17.877 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:04:17.877 EAL: This may cause issues with mapping memory into secondary processes 00:04:17.877 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:04:17.877 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:04:17.877 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x30000000, len 268435456 00:04:18.135 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x40000000, len 268435456 00:04:18.135 EAL: Mapped memory segment 2 @ 0x1090000000: physaddr:0xa0000000, len 268435456 00:04:18.135 EAL: Mapped memory segment 3 @ 0x10b0000000: physaddr:0x110000000, len 268435456 00:04:18.135 EAL: Mapped memory segment 4 @ 0x10d0000000: physaddr:0x1f0000000, len 268435456 00:04:18.394 EAL: Mapped memory segment 5 @ 0x10f0000000: physaddr:0x240000000, len 268435456 00:04:18.394 EAL: Mapped memory segment 6 @ 0x1110000000: physaddr:0x290000000, len 268435456 00:04:18.394 EAL: Mapped memory segment 7 @ 0x1130000000: physaddr:0x2b0000000, len 268435456 00:04:18.394 EAL: No shared files mode enabled, IPC is disabled 00:04:18.394 EAL: Added 512M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: Added 256M to heap on socket 0 00:04:18.394 EAL: TSC is not safe to use in SMP mode 00:04:18.394 EAL: TSC is not invariant 00:04:18.394 EAL: TSC frequency is ~2199998 KHz 00:04:18.394 EAL: Main lcore 0 is ready (tid=9c428e12000;cpuset=[0]) 00:04:18.394 EAL: PCI scan found 10 devices 00:04:18.394 EAL: Registering mem event callbacks not supported 00:04:18.394 00:04:18.394 00:04:18.394 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.394 http://cunit.sourceforge.net/ 00:04:18.394 00:04:18.394 00:04:18.394 Suite: components_suite 00:04:18.394 Test: vtophys_malloc_test ...passed 00:04:18.653 Test: vtophys_spdk_malloc_test ...passed 00:04:18.653 00:04:18.653 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.653 suites 1 1 n/a 0 0 00:04:18.653 tests 2 2 2 0 0 00:04:18.653 asserts 467 467 467 0 n/a 00:04:18.653 00:04:18.653 Elapsed time = 0.094 seconds 00:04:18.653 00:04:18.653 real 0m0.712s 00:04:18.653 user 0m0.101s 00:04:18.653 sys 0m0.609s 00:04:18.653 14:52:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.653 ************************************ 00:04:18.653 END TEST env_vtophys 00:04:18.653 ************************************ 00:04:18.653 14:52:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:18.653 14:52:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.653 14:52:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 ************************************ 00:04:18.653 START TEST env_pci 00:04:18.653 ************************************ 00:04:18.653 14:52:44 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:18.653 00:04:18.653 00:04:18.653 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.653 http://cunit.sourceforge.net/ 00:04:18.653 00:04:18.653 00:04:18.653 Suite: pci 00:04:18.653 Test: pci_hook ...passed 00:04:18.653 
00:04:18.653 EAL: Cannot find device (10000:00:01.0) 00:04:18.653 EAL: Failed to attach device on primary process 00:04:18.653 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.653 suites 1 1 n/a 0 0 00:04:18.653 tests 1 1 1 0 0 00:04:18.653 asserts 25 25 25 0 n/a 00:04:18.653 00:04:18.653 Elapsed time = 0.000 seconds 00:04:18.653 00:04:18.653 real 0m0.008s 00:04:18.653 user 0m0.008s 00:04:18.653 sys 0m0.005s 00:04:18.653 14:52:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.653 14:52:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 ************************************ 00:04:18.653 END TEST env_pci 00:04:18.653 ************************************ 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:18.653 14:52:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:18.653 14:52:44 env -- env/env.sh@15 -- # uname 00:04:18.653 14:52:44 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:04:18.653 14:52:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:18.653 14:52:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.653 14:52:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 ************************************ 00:04:18.653 START TEST env_dpdk_post_init 00:04:18.653 ************************************ 00:04:18.653 14:52:44 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:04:18.653 EAL: Sysctl reports 10 cpus 00:04:18.653 EAL: Detected CPU lcores: 10 00:04:18.653 EAL: Detected NUMA nodes: 1 00:04:18.653 EAL: Detected static linkage of DPDK 00:04:18.653 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.653 EAL: Selected IOVA mode 'PA' 00:04:18.653 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:04:18.653 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x30000000, len 268435456 00:04:18.911 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x40000000, len 268435456 00:04:18.911 EAL: Mapped memory segment 2 @ 0x1090000000: physaddr:0xa0000000, len 268435456 00:04:18.911 EAL: Mapped memory segment 3 @ 0x10b0000000: physaddr:0x110000000, len 268435456 00:04:18.911 EAL: Mapped memory segment 4 @ 0x10d0000000: physaddr:0x1f0000000, len 268435456 00:04:19.170 EAL: Mapped memory segment 5 @ 0x10f0000000: physaddr:0x240000000, len 268435456 00:04:19.170 EAL: Mapped memory segment 6 @ 0x1110000000: physaddr:0x290000000, len 268435456 00:04:19.170 EAL: Mapped memory segment 7 @ 0x1130000000: physaddr:0x2b0000000, len 268435456 00:04:19.170 EAL: TSC is not safe to use in SMP mode 00:04:19.170 EAL: TSC is not invariant 00:04:19.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.170 [2024-07-12 14:52:44.912699] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:19.170 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:19.170 Starting DPDK initialization... 00:04:19.170 Starting SPDK post initialization... 00:04:19.170 SPDK NVMe probe 00:04:19.170 Attaching to 0000:00:10.0 00:04:19.170 Attached to 0000:00:10.0 00:04:19.170 Cleaning up... 
00:04:19.170 00:04:19.170 real 0m0.609s 00:04:19.170 user 0m0.025s 00:04:19.170 sys 0m0.579s 00:04:19.170 14:52:44 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.170 14:52:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.170 ************************************ 00:04:19.170 END TEST env_dpdk_post_init 00:04:19.170 ************************************ 00:04:19.428 14:52:44 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.428 14:52:44 env -- env/env.sh@26 -- # uname 00:04:19.428 14:52:44 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:04:19.428 00:04:19.428 real 0m1.699s 00:04:19.428 user 0m0.331s 00:04:19.428 sys 0m1.386s 00:04:19.428 14:52:44 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.428 14:52:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.428 ************************************ 00:04:19.428 END TEST env 00:04:19.428 ************************************ 00:04:19.428 14:52:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.428 14:52:45 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.428 14:52:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.428 14:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.428 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:04:19.428 ************************************ 00:04:19.428 START TEST rpc 00:04:19.428 ************************************ 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.428 * Looking for test storage... 00:04:19.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.428 14:52:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45514 00:04:19.428 14:52:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:19.428 14:52:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.428 14:52:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45514 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@829 -- # '[' -z 45514 ']' 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.428 14:52:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.428 [2024-07-12 14:52:45.201651] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:19.428 [2024-07-12 14:52:45.201816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:19.996 EAL: TSC is not safe to use in SMP mode 00:04:19.996 EAL: TSC is not invariant 00:04:19.996 [2024-07-12 14:52:45.766616] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.311 [2024-07-12 14:52:45.864906] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:20.311 [2024-07-12 14:52:45.867325] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:20.311 [2024-07-12 14:52:45.867358] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45514' to capture a snapshot of events at runtime. 00:04:20.311 [2024-07-12 14:52:45.867382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.595 14:52:46 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.595 14:52:46 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:20.595 14:52:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.595 14:52:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.595 14:52:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.595 14:52:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.595 14:52:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.595 14:52:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.595 14:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.595 ************************************ 00:04:20.595 START TEST rpc_integrity 00:04:20.595 ************************************ 00:04:20.595 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:20.595 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.595 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.595 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.595 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.595 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.595 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.595 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.595 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.595 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.596 { 00:04:20.596 "name": "Malloc0", 00:04:20.596 "aliases": [ 00:04:20.596 "668ce341-405e-11ef-b2a4-e9dca065e82e" 00:04:20.596 ], 00:04:20.596 "product_name": "Malloc disk", 00:04:20.596 "block_size": 512, 00:04:20.596 "num_blocks": 16384, 00:04:20.596 "uuid": "668ce341-405e-11ef-b2a4-e9dca065e82e", 00:04:20.596 "assigned_rate_limits": { 00:04:20.596 "rw_ios_per_sec": 0, 00:04:20.596 "rw_mbytes_per_sec": 0, 00:04:20.596 "r_mbytes_per_sec": 0, 00:04:20.596 "w_mbytes_per_sec": 0 00:04:20.596 }, 00:04:20.596 "claimed": false, 00:04:20.596 
"zoned": false, 00:04:20.596 "supported_io_types": { 00:04:20.596 "read": true, 00:04:20.596 "write": true, 00:04:20.596 "unmap": true, 00:04:20.596 "flush": true, 00:04:20.596 "reset": true, 00:04:20.596 "nvme_admin": false, 00:04:20.596 "nvme_io": false, 00:04:20.596 "nvme_io_md": false, 00:04:20.596 "write_zeroes": true, 00:04:20.596 "zcopy": true, 00:04:20.596 "get_zone_info": false, 00:04:20.596 "zone_management": false, 00:04:20.596 "zone_append": false, 00:04:20.596 "compare": false, 00:04:20.596 "compare_and_write": false, 00:04:20.596 "abort": true, 00:04:20.596 "seek_hole": false, 00:04:20.596 "seek_data": false, 00:04:20.596 "copy": true, 00:04:20.596 "nvme_iov_md": false 00:04:20.596 }, 00:04:20.596 "memory_domains": [ 00:04:20.596 { 00:04:20.596 "dma_device_id": "system", 00:04:20.596 "dma_device_type": 1 00:04:20.596 }, 00:04:20.596 { 00:04:20.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.596 "dma_device_type": 2 00:04:20.596 } 00:04:20.596 ], 00:04:20.596 "driver_specific": {} 00:04:20.596 } 00:04:20.596 ]' 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.596 [2024-07-12 14:52:46.374728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.596 [2024-07-12 14:52:46.374820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.596 [2024-07-12 14:52:46.375835] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d6743837a00 00:04:20.596 [2024-07-12 14:52:46.375881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.596 [2024-07-12 14:52:46.377035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.596 [2024-07-12 14:52:46.377093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.596 Passthru0 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.596 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.596 { 00:04:20.596 "name": "Malloc0", 00:04:20.596 "aliases": [ 00:04:20.596 "668ce341-405e-11ef-b2a4-e9dca065e82e" 00:04:20.596 ], 00:04:20.596 "product_name": "Malloc disk", 00:04:20.596 "block_size": 512, 00:04:20.596 "num_blocks": 16384, 00:04:20.596 "uuid": "668ce341-405e-11ef-b2a4-e9dca065e82e", 00:04:20.596 "assigned_rate_limits": { 00:04:20.596 "rw_ios_per_sec": 0, 00:04:20.596 "rw_mbytes_per_sec": 0, 00:04:20.596 "r_mbytes_per_sec": 0, 00:04:20.596 "w_mbytes_per_sec": 0 00:04:20.596 }, 00:04:20.596 "claimed": true, 00:04:20.596 "claim_type": "exclusive_write", 00:04:20.596 "zoned": false, 00:04:20.596 "supported_io_types": { 00:04:20.596 "read": true, 00:04:20.596 "write": true, 00:04:20.596 "unmap": true, 00:04:20.596 "flush": true, 00:04:20.596 "reset": true, 
00:04:20.596 "nvme_admin": false, 00:04:20.596 "nvme_io": false, 00:04:20.596 "nvme_io_md": false, 00:04:20.596 "write_zeroes": true, 00:04:20.596 "zcopy": true, 00:04:20.596 "get_zone_info": false, 00:04:20.596 "zone_management": false, 00:04:20.596 "zone_append": false, 00:04:20.596 "compare": false, 00:04:20.596 "compare_and_write": false, 00:04:20.596 "abort": true, 00:04:20.596 "seek_hole": false, 00:04:20.596 "seek_data": false, 00:04:20.596 "copy": true, 00:04:20.596 "nvme_iov_md": false 00:04:20.596 }, 00:04:20.596 "memory_domains": [ 00:04:20.596 { 00:04:20.596 "dma_device_id": "system", 00:04:20.596 "dma_device_type": 1 00:04:20.596 }, 00:04:20.596 { 00:04:20.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.596 "dma_device_type": 2 00:04:20.596 } 00:04:20.596 ], 00:04:20.596 "driver_specific": {} 00:04:20.596 }, 00:04:20.596 { 00:04:20.596 "name": "Passthru0", 00:04:20.596 "aliases": [ 00:04:20.596 "e299f8ea-3519-435f-9572-1b124ba83aad" 00:04:20.596 ], 00:04:20.596 "product_name": "passthru", 00:04:20.596 "block_size": 512, 00:04:20.596 "num_blocks": 16384, 00:04:20.596 "uuid": "e299f8ea-3519-435f-9572-1b124ba83aad", 00:04:20.596 "assigned_rate_limits": { 00:04:20.596 "rw_ios_per_sec": 0, 00:04:20.596 "rw_mbytes_per_sec": 0, 00:04:20.596 "r_mbytes_per_sec": 0, 00:04:20.596 "w_mbytes_per_sec": 0 00:04:20.596 }, 00:04:20.596 "claimed": false, 00:04:20.596 "zoned": false, 00:04:20.596 "supported_io_types": { 00:04:20.596 "read": true, 00:04:20.596 "write": true, 00:04:20.596 "unmap": true, 00:04:20.596 "flush": true, 00:04:20.596 "reset": true, 00:04:20.596 "nvme_admin": false, 00:04:20.596 "nvme_io": false, 00:04:20.596 "nvme_io_md": false, 00:04:20.596 "write_zeroes": true, 00:04:20.596 "zcopy": true, 00:04:20.596 "get_zone_info": false, 00:04:20.596 "zone_management": false, 00:04:20.596 "zone_append": false, 00:04:20.596 "compare": false, 00:04:20.596 "compare_and_write": false, 00:04:20.596 "abort": true, 00:04:20.596 "seek_hole": false, 00:04:20.596 "seek_data": false, 00:04:20.596 "copy": true, 00:04:20.596 "nvme_iov_md": false 00:04:20.596 }, 00:04:20.596 "memory_domains": [ 00:04:20.596 { 00:04:20.596 "dma_device_id": "system", 00:04:20.596 "dma_device_type": 1 00:04:20.596 }, 00:04:20.596 { 00:04:20.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.596 "dma_device_type": 2 00:04:20.596 } 00:04:20.596 ], 00:04:20.596 "driver_specific": { 00:04:20.596 "passthru": { 00:04:20.596 "name": "Passthru0", 00:04:20.596 "base_bdev_name": "Malloc0" 00:04:20.596 } 00:04:20.596 } 00:04:20.596 } 00:04:20.596 ]' 00:04:20.596 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.856 
14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.856 14:52:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.856 00:04:20.856 real 0m0.134s 00:04:20.856 user 0m0.036s 00:04:20.856 sys 0m0.037s 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.856 ************************************ 00:04:20.856 END TEST rpc_integrity 00:04:20.856 ************************************ 00:04:20.856 14:52:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.856 14:52:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 ************************************ 00:04:20.856 START TEST rpc_plugins 00:04:20.856 ************************************ 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:20.856 { 00:04:20.856 "name": "Malloc1", 00:04:20.856 "aliases": [ 00:04:20.856 "66a5e792-405e-11ef-b2a4-e9dca065e82e" 00:04:20.856 ], 00:04:20.856 "product_name": "Malloc disk", 00:04:20.856 "block_size": 4096, 00:04:20.856 "num_blocks": 256, 00:04:20.856 "uuid": "66a5e792-405e-11ef-b2a4-e9dca065e82e", 00:04:20.856 "assigned_rate_limits": { 00:04:20.856 "rw_ios_per_sec": 0, 00:04:20.856 "rw_mbytes_per_sec": 0, 00:04:20.856 "r_mbytes_per_sec": 0, 00:04:20.856 "w_mbytes_per_sec": 0 00:04:20.856 }, 00:04:20.856 "claimed": false, 00:04:20.856 "zoned": false, 00:04:20.856 "supported_io_types": { 00:04:20.856 "read": true, 00:04:20.856 "write": true, 00:04:20.856 "unmap": true, 00:04:20.856 "flush": true, 00:04:20.856 "reset": true, 00:04:20.856 "nvme_admin": false, 00:04:20.856 "nvme_io": false, 00:04:20.856 "nvme_io_md": false, 00:04:20.856 "write_zeroes": true, 00:04:20.856 "zcopy": true, 00:04:20.856 "get_zone_info": false, 00:04:20.856 "zone_management": false, 00:04:20.856 "zone_append": false, 00:04:20.856 "compare": false, 00:04:20.856 "compare_and_write": false, 00:04:20.856 "abort": true, 00:04:20.856 "seek_hole": false, 00:04:20.856 "seek_data": false, 00:04:20.856 "copy": 
true, 00:04:20.856 "nvme_iov_md": false 00:04:20.856 }, 00:04:20.856 "memory_domains": [ 00:04:20.856 { 00:04:20.856 "dma_device_id": "system", 00:04:20.856 "dma_device_type": 1 00:04:20.856 }, 00:04:20.856 { 00:04:20.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.856 "dma_device_type": 2 00:04:20.856 } 00:04:20.856 ], 00:04:20.856 "driver_specific": {} 00:04:20.856 } 00:04:20.856 ]' 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:20.856 14:52:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:20.856 00:04:20.856 real 0m0.069s 00:04:20.856 user 0m0.020s 00:04:20.856 sys 0m0.016s 00:04:20.856 ************************************ 00:04:20.856 END TEST rpc_plugins 00:04:20.856 ************************************ 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.856 14:52:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.856 14:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 ************************************ 00:04:20.856 START TEST rpc_trace_cmd_test 00:04:20.856 ************************************ 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:20.856 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45514", 00:04:20.856 "tpoint_group_mask": "0x8", 00:04:20.856 "iscsi_conn": { 00:04:20.856 "mask": "0x2", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "scsi": { 00:04:20.856 "mask": "0x4", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "bdev": { 00:04:20.856 "mask": "0x8", 00:04:20.856 "tpoint_mask": "0xffffffffffffffff" 00:04:20.856 }, 00:04:20.856 "nvmf_rdma": { 00:04:20.856 "mask": "0x10", 00:04:20.856 
"tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "nvmf_tcp": { 00:04:20.856 "mask": "0x20", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "blobfs": { 00:04:20.856 "mask": "0x80", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "dsa": { 00:04:20.856 "mask": "0x200", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "thread": { 00:04:20.856 "mask": "0x400", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "nvme_pcie": { 00:04:20.856 "mask": "0x800", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "iaa": { 00:04:20.856 "mask": "0x1000", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "nvme_tcp": { 00:04:20.856 "mask": "0x2000", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "bdev_nvme": { 00:04:20.856 "mask": "0x4000", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 }, 00:04:20.856 "sock": { 00:04:20.856 "mask": "0x8000", 00:04:20.856 "tpoint_mask": "0x0" 00:04:20.856 } 00:04:20.856 }' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.856 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.857 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.857 14:52:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.857 00:04:20.857 real 0m0.053s 00:04:20.857 user 0m0.038s 00:04:20.857 sys 0m0.007s 00:04:20.857 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.857 14:52:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.857 ************************************ 00:04:20.857 END TEST rpc_trace_cmd_test 00:04:20.857 ************************************ 00:04:21.116 14:52:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.116 14:52:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.116 14:52:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.116 14:52:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.116 14:52:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.116 14:52:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.116 14:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.116 ************************************ 00:04:21.116 START TEST rpc_daemon_integrity 00:04:21.116 ************************************ 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.116 
14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.116 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.117 { 00:04:21.117 "name": "Malloc2", 00:04:21.117 "aliases": [ 00:04:21.117 "66c77a12-405e-11ef-b2a4-e9dca065e82e" 00:04:21.117 ], 00:04:21.117 "product_name": "Malloc disk", 00:04:21.117 "block_size": 512, 00:04:21.117 "num_blocks": 16384, 00:04:21.117 "uuid": "66c77a12-405e-11ef-b2a4-e9dca065e82e", 00:04:21.117 "assigned_rate_limits": { 00:04:21.117 "rw_ios_per_sec": 0, 00:04:21.117 "rw_mbytes_per_sec": 0, 00:04:21.117 "r_mbytes_per_sec": 0, 00:04:21.117 "w_mbytes_per_sec": 0 00:04:21.117 }, 00:04:21.117 "claimed": false, 00:04:21.117 "zoned": false, 00:04:21.117 "supported_io_types": { 00:04:21.117 "read": true, 00:04:21.117 "write": true, 00:04:21.117 "unmap": true, 00:04:21.117 "flush": true, 00:04:21.117 "reset": true, 00:04:21.117 "nvme_admin": false, 00:04:21.117 "nvme_io": false, 00:04:21.117 "nvme_io_md": false, 00:04:21.117 "write_zeroes": true, 00:04:21.117 "zcopy": true, 00:04:21.117 "get_zone_info": false, 00:04:21.117 "zone_management": false, 00:04:21.117 "zone_append": false, 00:04:21.117 "compare": false, 00:04:21.117 "compare_and_write": false, 00:04:21.117 "abort": true, 00:04:21.117 "seek_hole": false, 00:04:21.117 "seek_data": false, 00:04:21.117 "copy": true, 00:04:21.117 "nvme_iov_md": false 00:04:21.117 }, 00:04:21.117 "memory_domains": [ 00:04:21.117 { 00:04:21.117 "dma_device_id": "system", 00:04:21.117 "dma_device_type": 1 00:04:21.117 }, 00:04:21.117 { 00:04:21.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.117 "dma_device_type": 2 00:04:21.117 } 00:04:21.117 ], 00:04:21.117 "driver_specific": {} 00:04:21.117 } 00:04:21.117 ]' 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.117 [2024-07-12 14:52:46.758710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.117 [2024-07-12 14:52:46.758750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.117 [2024-07-12 14:52:46.758775] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d6743837a00 00:04:21.117 [2024-07-12 
14:52:46.758784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.117 [2024-07-12 14:52:46.759214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.117 [2024-07-12 14:52:46.759239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.117 Passthru0 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.117 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.117 { 00:04:21.117 "name": "Malloc2", 00:04:21.117 "aliases": [ 00:04:21.117 "66c77a12-405e-11ef-b2a4-e9dca065e82e" 00:04:21.117 ], 00:04:21.117 "product_name": "Malloc disk", 00:04:21.117 "block_size": 512, 00:04:21.117 "num_blocks": 16384, 00:04:21.117 "uuid": "66c77a12-405e-11ef-b2a4-e9dca065e82e", 00:04:21.117 "assigned_rate_limits": { 00:04:21.117 "rw_ios_per_sec": 0, 00:04:21.117 "rw_mbytes_per_sec": 0, 00:04:21.117 "r_mbytes_per_sec": 0, 00:04:21.117 "w_mbytes_per_sec": 0 00:04:21.117 }, 00:04:21.117 "claimed": true, 00:04:21.117 "claim_type": "exclusive_write", 00:04:21.117 "zoned": false, 00:04:21.117 "supported_io_types": { 00:04:21.117 "read": true, 00:04:21.117 "write": true, 00:04:21.117 "unmap": true, 00:04:21.117 "flush": true, 00:04:21.117 "reset": true, 00:04:21.117 "nvme_admin": false, 00:04:21.117 "nvme_io": false, 00:04:21.117 "nvme_io_md": false, 00:04:21.117 "write_zeroes": true, 00:04:21.117 "zcopy": true, 00:04:21.117 "get_zone_info": false, 00:04:21.117 "zone_management": false, 00:04:21.117 "zone_append": false, 00:04:21.117 "compare": false, 00:04:21.117 "compare_and_write": false, 00:04:21.117 "abort": true, 00:04:21.117 "seek_hole": false, 00:04:21.117 "seek_data": false, 00:04:21.117 "copy": true, 00:04:21.117 "nvme_iov_md": false 00:04:21.117 }, 00:04:21.117 "memory_domains": [ 00:04:21.117 { 00:04:21.117 "dma_device_id": "system", 00:04:21.117 "dma_device_type": 1 00:04:21.117 }, 00:04:21.117 { 00:04:21.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.117 "dma_device_type": 2 00:04:21.117 } 00:04:21.117 ], 00:04:21.117 "driver_specific": {} 00:04:21.117 }, 00:04:21.117 { 00:04:21.117 "name": "Passthru0", 00:04:21.117 "aliases": [ 00:04:21.117 "130c9a16-3547-be56-9e45-cc4e2c030308" 00:04:21.117 ], 00:04:21.117 "product_name": "passthru", 00:04:21.117 "block_size": 512, 00:04:21.117 "num_blocks": 16384, 00:04:21.117 "uuid": "130c9a16-3547-be56-9e45-cc4e2c030308", 00:04:21.118 "assigned_rate_limits": { 00:04:21.118 "rw_ios_per_sec": 0, 00:04:21.118 "rw_mbytes_per_sec": 0, 00:04:21.118 "r_mbytes_per_sec": 0, 00:04:21.118 "w_mbytes_per_sec": 0 00:04:21.118 }, 00:04:21.118 "claimed": false, 00:04:21.118 "zoned": false, 00:04:21.118 "supported_io_types": { 00:04:21.118 "read": true, 00:04:21.118 "write": true, 00:04:21.118 "unmap": true, 00:04:21.118 "flush": true, 00:04:21.118 "reset": true, 00:04:21.118 "nvme_admin": false, 00:04:21.118 "nvme_io": false, 00:04:21.118 "nvme_io_md": false, 00:04:21.118 "write_zeroes": true, 00:04:21.118 "zcopy": true, 00:04:21.118 "get_zone_info": false, 00:04:21.118 "zone_management": false, 00:04:21.118 "zone_append": 
false, 00:04:21.118 "compare": false, 00:04:21.118 "compare_and_write": false, 00:04:21.118 "abort": true, 00:04:21.118 "seek_hole": false, 00:04:21.118 "seek_data": false, 00:04:21.118 "copy": true, 00:04:21.118 "nvme_iov_md": false 00:04:21.118 }, 00:04:21.118 "memory_domains": [ 00:04:21.118 { 00:04:21.118 "dma_device_id": "system", 00:04:21.118 "dma_device_type": 1 00:04:21.118 }, 00:04:21.118 { 00:04:21.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.118 "dma_device_type": 2 00:04:21.118 } 00:04:21.118 ], 00:04:21.118 "driver_specific": { 00:04:21.118 "passthru": { 00:04:21.118 "name": "Passthru0", 00:04:21.118 "base_bdev_name": "Malloc2" 00:04:21.118 } 00:04:21.118 } 00:04:21.118 } 00:04:21.118 ]' 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.118 00:04:21.118 real 0m0.117s 00:04:21.118 user 0m0.033s 00:04:21.118 sys 0m0.028s 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.118 14:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.118 ************************************ 00:04:21.118 END TEST rpc_daemon_integrity 00:04:21.118 ************************************ 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.118 14:52:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.118 14:52:46 rpc -- rpc/rpc.sh@84 -- # killprocess 45514 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@948 -- # '[' -z 45514 ']' 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@952 -- # kill -0 45514 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@953 -- # uname 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45514 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:21.118 killing process with pid 45514 
00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45514' 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@967 -- # kill 45514 00:04:21.118 14:52:46 rpc -- common/autotest_common.sh@972 -- # wait 45514 00:04:21.376 00:04:21.376 real 0m2.077s 00:04:21.376 user 0m2.140s 00:04:21.376 sys 0m0.917s 00:04:21.376 ************************************ 00:04:21.376 END TEST rpc 00:04:21.376 ************************************ 00:04:21.376 14:52:47 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.376 14:52:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.376 14:52:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.376 14:52:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.376 14:52:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.376 14:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.376 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:04:21.376 ************************************ 00:04:21.376 START TEST skip_rpc 00:04:21.376 ************************************ 00:04:21.376 14:52:47 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:21.634 * Looking for test storage... 00:04:21.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.634 14:52:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.634 14:52:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.634 14:52:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:21.634 14:52:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.634 14:52:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.634 14:52:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.634 ************************************ 00:04:21.634 START TEST skip_rpc 00:04:21.634 ************************************ 00:04:21.634 14:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:21.634 14:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45690 00:04:21.634 14:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.634 14:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:21.634 14:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:21.634 [2024-07-12 14:52:47.313546] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:21.634 [2024-07-12 14:52:47.313726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:22.200 EAL: TSC is not safe to use in SMP mode 00:04:22.200 EAL: TSC is not invariant 00:04:22.200 [2024-07-12 14:52:47.850024] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.200 [2024-07-12 14:52:47.934411] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:22.200 [2024-07-12 14:52:47.936564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45690 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45690 ']' 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45690 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45690 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:27.488 killing process with pid 45690 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45690' 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45690 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45690 00:04:27.488 00:04:27.488 real 0m5.298s 00:04:27.488 user 0m4.728s 00:04:27.488 sys 0m0.587s 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.488 14:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.488 ************************************ 00:04:27.488 END TEST skip_rpc 00:04:27.488 ************************************ 00:04:27.488 14:52:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:27.488 14:52:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.488 14:52:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.488 14:52:52 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.488 14:52:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.488 ************************************ 00:04:27.488 START TEST skip_rpc_with_json 00:04:27.488 ************************************ 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45735 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45735 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45735 ']' 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.488 14:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.488 [2024-07-12 14:52:52.657001] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:27.488 [2024-07-12 14:52:52.657189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:27.488 EAL: TSC is not safe to use in SMP mode 00:04:27.488 EAL: TSC is not invariant 00:04:27.488 [2024-07-12 14:52:53.173596] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.488 [2024-07-12 14:52:53.256102] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:27.488 [2024-07-12 14:52:53.258305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.055 [2024-07-12 14:52:53.696645] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.055 request: 00:04:28.055 { 00:04:28.055 "trtype": "tcp", 00:04:28.055 "method": "nvmf_get_transports", 00:04:28.055 "req_id": 1 00:04:28.055 } 00:04:28.055 Got JSON-RPC error response 00:04:28.055 response: 00:04:28.055 { 00:04:28.055 "code": -19, 00:04:28.055 "message": "Operation not supported by device" 00:04:28.055 } 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.055 [2024-07-12 14:52:53.708669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.055 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.055 { 00:04:28.055 "subsystems": [ 00:04:28.055 { 00:04:28.055 "subsystem": "vmd", 00:04:28.055 "config": [] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "iobuf", 00:04:28.055 "config": [ 00:04:28.055 { 00:04:28.055 "method": "iobuf_set_options", 00:04:28.055 "params": { 00:04:28.055 "small_pool_count": 8192, 00:04:28.055 "large_pool_count": 1024, 00:04:28.055 "small_bufsize": 8192, 00:04:28.055 "large_bufsize": 135168 00:04:28.055 } 00:04:28.055 } 00:04:28.055 ] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "scheduler", 00:04:28.055 "config": [ 00:04:28.055 { 00:04:28.055 "method": "framework_set_scheduler", 00:04:28.055 "params": { 00:04:28.055 "name": "static" 00:04:28.055 } 00:04:28.055 } 00:04:28.055 ] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "sock", 00:04:28.055 "config": [ 00:04:28.055 { 00:04:28.055 "method": "sock_set_default_impl", 00:04:28.055 "params": { 00:04:28.055 "impl_name": "posix" 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "method": "sock_impl_set_options", 00:04:28.055 "params": { 00:04:28.055 "impl_name": "ssl", 00:04:28.055 "recv_buf_size": 4096, 00:04:28.055 "send_buf_size": 4096, 00:04:28.055 "enable_recv_pipe": true, 00:04:28.055 "enable_quickack": false, 00:04:28.055 "enable_placement_id": 0, 00:04:28.055 
"enable_zerocopy_send_server": true, 00:04:28.055 "enable_zerocopy_send_client": false, 00:04:28.055 "zerocopy_threshold": 0, 00:04:28.055 "tls_version": 0, 00:04:28.055 "enable_ktls": false 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "method": "sock_impl_set_options", 00:04:28.055 "params": { 00:04:28.055 "impl_name": "posix", 00:04:28.055 "recv_buf_size": 2097152, 00:04:28.055 "send_buf_size": 2097152, 00:04:28.055 "enable_recv_pipe": true, 00:04:28.055 "enable_quickack": false, 00:04:28.055 "enable_placement_id": 0, 00:04:28.055 "enable_zerocopy_send_server": true, 00:04:28.055 "enable_zerocopy_send_client": false, 00:04:28.055 "zerocopy_threshold": 0, 00:04:28.055 "tls_version": 0, 00:04:28.055 "enable_ktls": false 00:04:28.055 } 00:04:28.055 } 00:04:28.055 ] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "keyring", 00:04:28.055 "config": [] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "accel", 00:04:28.055 "config": [ 00:04:28.055 { 00:04:28.055 "method": "accel_set_options", 00:04:28.055 "params": { 00:04:28.055 "small_cache_size": 128, 00:04:28.055 "large_cache_size": 16, 00:04:28.055 "task_count": 2048, 00:04:28.055 "sequence_count": 2048, 00:04:28.055 "buf_count": 2048 00:04:28.055 } 00:04:28.055 } 00:04:28.055 ] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "bdev", 00:04:28.055 "config": [ 00:04:28.055 { 00:04:28.055 "method": "bdev_set_options", 00:04:28.055 "params": { 00:04:28.055 "bdev_io_pool_size": 65535, 00:04:28.055 "bdev_io_cache_size": 256, 00:04:28.055 "bdev_auto_examine": true, 00:04:28.055 "iobuf_small_cache_size": 128, 00:04:28.055 "iobuf_large_cache_size": 16 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "method": "bdev_raid_set_options", 00:04:28.055 "params": { 00:04:28.055 "process_window_size_kb": 1024 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "method": "bdev_nvme_set_options", 00:04:28.055 "params": { 00:04:28.055 "action_on_timeout": "none", 00:04:28.055 "timeout_us": 0, 00:04:28.055 "timeout_admin_us": 0, 00:04:28.055 "keep_alive_timeout_ms": 10000, 00:04:28.055 "arbitration_burst": 0, 00:04:28.055 "low_priority_weight": 0, 00:04:28.055 "medium_priority_weight": 0, 00:04:28.055 "high_priority_weight": 0, 00:04:28.055 "nvme_adminq_poll_period_us": 10000, 00:04:28.055 "nvme_ioq_poll_period_us": 0, 00:04:28.055 "io_queue_requests": 0, 00:04:28.055 "delay_cmd_submit": true, 00:04:28.055 "transport_retry_count": 4, 00:04:28.055 "bdev_retry_count": 3, 00:04:28.055 "transport_ack_timeout": 0, 00:04:28.055 "ctrlr_loss_timeout_sec": 0, 00:04:28.055 "reconnect_delay_sec": 0, 00:04:28.055 "fast_io_fail_timeout_sec": 0, 00:04:28.055 "disable_auto_failback": false, 00:04:28.055 "generate_uuids": false, 00:04:28.055 "transport_tos": 0, 00:04:28.055 "nvme_error_stat": false, 00:04:28.055 "rdma_srq_size": 0, 00:04:28.055 "io_path_stat": false, 00:04:28.055 "allow_accel_sequence": false, 00:04:28.055 "rdma_max_cq_size": 0, 00:04:28.055 "rdma_cm_event_timeout_ms": 0, 00:04:28.055 "dhchap_digests": [ 00:04:28.055 "sha256", 00:04:28.055 "sha384", 00:04:28.055 "sha512" 00:04:28.055 ], 00:04:28.055 "dhchap_dhgroups": [ 00:04:28.055 "null", 00:04:28.055 "ffdhe2048", 00:04:28.055 "ffdhe3072", 00:04:28.055 "ffdhe4096", 00:04:28.055 "ffdhe6144", 00:04:28.055 "ffdhe8192" 00:04:28.055 ] 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "method": "bdev_nvme_set_hotplug", 00:04:28.055 "params": { 00:04:28.055 "period_us": 100000, 00:04:28.055 "enable": false 00:04:28.055 } 00:04:28.055 }, 00:04:28.055 
{ 00:04:28.055 "method": "bdev_wait_for_examine" 00:04:28.055 } 00:04:28.055 ] 00:04:28.055 }, 00:04:28.055 { 00:04:28.055 "subsystem": "scsi", 00:04:28.055 "config": null 00:04:28.055 }, 00:04:28.055 { 00:04:28.056 "subsystem": "nvmf", 00:04:28.056 "config": [ 00:04:28.056 { 00:04:28.056 "method": "nvmf_set_config", 00:04:28.056 "params": { 00:04:28.056 "discovery_filter": "match_any", 00:04:28.056 "admin_cmd_passthru": { 00:04:28.056 "identify_ctrlr": false 00:04:28.056 } 00:04:28.056 } 00:04:28.056 }, 00:04:28.056 { 00:04:28.056 "method": "nvmf_set_max_subsystems", 00:04:28.056 "params": { 00:04:28.056 "max_subsystems": 1024 00:04:28.056 } 00:04:28.056 }, 00:04:28.056 { 00:04:28.056 "method": "nvmf_set_crdt", 00:04:28.056 "params": { 00:04:28.056 "crdt1": 0, 00:04:28.056 "crdt2": 0, 00:04:28.056 "crdt3": 0 00:04:28.056 } 00:04:28.056 }, 00:04:28.056 { 00:04:28.056 "method": "nvmf_create_transport", 00:04:28.056 "params": { 00:04:28.056 "trtype": "TCP", 00:04:28.056 "max_queue_depth": 128, 00:04:28.056 "max_io_qpairs_per_ctrlr": 127, 00:04:28.056 "in_capsule_data_size": 4096, 00:04:28.056 "max_io_size": 131072, 00:04:28.056 "io_unit_size": 131072, 00:04:28.056 "max_aq_depth": 128, 00:04:28.056 "num_shared_buffers": 511, 00:04:28.056 "buf_cache_size": 4294967295, 00:04:28.056 "dif_insert_or_strip": false, 00:04:28.056 "zcopy": false, 00:04:28.056 "c2h_success": true, 00:04:28.056 "sock_priority": 0, 00:04:28.056 "abort_timeout_sec": 1, 00:04:28.056 "ack_timeout": 0, 00:04:28.056 "data_wr_pool_size": 0 00:04:28.056 } 00:04:28.056 } 00:04:28.056 ] 00:04:28.056 }, 00:04:28.056 { 00:04:28.056 "subsystem": "iscsi", 00:04:28.056 "config": [ 00:04:28.056 { 00:04:28.056 "method": "iscsi_set_options", 00:04:28.056 "params": { 00:04:28.056 "node_base": "iqn.2016-06.io.spdk", 00:04:28.056 "max_sessions": 128, 00:04:28.056 "max_connections_per_session": 2, 00:04:28.056 "max_queue_depth": 64, 00:04:28.056 "default_time2wait": 2, 00:04:28.056 "default_time2retain": 20, 00:04:28.056 "first_burst_length": 8192, 00:04:28.056 "immediate_data": true, 00:04:28.056 "allow_duplicated_isid": false, 00:04:28.056 "error_recovery_level": 0, 00:04:28.056 "nop_timeout": 60, 00:04:28.056 "nop_in_interval": 30, 00:04:28.056 "disable_chap": false, 00:04:28.056 "require_chap": false, 00:04:28.056 "mutual_chap": false, 00:04:28.056 "chap_group": 0, 00:04:28.056 "max_large_datain_per_connection": 64, 00:04:28.056 "max_r2t_per_connection": 4, 00:04:28.056 "pdu_pool_size": 36864, 00:04:28.056 "immediate_data_pool_size": 16384, 00:04:28.056 "data_out_pool_size": 2048 00:04:28.056 } 00:04:28.056 } 00:04:28.056 ] 00:04:28.056 } 00:04:28.056 ] 00:04:28.056 } 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45735 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45735 ']' 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45735 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45735 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:28.056 killing process with pid 45735 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45735' 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45735 00:04:28.056 14:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45735 00:04:28.313 14:52:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.314 14:52:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45753 00:04:28.314 14:52:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45753 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45753 ']' 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45753 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45753 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45753' 00:04:33.628 killing process with pid 45753 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45753 00:04:33.628 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45753 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.887 00:04:33.887 real 0m6.864s 00:04:33.887 user 0m6.252s 00:04:33.887 sys 0m1.168s 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.887 ************************************ 00:04:33.887 END TEST skip_rpc_with_json 00:04:33.887 ************************************ 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:33.887 14:52:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.887 ************************************ 00:04:33.887 START TEST skip_rpc_with_delay 00:04:33.887 ************************************ 00:04:33.887 14:52:59 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.887 [2024-07-12 14:52:59.570167] app.c: 836:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:33.887 [2024-07-12 14:52:59.570412] app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:33.887 00:04:33.887 real 0m0.010s 00:04:33.887 user 0m0.004s 00:04:33.887 sys 0m0.001s 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.887 ************************************ 00:04:33.887 END TEST skip_rpc_with_delay 00:04:33.887 ************************************ 00:04:33.887 14:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:33.887 14:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:33.887 14:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:04:33.887 14:52:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.887 00:04:33.887 real 0m12.443s 00:04:33.887 user 0m11.121s 00:04:33.887 sys 0m1.926s 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.887 14:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.887 ************************************ 00:04:33.887 END TEST skip_rpc 00:04:33.887 ************************************ 00:04:33.887 14:52:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.887 14:52:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:33.887 14:52:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.887 14:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.887 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:04:33.887 ************************************ 00:04:33.887 START TEST rpc_client 00:04:33.887 ************************************ 00:04:33.887 14:52:59 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.145 * Looking for test storage... 
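skip_rpc_with_json, which finished just above, checks that state created over RPC survives a restart driven purely by a saved JSON config. Stripped of the harness wrappers it is roughly this sketch (output redirection and job handling assumed; the harness waits for the RPC socket with waitforlisten before issuing calls):

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" -m 0x1 &                                  # normal start, RPC server enabled
"$spdk/scripts/rpc.py" nvmf_create_transport -t tcp                  # mutate live state
"$spdk/scripts/rpc.py" save_config > "$spdk/test/rpc/config.json"    # snapshot it
kill %1 && wait
"$spdk/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
    --json "$spdk/test/rpc/config.json" > "$spdk/test/rpc/log.txt" 2>&1 &
sleep 5 && kill %1 && wait
grep -q 'TCP Transport Init' "$spdk/test/rpc/log.txt"                # transport was recreated from the JSON alone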
00:04:34.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:34.145 14:52:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:34.145 OK 00:04:34.145 14:52:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:34.145 00:04:34.145 real 0m0.134s 00:04:34.145 user 0m0.061s 00:04:34.145 sys 0m0.114s 00:04:34.145 14:52:59 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.145 14:52:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:34.145 ************************************ 00:04:34.145 END TEST rpc_client 00:04:34.145 ************************************ 00:04:34.145 14:52:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.145 14:52:59 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.145 14:52:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.145 14:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.145 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:04:34.145 ************************************ 00:04:34.145 START TEST json_config 00:04:34.145 ************************************ 00:04:34.145 14:52:59 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:34.145 14:52:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:34.403 14:52:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:34.403 14:52:59 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:34.403 14:52:59 json_config -- nvmf/common.sh@7 -- # return 0 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:34.403 14:52:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:34.404 INFO: JSON configuration test init 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.404 14:52:59 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:34.404 14:52:59 json_config -- json_config/common.sh@9 -- # local app=target 00:04:34.404 14:52:59 json_config -- json_config/common.sh@10 -- # shift 00:04:34.404 14:52:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.404 14:52:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.404 14:52:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.404 14:52:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.404 14:52:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.404 14:52:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45908 00:04:34.404 Waiting for target to run... 00:04:34.404 14:52:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.404 14:52:59 json_config -- json_config/common.sh@25 -- # waitforlisten 45908 /var/tmp/spdk_tgt.sock 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@829 -- # '[' -z 45908 ']' 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.404 14:52:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.404 14:52:59 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:34.404 [2024-07-12 14:52:59.983711] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:34.404 [2024-07-12 14:52:59.983983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:34.662 EAL: TSC is not safe to use in SMP mode 00:04:34.662 EAL: TSC is not invariant 00:04:34.662 [2024-07-12 14:53:00.256534] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.662 [2024-07-12 14:53:00.341456] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:34.662 [2024-07-12 14:53:00.343656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:35.597 00:04:35.597 14:53:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:35.597 14:53:01 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:35.597 14:53:01 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.597 14:53:01 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:35.597 14:53:01 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.597 14:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.598 14:53:01 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:35.598 14:53:01 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:35.598 14:53:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:35.856 [2024-07-12 14:53:01.421271] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:35.856 14:53:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.856 14:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:35.856 14:53:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:35.856 14:53:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:36.113 14:53:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:36.113 14:53:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:36.113 14:53:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:36.113 14:53:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:36.113 14:53:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.114 14:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:04:36.114 
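Rather than hand-written RPCs, json_config first populates the target from a generated config; the pipeline form below is an approximation of the tgt_rpc load_config call traced above, and the notify_get_types readback mirrors tgt_check_notification_types:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/scripts/gen_nvme.sh" --json-with-subsystems \
    | "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock load_config   # attaches the host NVMe controller (Nvme0n1)
"$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
# expected output: bdev_register and bdev_unregister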
14:53:01 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:04:36.114 14:53:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.114 14:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:36.114 14:53:01 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:36.114 14:53:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:04:36.371 14:53:02 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:04:36.371 14:53:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:04:36.629 Nvme0n1p0 Nvme0n1p1 00:04:36.629 14:53:02 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:04:36.629 14:53:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:04:36.888 [2024-07-12 14:53:02.593362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:36.888 [2024-07-12 14:53:02.593411] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:36.888 00:04:36.888 14:53:02 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:04:36.888 14:53:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:04:37.146 Malloc3 00:04:37.146 14:53:02 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:37.146 14:53:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:37.408 [2024-07-12 14:53:03.157425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:37.408 [2024-07-12 14:53:03.157527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
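create_bdev_subsystem_config issues the bdev RPCs traced above; the first few, taken from the traces (tgt_rpc is simply rpc.py pointed at /var/tmp/spdk_tgt.sock):

spdk=/home/vagrant/spdk_repo/spdk
tgt_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
tgt_rpc bdev_split_create Nvme0n1 2                            # -> Nvme0n1p0, Nvme0n1p1
tgt_rpc bdev_split_create Malloc0 3                            # Malloc0 does not exist yet, hence the "unable to find bdev" notices; the split applies once it is created
tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3   # claims Malloc3, registers PTBdevFromMalloc3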
00:04:37.408 [2024-07-12 14:53:03.157555] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdfb81e38180 00:04:37.408 [2024-07-12 14:53:03.157564] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.408 [2024-07-12 14:53:03.158198] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.408 [2024-07-12 14:53:03.158223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:37.408 PTBdevFromMalloc3 00:04:37.408 14:53:03 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:04:37.408 14:53:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:04:37.666 Null0 00:04:37.666 14:53:03 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:04:37.666 14:53:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:04:37.924 Malloc0 00:04:37.924 14:53:03 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:04:37.924 14:53:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:04:38.181 Malloc1 00:04:38.181 14:53:03 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:04:38.181 14:53:03 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:04:38.439 102400+0 records in 00:04:38.439 102400+0 records out 00:04:38.439 104857600 bytes transferred in 0.294713 secs (355795537 bytes/sec) 00:04:38.439 14:53:04 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:04:38.439 14:53:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:04:38.698 aio_disk 00:04:38.698 14:53:04 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:04:38.698 14:53:04 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:38.698 14:53:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:38.956 717e3635-405e-11ef-b2a4-e9dca065e82e 00:04:38.956 14:53:04 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:04:38.956 14:53:04 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:04:38.956 14:53:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:04:39.215 14:53:04 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:04:39.215 14:53:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:04:39.473 14:53:05 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:39.473 14:53:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:39.754 14:53:05 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:39.754 14:53:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@71 -- # sort 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@72 -- # sort 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.016 14:53:05 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:40.016 14:53:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:40.016 14:53:05 
json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:04:40.275 14:53:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # 
echo bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\1\a\5\e\3\2\6\-\4\0\5\e\-\1\1\e\f\-\b\2\a\4\-\e\9\d\c\a\0\6\5\e\8\2\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\1\c\f\6\4\9\e\-\4\0\5\e\-\1\1\e\f\-\b\2\a\4\-\e\9\d\c\a\0\6\5\e\8\2\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\1\f\2\c\d\6\4\-\4\0\5\e\-\1\1\e\f\-\b\2\a\4\-\e\9\d\c\a\0\6\5\e\8\2\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\2\1\c\5\0\1\6\-\4\0\5\e\-\1\1\e\f\-\b\2\a\4\-\e\9\d\c\a\0\6\5\e\8\2\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:04:40.275 14:53:06 json_config -- json_config/json_config.sh@86 -- # cat 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:04:40.276 Expected events matched: 00:04:40.276 bdev_register:71a5e326-405e-11ef-b2a4-e9dca065e82e 00:04:40.276 bdev_register:71cf649e-405e-11ef-b2a4-e9dca065e82e 00:04:40.276 
bdev_register:71f2cd64-405e-11ef-b2a4-e9dca065e82e 00:04:40.276 bdev_register:721c5016-405e-11ef-b2a4-e9dca065e82e 00:04:40.276 bdev_register:Malloc0 00:04:40.276 bdev_register:Malloc0p0 00:04:40.276 bdev_register:Malloc0p1 00:04:40.276 bdev_register:Malloc0p2 00:04:40.276 bdev_register:Malloc1 00:04:40.276 bdev_register:Malloc3 00:04:40.276 bdev_register:Null0 00:04:40.276 bdev_register:Nvme0n1 00:04:40.276 bdev_register:Nvme0n1p0 00:04:40.276 bdev_register:Nvme0n1p1 00:04:40.276 bdev_register:PTBdevFromMalloc3 00:04:40.276 bdev_register:aio_disk 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:04:40.276 14:53:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.276 14:53:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:40.276 14:53:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.276 14:53:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:40.276 14:53:06 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:40.276 14:53:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:40.533 MallocBdevForConfigChangeCheck 00:04:40.533 14:53:06 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:40.533 14:53:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.533 14:53:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.533 14:53:06 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:40.533 14:53:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.100 INFO: shutting down applications... 00:04:41.100 14:53:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
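Every registration above is also read back through the notification API and compared against expected_notifications; the readback amounts to the jq formatting seen in the traces, followed by the extra malloc bdev created just before the final save_config:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
    | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort             # lines of the form bdev_register:<name>:<id>
"$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck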
00:04:41.100 14:53:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:41.100 14:53:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:41.100 14:53:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:41.100 14:53:06 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.100 [2024-07-12 14:53:06.877746] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:41.358 Calling clear_iscsi_subsystem 00:04:41.358 Calling clear_nvmf_subsystem 00:04:41.358 Calling clear_bdev_subsystem 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.358 14:53:07 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.616 14:53:07 json_config -- json_config/json_config.sh@345 -- # break 00:04:41.616 14:53:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:41.616 14:53:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:41.616 14:53:07 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.616 14:53:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.616 14:53:07 json_config -- json_config/common.sh@35 -- # [[ -n 45908 ]] 00:04:41.616 14:53:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45908 00:04:41.616 14:53:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.616 14:53:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.616 14:53:07 json_config -- json_config/common.sh@41 -- # kill -0 45908 00:04:41.616 14:53:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.183 14:53:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.183 14:53:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.183 14:53:07 json_config -- json_config/common.sh@41 -- # kill -0 45908 00:04:42.183 14:53:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.183 14:53:07 json_config -- json_config/common.sh@43 -- # break 00:04:42.183 14:53:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.183 SPDK target shutdown done 00:04:42.183 14:53:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.183 INFO: relaunching applications... 00:04:42.183 14:53:07 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
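The shutdown above follows a simple poll-until-exited pattern: json_config_test_shutdown_app sends SIGINT to the recorded target pid, then re-checks the process with kill -0 every half second for at most 30 iterations before declaring the shutdown done. Roughly, with the pid hard-coded here only for illustration (the script reads it from app_pid["$app"]):

  pid=45908                     # illustrative; the script uses app_pid["$app"]
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 sends no signal; it only tests whether the process still exists.
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done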
00:04:42.183 14:53:07 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.183 14:53:07 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.183 14:53:07 json_config -- json_config/common.sh@10 -- # shift 00:04:42.183 14:53:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.183 14:53:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.183 14:53:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.183 14:53:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.183 14:53:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.184 14:53:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46098 00:04:42.184 14:53:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.184 Waiting for target to run... 00:04:42.184 14:53:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.184 14:53:07 json_config -- json_config/common.sh@25 -- # waitforlisten 46098 /var/tmp/spdk_tgt.sock 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@829 -- # '[' -z 46098 ']' 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.184 14:53:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.184 [2024-07-12 14:53:07.969242] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:42.184 [2024-07-12 14:53:07.969508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.447 EAL: TSC is not safe to use in SMP mode 00:04:42.447 EAL: TSC is not invariant 00:04:42.447 [2024-07-12 14:53:08.231182] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.712 [2024-07-12 14:53:08.317793] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:42.712 [2024-07-12 14:53:08.320102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.712 [2024-07-12 14:53:08.463642] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:42.712 [2024-07-12 14:53:08.463699] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:42.712 [2024-07-12 14:53:08.471629] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:42.712 [2024-07-12 14:53:08.471663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:42.712 [2024-07-12 14:53:08.479643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:42.712 [2024-07-12 14:53:08.479675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:42.712 [2024-07-12 14:53:08.479691] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:42.712 [2024-07-12 14:53:08.487645] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:42.970 [2024-07-12 14:53:08.560450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:42.970 [2024-07-12 14:53:08.560501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.970 [2024-07-12 14:53:08.560512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24db4ea37780 00:04:42.970 [2024-07-12 14:53:08.560521] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.970 [2024-07-12 14:53:08.560587] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.970 [2024-07-12 14:53:08.560598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:43.227 14:53:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.227 14:53:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:43.227 00:04:43.227 14:53:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.227 14:53:08 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:43.227 INFO: Checking if target configuration is the same... 00:04:43.227 14:53:08 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:43.227 14:53:08 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.Lsn7iX /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.227 + '[' 2 -ne 2 ']' 00:04:43.227 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:43.227 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:43.227 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:43.227 +++ basename /tmp//sh-np.Lsn7iX 00:04:43.227 ++ mktemp /tmp/sh-np.Lsn7iX.XXX 00:04:43.227 + tmp_file_1=/tmp/sh-np.Lsn7iX.pRh 00:04:43.227 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.227 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.227 + tmp_file_2=/tmp/spdk_tgt_config.json.RO6 00:04:43.227 + ret=0 00:04:43.227 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:43.227 14:53:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:43.227 14:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.792 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:43.792 + diff -u /tmp/sh-np.Lsn7iX.pRh /tmp/spdk_tgt_config.json.RO6 00:04:43.792 INFO: JSON config files are the same 00:04:43.792 + echo 'INFO: JSON config files are the same' 00:04:43.792 + rm /tmp/sh-np.Lsn7iX.pRh /tmp/spdk_tgt_config.json.RO6 00:04:43.792 + exit 0 00:04:43.792 14:53:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:43.792 INFO: changing configuration and checking if this can be detected... 00:04:43.792 14:53:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:43.792 14:53:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.792 14:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.050 14:53:09 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.f4RyUG /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.050 + '[' 2 -ne 2 ']' 00:04:44.050 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:44.050 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:44.050 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:44.050 +++ basename /tmp//sh-np.f4RyUG 00:04:44.050 ++ mktemp /tmp/sh-np.f4RyUG.XXX 00:04:44.050 + tmp_file_1=/tmp/sh-np.f4RyUG.oHz 00:04:44.050 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.050 + tmp_file_2=/tmp/spdk_tgt_config.json.ipK 00:04:44.050 + ret=0 00:04:44.050 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.050 14:53:09 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:44.050 14:53:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.308 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.567 + diff -u /tmp/sh-np.f4RyUG.oHz /tmp/spdk_tgt_config.json.ipK 00:04:44.567 + ret=1 00:04:44.567 + echo '=== Start of file: /tmp/sh-np.f4RyUG.oHz ===' 00:04:44.567 + cat /tmp/sh-np.f4RyUG.oHz 00:04:44.567 + echo '=== End of file: /tmp/sh-np.f4RyUG.oHz ===' 00:04:44.567 + echo '' 00:04:44.567 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ipK ===' 00:04:44.567 + cat /tmp/spdk_tgt_config.json.ipK 00:04:44.567 + echo '=== End of file: /tmp/spdk_tgt_config.json.ipK ===' 00:04:44.567 + echo '' 00:04:44.567 + rm /tmp/sh-np.f4RyUG.oHz /tmp/spdk_tgt_config.json.ipK 00:04:44.567 + exit 1 00:04:44.567 INFO: configuration change detected. 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:44.567 14:53:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.567 14:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 46098 ]] 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:44.567 14:53:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.567 14:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:04:44.567 14:53:10 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:44.567 14:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:44.825 14:53:10 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:44.825 14:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:45.083 14:53:10 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:45.083 14:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
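Both comparisons in this part of the trace use the same recipe: json_diff.sh runs the saved configuration file and a fresh save_config dump through config_filter.py -method sort, so key ordering cannot cause false differences, and then compares the two with diff -u. An empty diff yields "JSON config files are the same" (exit 0); after MallocBdevForConfigChangeCheck is deleted the dumps no longer match, so the second run reports "configuration change detected." and exits 1. A condensed sketch of that comparison, assuming config_filter.py reads the configuration on stdin as the pipeline in the trace suggests, and with illustrative temp-file names:

  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Normalize both configurations so only real content differences remain.
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
  "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json

  if diff -u /tmp/saved.json /tmp/live.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi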
00:04:45.083 14:53:10 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:45.083 14:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:45.342 14:53:11 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:45.342 14:53:11 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:04:45.342 14:53:11 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:45.342 14:53:11 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.342 14:53:11 json_config -- json_config/json_config.sh@323 -- # killprocess 46098 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@948 -- # '[' -z 46098 ']' 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@952 -- # kill -0 46098 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@953 -- # uname 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46098 00:04:45.342 14:53:11 json_config -- common/autotest_common.sh@956 -- # tail -1 00:04:45.600 14:53:11 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:45.600 14:53:11 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:45.600 killing process with pid 46098 00:04:45.600 14:53:11 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46098' 00:04:45.600 14:53:11 json_config -- common/autotest_common.sh@967 -- # kill 46098 00:04:45.600 14:53:11 json_config -- common/autotest_common.sh@972 -- # wait 46098 00:04:45.860 14:53:11 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.860 14:53:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:45.860 14:53:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.860 14:53:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.860 14:53:11 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:45.860 INFO: Success 00:04:45.860 14:53:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:45.860 00:04:45.860 real 0m11.614s 00:04:45.860 user 0m18.570s 00:04:45.860 sys 0m1.875s 00:04:45.860 14:53:11 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.860 14:53:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.860 ************************************ 00:04:45.860 END TEST json_config 00:04:45.860 ************************************ 00:04:45.860 14:53:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.860 14:53:11 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.860 14:53:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.860 14:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.860 14:53:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.860 ************************************ 00:04:45.860 START TEST json_config_extra_key 
00:04:45.860 ************************************ 00:04:45.860 14:53:11 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.860 14:53:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.860 14:53:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:45.860 14:53:11 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.860 INFO: launching applications... 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.860 14:53:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46227 00:04:45.860 Waiting for target to run... 00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:45.860 14:53:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46227 /var/tmp/spdk_tgt.sock 00:04:45.861 14:53:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46227 ']' 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.861 14:53:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.861 [2024-07-12 14:53:11.639031] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:45.861 [2024-07-12 14:53:11.639174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:46.119 EAL: TSC is not safe to use in SMP mode 00:04:46.119 EAL: TSC is not invariant 00:04:46.119 [2024-07-12 14:53:11.899353] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.376 [2024-07-12 14:53:11.986846] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:46.376 [2024-07-12 14:53:11.989588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.943 14:53:12 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.943 00:04:46.943 14:53:12 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.943 INFO: shutting down applications... 00:04:46.943 14:53:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:46.943 14:53:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46227 ]] 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46227 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46227 00:04:46.943 14:53:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46227 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.509 SPDK target shutdown done 00:04:47.509 14:53:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.509 Success 00:04:47.509 14:53:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.509 00:04:47.509 real 0m1.757s 00:04:47.509 user 0m1.667s 00:04:47.509 sys 0m0.409s 00:04:47.509 14:53:13 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.509 ************************************ 00:04:47.509 END TEST json_config_extra_key 00:04:47.509 ************************************ 00:04:47.509 14:53:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.509 14:53:13 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.509 14:53:13 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.509 14:53:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.509 14:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.509 14:53:13 -- common/autotest_common.sh@10 -- # set +x 00:04:47.509 ************************************ 00:04:47.509 START TEST alias_rpc 00:04:47.509 ************************************ 00:04:47.509 14:53:13 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.767 * Looking for test storage... 
00:04:47.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:47.767 14:53:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.767 14:53:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.767 14:53:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46285 00:04:47.767 14:53:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46285 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46285 ']' 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.767 14:53:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.767 [2024-07-12 14:53:13.429191] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:47.767 [2024-07-12 14:53:13.429427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:48.336 EAL: TSC is not safe to use in SMP mode 00:04:48.336 EAL: TSC is not invariant 00:04:48.336 [2024-07-12 14:53:13.965784] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.336 [2024-07-12 14:53:14.046392] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:48.336 [2024-07-12 14:53:14.048566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.903 14:53:14 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.903 14:53:14 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:48.903 14:53:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:48.903 14:53:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46285 00:04:48.903 14:53:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46285 ']' 00:04:48.903 14:53:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46285 00:04:48.903 14:53:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46285 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46285' 00:04:49.161 killing process with pid 46285 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@967 -- # kill 46285 00:04:49.161 14:53:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 46285 00:04:49.420 00:04:49.420 real 0m1.692s 00:04:49.420 user 0m1.773s 00:04:49.420 sys 0m0.725s 00:04:49.420 14:53:14 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.420 ************************************ 00:04:49.420 END TEST alias_rpc 00:04:49.420 ************************************ 00:04:49.420 14:53:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.420 14:53:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.420 14:53:15 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:49.420 14:53:15 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.420 14:53:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.420 14:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.420 14:53:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.420 ************************************ 00:04:49.420 START TEST spdkcli_tcp 00:04:49.420 ************************************ 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.420 * Looking for test storage... 
00:04:49.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46346 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46346 00:04:49.420 14:53:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46346 ']' 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.420 14:53:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.420 [2024-07-12 14:53:15.173194] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:49.420 [2024-07-12 14:53:15.173362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:49.988 EAL: TSC is not safe to use in SMP mode 00:04:49.988 EAL: TSC is not invariant 00:04:49.988 [2024-07-12 14:53:15.716497] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.988 [2024-07-12 14:53:15.799945] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:49.988 [2024-07-12 14:53:15.800014] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:50.248 [2024-07-12 14:53:15.802751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.248 [2024-07-12 14:53:15.802742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.507 14:53:16 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.507 14:53:16 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:50.507 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46354 00:04:50.507 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.507 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.766 [ 00:04:50.766 "spdk_get_version", 00:04:50.766 "rpc_get_methods", 00:04:50.766 "env_dpdk_get_mem_stats", 00:04:50.766 "trace_get_info", 00:04:50.766 "trace_get_tpoint_group_mask", 00:04:50.766 "trace_disable_tpoint_group", 00:04:50.766 "trace_enable_tpoint_group", 00:04:50.766 "trace_clear_tpoint_mask", 00:04:50.766 "trace_set_tpoint_mask", 00:04:50.766 "notify_get_notifications", 00:04:50.766 "notify_get_types", 00:04:50.766 "accel_get_stats", 00:04:50.766 "accel_set_options", 00:04:50.766 "accel_set_driver", 00:04:50.766 "accel_crypto_key_destroy", 00:04:50.766 "accel_crypto_keys_get", 00:04:50.766 "accel_crypto_key_create", 00:04:50.766 "accel_assign_opc", 00:04:50.766 "accel_get_module_info", 00:04:50.766 "accel_get_opc_assignments", 00:04:50.766 "bdev_get_histogram", 00:04:50.766 "bdev_enable_histogram", 00:04:50.766 "bdev_set_qos_limit", 00:04:50.766 "bdev_set_qd_sampling_period", 00:04:50.766 "bdev_get_bdevs", 00:04:50.766 "bdev_reset_iostat", 00:04:50.766 "bdev_get_iostat", 00:04:50.766 "bdev_examine", 00:04:50.766 "bdev_wait_for_examine", 00:04:50.766 "bdev_set_options", 00:04:50.766 "keyring_get_keys", 00:04:50.766 "framework_get_pci_devices", 00:04:50.766 "framework_get_config", 00:04:50.766 "framework_get_subsystems", 00:04:50.766 "sock_get_default_impl", 00:04:50.766 "sock_set_default_impl", 00:04:50.766 "sock_impl_set_options", 00:04:50.766 "sock_impl_get_options", 00:04:50.766 "thread_set_cpumask", 00:04:50.766 "framework_get_governor", 00:04:50.766 "framework_get_scheduler", 00:04:50.766 "framework_set_scheduler", 00:04:50.766 "framework_get_reactors", 00:04:50.766 "thread_get_io_channels", 00:04:50.766 "thread_get_pollers", 00:04:50.766 "thread_get_stats", 00:04:50.766 "framework_monitor_context_switch", 00:04:50.766 "spdk_kill_instance", 00:04:50.766 "log_enable_timestamps", 00:04:50.766 "log_get_flags", 00:04:50.766 "log_clear_flag", 00:04:50.766 "log_set_flag", 00:04:50.766 "log_get_level", 00:04:50.766 "log_set_level", 00:04:50.766 "log_get_print_level", 00:04:50.766 "log_set_print_level", 00:04:50.766 "framework_enable_cpumask_locks", 00:04:50.766 "framework_disable_cpumask_locks", 00:04:50.766 "framework_wait_init", 00:04:50.766 "framework_start_init", 00:04:50.766 "iobuf_get_stats", 00:04:50.766 "iobuf_set_options", 00:04:50.766 "vmd_rescan", 00:04:50.766 "vmd_remove_device", 00:04:50.766 "vmd_enable", 00:04:50.766 "nvmf_stop_mdns_prr", 00:04:50.766 "nvmf_publish_mdns_prr", 00:04:50.766 "nvmf_subsystem_get_listeners", 00:04:50.766 "nvmf_subsystem_get_qpairs", 00:04:50.766 "nvmf_subsystem_get_controllers", 00:04:50.766 "nvmf_get_stats", 00:04:50.766 "nvmf_get_transports", 00:04:50.766 "nvmf_create_transport", 00:04:50.766 "nvmf_get_targets", 00:04:50.766 "nvmf_delete_target", 00:04:50.766 "nvmf_create_target", 00:04:50.766 
"nvmf_subsystem_allow_any_host", 00:04:50.766 "nvmf_subsystem_remove_host", 00:04:50.766 "nvmf_subsystem_add_host", 00:04:50.766 "nvmf_ns_remove_host", 00:04:50.766 "nvmf_ns_add_host", 00:04:50.766 "nvmf_subsystem_remove_ns", 00:04:50.766 "nvmf_subsystem_add_ns", 00:04:50.766 "nvmf_subsystem_listener_set_ana_state", 00:04:50.766 "nvmf_discovery_get_referrals", 00:04:50.766 "nvmf_discovery_remove_referral", 00:04:50.766 "nvmf_discovery_add_referral", 00:04:50.766 "nvmf_subsystem_remove_listener", 00:04:50.766 "nvmf_subsystem_add_listener", 00:04:50.766 "nvmf_delete_subsystem", 00:04:50.766 "nvmf_create_subsystem", 00:04:50.766 "nvmf_get_subsystems", 00:04:50.766 "nvmf_set_crdt", 00:04:50.766 "nvmf_set_config", 00:04:50.766 "nvmf_set_max_subsystems", 00:04:50.766 "scsi_get_devices", 00:04:50.766 "iscsi_get_histogram", 00:04:50.766 "iscsi_enable_histogram", 00:04:50.766 "iscsi_set_options", 00:04:50.766 "iscsi_get_auth_groups", 00:04:50.766 "iscsi_auth_group_remove_secret", 00:04:50.766 "iscsi_auth_group_add_secret", 00:04:50.766 "iscsi_delete_auth_group", 00:04:50.766 "iscsi_create_auth_group", 00:04:50.766 "iscsi_set_discovery_auth", 00:04:50.766 "iscsi_get_options", 00:04:50.766 "iscsi_target_node_request_logout", 00:04:50.766 "iscsi_target_node_set_redirect", 00:04:50.766 "iscsi_target_node_set_auth", 00:04:50.767 "iscsi_target_node_add_lun", 00:04:50.767 "iscsi_get_stats", 00:04:50.767 "iscsi_get_connections", 00:04:50.767 "iscsi_portal_group_set_auth", 00:04:50.767 "iscsi_start_portal_group", 00:04:50.767 "iscsi_delete_portal_group", 00:04:50.767 "iscsi_create_portal_group", 00:04:50.767 "iscsi_get_portal_groups", 00:04:50.767 "iscsi_delete_target_node", 00:04:50.767 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.767 "iscsi_target_node_add_pg_ig_maps", 00:04:50.767 "iscsi_create_target_node", 00:04:50.767 "iscsi_get_target_nodes", 00:04:50.767 "iscsi_delete_initiator_group", 00:04:50.767 "iscsi_initiator_group_remove_initiators", 00:04:50.767 "iscsi_initiator_group_add_initiators", 00:04:50.767 "iscsi_create_initiator_group", 00:04:50.767 "iscsi_get_initiator_groups", 00:04:50.767 "keyring_file_remove_key", 00:04:50.767 "keyring_file_add_key", 00:04:50.767 "iaa_scan_accel_module", 00:04:50.767 "dsa_scan_accel_module", 00:04:50.767 "ioat_scan_accel_module", 00:04:50.767 "accel_error_inject_error", 00:04:50.767 "bdev_aio_delete", 00:04:50.767 "bdev_aio_rescan", 00:04:50.767 "bdev_aio_create", 00:04:50.767 "blobfs_create", 00:04:50.767 "blobfs_detect", 00:04:50.767 "blobfs_set_cache_size", 00:04:50.767 "bdev_zone_block_delete", 00:04:50.767 "bdev_zone_block_create", 00:04:50.767 "bdev_delay_delete", 00:04:50.767 "bdev_delay_create", 00:04:50.767 "bdev_delay_update_latency", 00:04:50.767 "bdev_split_delete", 00:04:50.767 "bdev_split_create", 00:04:50.767 "bdev_error_inject_error", 00:04:50.767 "bdev_error_delete", 00:04:50.767 "bdev_error_create", 00:04:50.767 "bdev_raid_set_options", 00:04:50.767 "bdev_raid_remove_base_bdev", 00:04:50.767 "bdev_raid_add_base_bdev", 00:04:50.767 "bdev_raid_delete", 00:04:50.767 "bdev_raid_create", 00:04:50.767 "bdev_raid_get_bdevs", 00:04:50.767 "bdev_lvol_set_parent_bdev", 00:04:50.767 "bdev_lvol_set_parent", 00:04:50.767 "bdev_lvol_check_shallow_copy", 00:04:50.767 "bdev_lvol_start_shallow_copy", 00:04:50.767 "bdev_lvol_grow_lvstore", 00:04:50.767 "bdev_lvol_get_lvols", 00:04:50.767 "bdev_lvol_get_lvstores", 00:04:50.767 "bdev_lvol_delete", 00:04:50.767 "bdev_lvol_set_read_only", 00:04:50.767 "bdev_lvol_resize", 00:04:50.767 "bdev_lvol_decouple_parent", 
00:04:50.767 "bdev_lvol_inflate", 00:04:50.767 "bdev_lvol_rename", 00:04:50.767 "bdev_lvol_clone_bdev", 00:04:50.767 "bdev_lvol_clone", 00:04:50.767 "bdev_lvol_snapshot", 00:04:50.767 "bdev_lvol_create", 00:04:50.767 "bdev_lvol_delete_lvstore", 00:04:50.767 "bdev_lvol_rename_lvstore", 00:04:50.767 "bdev_lvol_create_lvstore", 00:04:50.767 "bdev_passthru_delete", 00:04:50.767 "bdev_passthru_create", 00:04:50.767 "bdev_nvme_send_cmd", 00:04:50.767 "bdev_nvme_get_path_iostat", 00:04:50.767 "bdev_nvme_get_mdns_discovery_info", 00:04:50.767 "bdev_nvme_stop_mdns_discovery", 00:04:50.767 "bdev_nvme_start_mdns_discovery", 00:04:50.767 "bdev_nvme_set_multipath_policy", 00:04:50.767 "bdev_nvme_set_preferred_path", 00:04:50.767 "bdev_nvme_get_io_paths", 00:04:50.767 "bdev_nvme_remove_error_injection", 00:04:50.767 "bdev_nvme_add_error_injection", 00:04:50.767 "bdev_nvme_get_discovery_info", 00:04:50.767 "bdev_nvme_stop_discovery", 00:04:50.767 "bdev_nvme_start_discovery", 00:04:50.767 "bdev_nvme_get_controller_health_info", 00:04:50.767 "bdev_nvme_disable_controller", 00:04:50.767 "bdev_nvme_enable_controller", 00:04:50.767 "bdev_nvme_reset_controller", 00:04:50.767 "bdev_nvme_get_transport_statistics", 00:04:50.767 "bdev_nvme_apply_firmware", 00:04:50.767 "bdev_nvme_detach_controller", 00:04:50.767 "bdev_nvme_get_controllers", 00:04:50.767 "bdev_nvme_attach_controller", 00:04:50.767 "bdev_nvme_set_hotplug", 00:04:50.767 "bdev_nvme_set_options", 00:04:50.767 "bdev_null_resize", 00:04:50.767 "bdev_null_delete", 00:04:50.767 "bdev_null_create", 00:04:50.767 "bdev_malloc_delete", 00:04:50.767 "bdev_malloc_create" 00:04:50.767 ] 00:04:50.767 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.767 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.767 14:53:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46346 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46346 ']' 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46346 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46346 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:50.767 killing process with pid 46346 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46346' 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46346 00:04:50.767 14:53:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46346 00:04:51.026 00:04:51.026 real 0m1.682s 00:04:51.026 user 0m2.507s 00:04:51.026 sys 0m0.805s 00:04:51.026 ************************************ 00:04:51.026 END TEST spdkcli_tcp 00:04:51.026 ************************************ 00:04:51.026 14:53:16 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.026 14:53:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.026 14:53:16 -- common/autotest_common.sh@1142 -- # return 
0 00:04:51.026 14:53:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.026 14:53:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.026 14:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.026 14:53:16 -- common/autotest_common.sh@10 -- # set +x 00:04:51.026 ************************************ 00:04:51.026 START TEST dpdk_mem_utility 00:04:51.026 ************************************ 00:04:51.026 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.285 * Looking for test storage... 00:04:51.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.285 14:53:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.285 14:53:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46425 00:04:51.285 14:53:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46425 00:04:51.285 14:53:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46425 ']' 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.286 14:53:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.286 [2024-07-12 14:53:16.889607] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:51.286 [2024-07-12 14:53:16.889820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:51.853 EAL: TSC is not safe to use in SMP mode 00:04:51.853 EAL: TSC is not invariant 00:04:51.853 [2024-07-12 14:53:17.409910] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.853 [2024-07-12 14:53:17.492946] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:51.853 [2024-07-12 14:53:17.495140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.112 14:53:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.112 14:53:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:52.112 14:53:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.112 14:53:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.112 14:53:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.112 14:53:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.112 { 00:04:52.112 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.112 } 00:04:52.112 14:53:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.112 14:53:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.370 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:52.370 1 heaps totaling size 2048.000000 MiB 00:04:52.370 size: 2048.000000 MiB heap id: 0 00:04:52.370 end heaps---------- 00:04:52.370 8 mempools totaling size 592.563660 MiB 00:04:52.370 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:52.370 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:52.370 size: 84.500549 MiB name: bdev_io_46425 00:04:52.370 size: 51.008362 MiB name: evtpool_46425 00:04:52.370 size: 50.000549 MiB name: msgpool_46425 00:04:52.370 size: 21.758911 MiB name: PDU_Pool 00:04:52.370 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:52.370 size: 0.026123 MiB name: Session_Pool 00:04:52.370 end mempools------- 00:04:52.370 6 memzones totaling size 4.142822 MiB 00:04:52.370 size: 1.000366 MiB name: RG_ring_0_46425 00:04:52.370 size: 1.000366 MiB name: RG_ring_1_46425 00:04:52.370 size: 1.000366 MiB name: RG_ring_4_46425 00:04:52.370 size: 1.000366 MiB name: RG_ring_5_46425 00:04:52.370 size: 0.125366 MiB name: RG_ring_2_46425 00:04:52.370 size: 0.015991 MiB name: RG_ring_3_46425 00:04:52.370 end memzones------- 00:04:52.370 14:53:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.370 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 8 00:04:52.370 list of free elements. size: 1254.072021 MiB 00:04:52.370 element at address: 0x1060000000 with size: 512.000000 MiB 00:04:52.371 element at address: 0x1090000000 with size: 256.000000 MiB 00:04:52.371 element at address: 0x10b0000000 with size: 256.000000 MiB 00:04:52.371 element at address: 0x10d0000000 with size: 103.550476 MiB 00:04:52.371 element at address: 0x1130000000 with size: 88.694702 MiB 00:04:52.371 element at address: 0x10f0000000 with size: 26.986328 MiB 00:04:52.371 element at address: 0x1110000000 with size: 10.714783 MiB 00:04:52.371 element at address: 0x1112700000 with size: 0.125732 MiB 00:04:52.371 list of standard malloc elements. 
size: 197.217834 MiB 00:04:52.371 element at address: 0x1117bfff80 with size: 132.000122 MiB 00:04:52.371 element at address: 0x11358b5f80 with size: 64.000122 MiB 00:04:52.371 element at address: 0x11125fff80 with size: 1.000122 MiB 00:04:52.371 element at address: 0x113ffd9f00 with size: 0.140747 MiB 00:04:52.371 element at address: 0x111276fc80 with size: 0.062622 MiB 00:04:52.371 element at address: 0x113fffdf80 with size: 0.007935 MiB 00:04:52.371 element at address: 0x11398b6480 with size: 0.000305 MiB 00:04:52.371 element at address: 0x1112720300 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11127203c0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1112720480 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1112720540 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1112720600 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1112727200 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1112727400 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11127274c0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x111272f780 with size: 0.000183 MiB 00:04:52.371 element at address: 0x111272f840 with size: 0.000183 MiB 00:04:52.371 element at address: 0x111272f900 with size: 0.000183 MiB 00:04:52.371 element at address: 0x111276fbc0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6000 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b60c0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6180 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6240 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6300 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b63c0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b65c0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6680 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6880 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398b6940 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398d6c00 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11398d6cc0 with size: 0.000183 MiB 00:04:52.371 element at address: 0x11399d6f80 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1139ad7240 with size: 0.000183 MiB 00:04:52.371 element at address: 0x1139ad7300 with size: 0.000183 MiB 00:04:52.371 element at address: 0x113ccd7640 with size: 0.000183 MiB 00:04:52.371 element at address: 0x113ccd7840 with size: 0.000183 MiB 00:04:52.371 element at address: 0x113ccd7900 with size: 0.000183 MiB 00:04:52.371 element at address: 0x113fed7c40 with size: 0.000183 MiB 00:04:52.371 element at address: 0x113ffd9e40 with size: 0.000183 MiB 00:04:52.371 list of memzone associated elements. 
size: 596.710144 MiB 00:04:52.371 element at address: 0x10f2cfcac0 with size: 211.013000 MiB 00:04:52.371 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.371 element at address: 0x10d678cec0 with size: 152.449524 MiB 00:04:52.371 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:52.371 element at address: 0x111277fd00 with size: 84.000122 MiB 00:04:52.371 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46425_0 00:04:52.371 element at address: 0x113ccd79c0 with size: 48.000122 MiB 00:04:52.371 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46425_0 00:04:52.371 element at address: 0x1139ad73c0 with size: 48.000122 MiB 00:04:52.371 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46425_0 00:04:52.371 element at address: 0x1110f3d780 with size: 20.250671 MiB 00:04:52.371 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:52.371 element at address: 0x10f1afc800 with size: 18.000671 MiB 00:04:52.371 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.371 element at address: 0x113fcd7a40 with size: 2.000488 MiB 00:04:52.371 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46425 00:04:52.371 element at address: 0x113cad7440 with size: 2.000488 MiB 00:04:52.371 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46425 00:04:52.371 element at address: 0x113fed7d00 with size: 1.008118 MiB 00:04:52.371 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46425 00:04:52.371 element at address: 0x11123fdc40 with size: 1.008118 MiB 00:04:52.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.371 element at address: 0x1110e3b640 with size: 1.008118 MiB 00:04:52.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.371 element at address: 0x1110d39500 with size: 1.008118 MiB 00:04:52.371 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.371 element at address: 0x1110c373c0 with size: 1.008118 MiB 00:04:52.371 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.371 element at address: 0x11399d7040 with size: 1.000488 MiB 00:04:52.371 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46425 00:04:52.371 element at address: 0x11398d6d80 with size: 1.000488 MiB 00:04:52.371 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46425 00:04:52.371 element at address: 0x11124ffd80 with size: 1.000488 MiB 00:04:52.371 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46425 00:04:52.371 element at address: 0x1110ab6fc0 with size: 1.000488 MiB 00:04:52.371 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46425 00:04:52.371 element at address: 0x1117b7fd80 with size: 0.500488 MiB 00:04:52.371 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46425 00:04:52.371 element at address: 0x111237da40 with size: 0.500488 MiB 00:04:52.371 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.371 element at address: 0x1110bb71c0 with size: 0.500488 MiB 00:04:52.371 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.371 element at address: 0x111272f9c0 with size: 0.250488 MiB 00:04:52.371 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.371 element at address: 0x11398b6a00 with size: 0.125488 MiB 00:04:52.371 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46425 00:04:52.371 
element at address: 0x1112727580 with size: 0.031738 MiB 00:04:52.371 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.371 element at address: 0x11127206c0 with size: 0.023743 MiB 00:04:52.371 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.371 element at address: 0x11358b1d80 with size: 0.016113 MiB 00:04:52.371 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46425 00:04:52.371 element at address: 0x1112726800 with size: 0.002441 MiB 00:04:52.371 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.371 element at address: 0x113ccd7700 with size: 0.000305 MiB 00:04:52.371 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46425 00:04:52.371 element at address: 0x11398b6740 with size: 0.000305 MiB 00:04:52.371 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46425 00:04:52.371 element at address: 0x11127272c0 with size: 0.000305 MiB 00:04:52.371 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.371 14:53:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.371 14:53:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46425 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46425 ']' 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46425 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46425 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:52.371 killing process with pid 46425 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46425' 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46425 00:04:52.371 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46425 00:04:52.630 00:04:52.630 real 0m1.546s 00:04:52.630 user 0m1.531s 00:04:52.630 sys 0m0.690s 00:04:52.630 ************************************ 00:04:52.630 END TEST dpdk_mem_utility 00:04:52.630 ************************************ 00:04:52.630 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.630 14:53:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.630 14:53:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.630 14:53:18 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.630 14:53:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.630 14:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.630 14:53:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.630 ************************************ 00:04:52.630 START TEST event 00:04:52.630 ************************************ 00:04:52.630 14:53:18 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:52.889 * Looking for test storage... 
00:04:52.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:52.889 14:53:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:52.889 14:53:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.889 14:53:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.889 14:53:18 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:52.889 14:53:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.889 14:53:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.889 ************************************ 00:04:52.889 START TEST event_perf 00:04:52.889 ************************************ 00:04:52.889 14:53:18 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.889 Running I/O for 1 seconds...[2024-07-12 14:53:18.493734] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:52.889 [2024-07-12 14:53:18.493967] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:53.494 EAL: TSC is not safe to use in SMP mode 00:04:53.494 EAL: TSC is not invariant 00:04:53.494 [2024-07-12 14:53:19.029217] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.494 [2024-07-12 14:53:19.110129] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:53.494 [2024-07-12 14:53:19.110208] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:53.494 [2024-07-12 14:53:19.110235] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:53.494 [2024-07-12 14:53:19.110243] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:53.494 [2024-07-12 14:53:19.114233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.494 [2024-07-12 14:53:19.114462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.494 Running I/O for 1 seconds...[2024-07-12 14:53:19.114349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.494 [2024-07-12 14:53:19.114454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.468 00:04:54.468 lcore 0: 2715390 00:04:54.468 lcore 1: 2715389 00:04:54.468 lcore 2: 2715391 00:04:54.468 lcore 3: 2715390 00:04:54.468 done. 
00:04:54.468 00:04:54.468 real 0m1.737s 00:04:54.468 user 0m4.180s 00:04:54.468 sys 0m0.551s 00:04:54.468 14:53:20 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.468 14:53:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.468 ************************************ 00:04:54.468 END TEST event_perf 00:04:54.468 ************************************ 00:04:54.468 14:53:20 event -- common/autotest_common.sh@1142 -- # return 0 00:04:54.468 14:53:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.468 14:53:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:54.468 14:53:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.468 14:53:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.468 ************************************ 00:04:54.468 START TEST event_reactor 00:04:54.468 ************************************ 00:04:54.468 14:53:20 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.468 [2024-07-12 14:53:20.270045] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:04:54.468 [2024-07-12 14:53:20.270272] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:55.035 EAL: TSC is not safe to use in SMP mode 00:04:55.035 EAL: TSC is not invariant 00:04:55.035 [2024-07-12 14:53:20.808301] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.294 [2024-07-12 14:53:20.894670] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:55.294 [2024-07-12 14:53:20.896999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.226 test_start 00:04:56.226 oneshot 00:04:56.226 tick 100 00:04:56.226 tick 100 00:04:56.226 tick 250 00:04:56.226 tick 100 00:04:56.226 tick 100 00:04:56.226 tick 100 00:04:56.226 tick 250 00:04:56.226 tick 500 00:04:56.226 tick 100 00:04:56.226 tick 100 00:04:56.226 tick 250 00:04:56.226 tick 100 00:04:56.226 tick 100 00:04:56.226 test_end 00:04:56.226 00:04:56.226 real 0m1.745s 00:04:56.226 user 0m1.162s 00:04:56.226 sys 0m0.581s 00:04:56.226 14:53:22 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.226 14:53:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:56.226 ************************************ 00:04:56.226 END TEST event_reactor 00:04:56.226 ************************************ 00:04:56.496 14:53:22 event -- common/autotest_common.sh@1142 -- # return 0 00:04:56.496 14:53:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.496 14:53:22 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:56.496 14:53:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.496 14:53:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.496 ************************************ 00:04:56.496 START TEST event_reactor_perf 00:04:56.496 ************************************ 00:04:56.496 14:53:22 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.496 [2024-07-12 14:53:22.061226] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:04:56.496 [2024-07-12 14:53:22.061431] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:56.781 EAL: TSC is not safe to use in SMP mode 00:04:56.781 EAL: TSC is not invariant 00:04:56.781 [2024-07-12 14:53:22.583652] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.039 [2024-07-12 14:53:22.663792] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:57.039 [2024-07-12 14:53:22.665882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.000 test_start 00:04:58.000 test_end 00:04:58.001 Performance: 3556443 events per second 00:04:58.001 00:04:58.001 real 0m1.730s 00:04:58.001 user 0m1.170s 00:04:58.001 sys 0m0.558s 00:04:58.001 14:53:23 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.001 14:53:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.001 ************************************ 00:04:58.001 END TEST event_reactor_perf 00:04:58.001 ************************************ 00:04:58.258 14:53:23 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.258 14:53:23 event -- event/event.sh@49 -- # uname -s 00:04:58.258 14:53:23 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:58.258 00:04:58.258 real 0m5.480s 00:04:58.258 user 0m6.640s 00:04:58.258 sys 0m1.858s 00:04:58.258 14:53:23 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.258 14:53:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.258 ************************************ 00:04:58.258 END TEST event 00:04:58.258 ************************************ 00:04:58.258 14:53:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.258 14:53:23 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:58.258 14:53:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.258 14:53:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.258 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.258 ************************************ 00:04:58.258 START TEST thread 00:04:58.258 ************************************ 00:04:58.258 14:53:23 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:58.258 * Looking for test storage... 00:04:58.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:58.258 14:53:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.258 14:53:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:58.258 14:53:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.258 14:53:23 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.258 ************************************ 00:04:58.258 START TEST thread_poller_perf 00:04:58.258 ************************************ 00:04:58.258 14:53:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.258 [2024-07-12 14:53:24.013325] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:04:58.258 [2024-07-12 14:53:24.013507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:58.822 EAL: TSC is not safe to use in SMP mode 00:04:58.822 EAL: TSC is not invariant 00:04:58.822 [2024-07-12 14:53:24.567454] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.080 [2024-07-12 14:53:24.655139] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:59.080 [2024-07-12 14:53:24.657340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.080 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:00.016 ====================================== 00:05:00.016 busy:2201595628 (cyc) 00:05:00.016 total_run_count: 5387000 00:05:00.016 tsc_hz: 2199998543 (cyc) 00:05:00.016 ====================================== 00:05:00.016 poller_cost: 408 (cyc), 185 (nsec) 00:05:00.016 00:05:00.016 real 0m1.767s 00:05:00.016 user 0m1.171s 00:05:00.016 sys 0m0.590s 00:05:00.017 14:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.017 14:53:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.017 ************************************ 00:05:00.017 END TEST thread_poller_perf 00:05:00.017 ************************************ 00:05:00.017 14:53:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:00.017 14:53:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.017 14:53:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:00.017 14:53:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.017 14:53:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.017 ************************************ 00:05:00.017 START TEST thread_poller_perf 00:05:00.017 ************************************ 00:05:00.017 14:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.017 [2024-07-12 14:53:25.817885] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:00.017 [2024-07-12 14:53:25.818142] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:00.583 EAL: TSC is not safe to use in SMP mode 00:05:00.583 EAL: TSC is not invariant 00:05:00.583 [2024-07-12 14:53:26.361684] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.841 [2024-07-12 14:53:26.448320] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:00.841 [2024-07-12 14:53:26.450482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.841 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:01.777 ====================================== 00:05:01.777 busy:2200979084 (cyc) 00:05:01.777 total_run_count: 68137000 00:05:01.777 tsc_hz: 2199998543 (cyc) 00:05:01.777 ====================================== 00:05:01.777 poller_cost: 32 (cyc), 14 (nsec) 00:05:01.777 00:05:01.777 real 0m1.757s 00:05:01.777 user 0m1.185s 00:05:01.777 sys 0m0.569s 00:05:01.777 14:53:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.777 14:53:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.777 ************************************ 00:05:01.777 END TEST thread_poller_perf 00:05:01.777 ************************************ 00:05:02.036 14:53:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:02.036 14:53:27 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:02.036 14:53:27 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:05:02.036 14:53:27 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.036 14:53:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.036 14:53:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.036 ************************************ 00:05:02.036 START TEST thread_spdk_lock 00:05:02.036 ************************************ 00:05:02.036 14:53:27 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:05:02.036 [2024-07-12 14:53:27.621769] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:02.036 [2024-07-12 14:53:27.621916] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:02.603 EAL: TSC is not safe to use in SMP mode 00:05:02.603 EAL: TSC is not invariant 00:05:02.603 [2024-07-12 14:53:28.154898] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.603 [2024-07-12 14:53:28.236550] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:02.603 [2024-07-12 14:53:28.236601] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:05:02.603 [2024-07-12 14:53:28.239213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.603 [2024-07-12 14:53:28.239205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.170 [2024-07-12 14:53:28.684097] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:03.170 [2024-07-12 14:53:28.684165] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:03.170 [2024-07-12 14:53:28.684174] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x317420 00:05:03.170 [2024-07-12 14:53:28.684680] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:03.170 [2024-07-12 14:53:28.684780] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:03.170 [2024-07-12 14:53:28.684789] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:03.170 Starting test contend 00:05:03.170 Worker Delay Wait us Hold us Total us 00:05:03.170 0 3 262404 166019 428423 00:05:03.170 1 5 163325 267495 430821 00:05:03.170 PASS test contend 00:05:03.170 Starting test hold_by_poller 00:05:03.170 PASS test hold_by_poller 00:05:03.170 Starting test hold_by_message 00:05:03.170 PASS test hold_by_message 00:05:03.170 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:05:03.170 100014 assertions passed 00:05:03.170 0 assertions failed 00:05:03.170 00:05:03.170 real 0m1.181s 00:05:03.170 user 0m1.074s 00:05:03.170 sys 0m0.549s 00:05:03.170 ************************************ 00:05:03.170 END TEST thread_spdk_lock 00:05:03.170 ************************************ 00:05:03.170 14:53:28 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.170 14:53:28 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:03.170 14:53:28 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:03.170 00:05:03.170 real 0m4.971s 00:05:03.170 user 0m3.583s 00:05:03.170 sys 0m1.853s 00:05:03.170 14:53:28 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.170 14:53:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.170 ************************************ 00:05:03.170 END TEST thread 00:05:03.170 ************************************ 00:05:03.170 14:53:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.170 14:53:28 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:03.170 14:53:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.170 14:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.170 14:53:28 -- common/autotest_common.sh@10 -- # set +x 00:05:03.170 ************************************ 00:05:03.170 START TEST accel 00:05:03.170 ************************************ 00:05:03.170 14:53:28 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:03.428 * Looking for test storage... 
00:05:03.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:03.428 14:53:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:03.428 14:53:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:03.428 14:53:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.428 14:53:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46725 00:05:03.428 14:53:29 accel -- accel/accel.sh@63 -- # waitforlisten 46725 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@829 -- # '[' -z 46725 ']' 00:05:03.428 14:53:29 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.0TO1xb 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.428 14:53:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.428 [2024-07-12 14:53:29.026851] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:03.429 [2024-07-12 14:53:29.027074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:03.993 EAL: TSC is not safe to use in SMP mode 00:05:03.993 EAL: TSC is not invariant 00:05:03.993 [2024-07-12 14:53:29.540600] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.993 [2024-07-12 14:53:29.654204] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:03.993 14:53:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:03.993 14:53:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.993 14:53:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.993 14:53:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.993 14:53:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.993 14:53:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.993 14:53:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:03.993 14:53:29 accel -- accel/accel.sh@41 -- # jq -r . 00:05:03.993 [2024-07-12 14:53:29.665130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@862 -- # return 0 00:05:04.559 14:53:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:04.559 14:53:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:04.559 14:53:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:04.559 14:53:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:04.559 14:53:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:04.559 14:53:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:04.559 14:53:30 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 
14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # IFS== 00:05:04.559 14:53:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:04.559 14:53:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:04.559 14:53:30 accel -- accel/accel.sh@75 -- # killprocess 46725 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@948 -- # '[' -z 46725 ']' 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@952 -- # kill -0 46725 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@953 -- # uname 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46725 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@956 -- # tail -1 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:04.559 killing process with pid 46725 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46725' 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@967 -- # kill 46725 00:05:04.559 14:53:30 accel -- common/autotest_common.sh@972 -- # wait 46725 00:05:04.817 14:53:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:04.817 14:53:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:04.817 14:53:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:04.817 14:53:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.817 14:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.817 14:53:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:04.817 14:53:30 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vq78Kg -h 00:05:04.817 14:53:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.818 14:53:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:04.818 14:53:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.818 14:53:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:04.818 14:53:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:04.818 14:53:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.818 14:53:30 accel -- common/autotest_common.sh@10 -- # 
set +x 00:05:04.818 ************************************ 00:05:04.818 START TEST accel_missing_filename 00:05:04.818 ************************************ 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.818 14:53:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:04.818 14:53:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.W1nSPd -t 1 -w compress 00:05:04.818 [2024-07-12 14:53:30.469373] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:04.818 [2024-07-12 14:53:30.469590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:05.385 EAL: TSC is not safe to use in SMP mode 00:05:05.385 EAL: TSC is not invariant 00:05:05.385 [2024-07-12 14:53:31.019469] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.385 [2024-07-12 14:53:31.106850] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:05.385 14:53:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:05.385 [2024-07-12 14:53:31.119574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.385 [2024-07-12 14:53:31.121917] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.385 [2024-07-12 14:53:31.157715] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:05.644 A filename is required. 
00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.644 00:05:05.644 real 0m0.816s 00:05:05.644 user 0m0.222s 00:05:05.644 sys 0m0.593s 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.644 14:53:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:05.644 ************************************ 00:05:05.644 END TEST accel_missing_filename 00:05:05.644 ************************************ 00:05:05.644 14:53:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.644 14:53:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:05.644 14:53:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:05.644 14:53:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.644 14:53:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.644 ************************************ 00:05:05.644 START TEST accel_compress_verify 00:05:05.644 ************************************ 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.644 14:53:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:05.644 14:53:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Uu6Nvg -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:05.644 [2024-07-12 14:53:31.331395] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:05:05.644 [2024-07-12 14:53:31.331638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:06.212 EAL: TSC is not safe to use in SMP mode 00:05:06.212 EAL: TSC is not invariant 00:05:06.212 [2024-07-12 14:53:31.905055] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.212 [2024-07-12 14:53:32.012572] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:06.212 14:53:32 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:06.470 [2024-07-12 14:53:32.024464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.470 [2024-07-12 14:53:32.027785] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.470 [2024-07-12 14:53:32.066887] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:06.470 00:05:06.470 Compression does not support the verify option, aborting. 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:06.470 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.470 00:05:06.470 real 0m0.855s 00:05:06.470 user 0m0.241s 00:05:06.470 sys 0m0.612s 00:05:06.470 ************************************ 00:05:06.471 END TEST accel_compress_verify 00:05:06.471 ************************************ 00:05:06.471 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.471 14:53:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.471 14:53:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.471 ************************************ 00:05:06.471 START TEST accel_wrong_workload 00:05:06.471 ************************************ 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:06.471 14:53:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rrmsyR -t 1 -w foobar 00:05:06.471 Unsupported workload type: foobar 00:05:06.471 [2024-07-12 14:53:32.230772] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:06.471 accel_perf options: 00:05:06.471 [-h help message] 00:05:06.471 [-q queue depth per core] 00:05:06.471 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:06.471 [-T number of threads per core 00:05:06.471 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:06.471 [-t time in seconds] 00:05:06.471 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:06.471 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:06.471 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:06.471 [-l for compress/decompress workloads, name of uncompressed input file 00:05:06.471 [-S for crc32c workload, use this seed value (default 0) 00:05:06.471 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:06.471 [-f for fill workload, use this BYTE value (default 255) 00:05:06.471 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:06.471 [-y verify result if this switch is on] 00:05:06.471 [-a tasks to allocate per core (default: same value as -q)] 00:05:06.471 Can be used to spread operations across a wider range of memory. 
00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.471 00:05:06.471 real 0m0.009s 00:05:06.471 user 0m0.003s 00:05:06.471 sys 0m0.008s 00:05:06.471 ************************************ 00:05:06.471 END TEST accel_wrong_workload 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.471 14:53:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:06.471 ************************************ 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.471 14:53:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.471 14:53:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.471 ************************************ 00:05:06.471 START TEST accel_negative_buffers 00:05:06.471 ************************************ 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:06.471 14:53:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZoYrg4 -t 1 -w xor -y -x -1 00:05:06.471 -x option must be non-negative. 00:05:06.471 [2024-07-12 14:53:32.281710] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:06.471 accel_perf options: 00:05:06.471 [-h help message] 00:05:06.471 [-q queue depth per core] 00:05:06.471 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:06.471 [-T number of threads per core 00:05:06.471 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:06.471 [-t time in seconds] 00:05:06.471 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:06.471 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:06.471 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:06.471 [-l for compress/decompress workloads, name of uncompressed input file 00:05:06.471 [-S for crc32c workload, use this seed value (default 0) 00:05:06.471 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:06.471 [-f for fill workload, use this BYTE value (default 255) 00:05:06.471 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:06.471 [-y verify result if this switch is on] 00:05:06.471 [-a tasks to allocate per core (default: same value as -q)] 00:05:06.471 Can be used to spread operations across a wider range of memory. 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.471 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.730 ************************************ 00:05:06.730 END TEST accel_negative_buffers 00:05:06.730 ************************************ 00:05:06.730 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.730 00:05:06.730 real 0m0.009s 00:05:06.730 user 0m0.002s 00:05:06.730 sys 0m0.007s 00:05:06.730 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.730 14:53:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:06.730 14:53:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.730 14:53:32 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:06.730 14:53:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:06.730 14:53:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.730 14:53:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.730 ************************************ 00:05:06.730 START TEST accel_crc32c 00:05:06.730 ************************************ 00:05:06.730 14:53:32 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.730 14:53:32 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.oPJ75l -t 1 -w crc32c -S 32 -y 00:05:06.730 [2024-07-12 14:53:32.334160] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:05:06.730 [2024-07-12 14:53:32.334362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:07.295 EAL: TSC is not safe to use in SMP mode 00:05:07.295 EAL: TSC is not invariant 00:05:07.295 [2024-07-12 14:53:32.879436] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.295 [2024-07-12 14:53:32.964928] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:07.295 [2024-07-12 14:53:32.973131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.295 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.296 14:53:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 
14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:08.669 14:53:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.669 00:05:08.669 real 0m1.805s 00:05:08.669 user 0m1.222s 00:05:08.669 sys 0m0.592s 00:05:08.669 14:53:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.669 ************************************ 00:05:08.669 END TEST accel_crc32c 00:05:08.669 ************************************ 00:05:08.669 14:53:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:08.669 14:53:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.669 14:53:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:08.669 14:53:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:08.669 14:53:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.669 14:53:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.669 ************************************ 00:05:08.669 START TEST accel_crc32c_C2 00:05:08.669 ************************************ 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:08.669 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nhEk5r -t 1 -w crc32c -y -C 2 00:05:08.669 [2024-07-12 14:53:34.181440] 
Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:08.669 [2024-07-12 14:53:34.181641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:08.928 EAL: TSC is not safe to use in SMP mode 00:05:08.928 EAL: TSC is not invariant 00:05:08.928 [2024-07-12 14:53:34.700341] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.187 [2024-07-12 14:53:34.788300] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:09.187 [2024-07-12 14:53:34.795189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.187 14:53:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.563 00:05:10.563 real 0m1.772s 00:05:10.563 user 0m1.209s 00:05:10.563 sys 0m0.573s 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.563 ************************************ 00:05:10.563 END TEST accel_crc32c_C2 00:05:10.563 ************************************ 00:05:10.563 14:53:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:10.563 14:53:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.563 14:53:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:10.563 14:53:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:10.563 14:53:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.563 14:53:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.563 ************************************ 00:05:10.563 START TEST accel_copy 00:05:10.563 ************************************ 00:05:10.563 14:53:35 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:10.563 14:53:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:10.563 14:53:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:10.563 14:53:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.563 14:53:35 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.563 14:53:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:10.563 14:53:35 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RE1onV -t 1 -w copy -y 00:05:10.563 [2024-07-12 14:53:35.993580] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:10.563 [2024-07-12 14:53:35.993791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:10.823 EAL: TSC is not safe to use in SMP mode 00:05:10.823 EAL: TSC is not invariant 00:05:10.823 [2024-07-12 14:53:36.537547] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.823 [2024-07-12 14:53:36.628388] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:10.823 14:53:36 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:11.082 [2024-07-12 14:53:36.638273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.082 14:53:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:12.020 14:53:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.020 00:05:12.020 real 0m1.806s 00:05:12.020 user 0m1.223s 00:05:12.020 sys 0m0.590s 00:05:12.020 14:53:37 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.020 14:53:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.020 ************************************ 00:05:12.020 END TEST accel_copy 00:05:12.020 ************************************ 00:05:12.020 14:53:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.020 14:53:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.020 14:53:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:12.020 14:53:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.020 14:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.020 ************************************ 00:05:12.020 START TEST accel_fill 00:05:12.020 ************************************ 00:05:12.020 14:53:37 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.020 14:53:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:12.020 14:53:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:12.279 14:53:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.279 14:53:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.279 14:53:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.279 14:53:37 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.C60YBy -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.279 [2024-07-12 14:53:37.841751] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:05:12.279 [2024-07-12 14:53:37.842026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:12.847 EAL: TSC is not safe to use in SMP mode 00:05:12.847 EAL: TSC is not invariant 00:05:12.847 [2024-07-12 14:53:38.371591] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.847 [2024-07-12 14:53:38.454307] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:12.847 [2024-07-12 14:53:38.464886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.847 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.848 14:53:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:14.231 14:53:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.231 00:05:14.231 real 0m1.791s 00:05:14.231 user 0m1.222s 00:05:14.231 sys 0m0.578s 00:05:14.231 14:53:39 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.231 14:53:39 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:14.231 ************************************ 00:05:14.231 END TEST accel_fill 00:05:14.231 ************************************ 00:05:14.231 14:53:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.231 14:53:39 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:14.231 14:53:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:14.231 14:53:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.231 14:53:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.231 ************************************ 00:05:14.231 START TEST accel_copy_crc32c 00:05:14.231 ************************************ 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:14.231 14:53:39 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rNVPoW -t 1 -w copy_crc32c -y 00:05:14.231 [2024-07-12 14:53:39.678384] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:05:14.231 [2024-07-12 14:53:39.678604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:14.490 EAL: TSC is not safe to use in SMP mode 00:05:14.490 EAL: TSC is not invariant 00:05:14.490 [2024-07-12 14:53:40.216464] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.490 [2024-07-12 14:53:40.299894] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:14.748 [2024-07-12 14:53:40.307122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.748 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.749 14:53:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.683 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.684 00:05:15.684 real 0m1.793s 00:05:15.684 user 0m1.239s 00:05:15.684 sys 0m0.567s 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.684 14:53:41 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:15.684 ************************************ 00:05:15.684 END TEST accel_copy_crc32c 00:05:15.684 ************************************ 00:05:15.942 14:53:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.942 14:53:41 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.942 14:53:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:15.942 14:53:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.942 14:53:41 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:15.942 ************************************ 00:05:15.942 START TEST accel_copy_crc32c_C2 00:05:15.942 ************************************ 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.942 14:53:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VtAVW2 -t 1 -w copy_crc32c -y -C 2 00:05:15.942 [2024-07-12 14:53:41.519366] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:15.942 [2024-07-12 14:53:41.519580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:16.510 EAL: TSC is not safe to use in SMP mode 00:05:16.510 EAL: TSC is not invariant 00:05:16.510 [2024-07-12 14:53:42.069309] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.510 [2024-07-12 14:53:42.164228] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:05:16.510 [2024-07-12 14:53:42.175435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:16.510 14:53:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.886 00:05:17.886 real 0m1.823s 00:05:17.886 user 0m1.242s 00:05:17.886 sys 0m0.591s 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.886 14:53:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:17.886 ************************************ 00:05:17.886 END TEST accel_copy_crc32c_C2 00:05:17.886 ************************************ 00:05:17.886 14:53:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.886 14:53:43 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:17.886 14:53:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:17.886 14:53:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.886 14:53:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.886 ************************************ 00:05:17.886 START TEST accel_dualcast 00:05:17.886 ************************************ 00:05:17.886 14:53:43 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.886 14:53:43 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.b9g7zL -t 1 -w dualcast -y 00:05:17.886 [2024-07-12 14:53:43.388264] Starting SPDK 
v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:17.886 [2024-07-12 14:53:43.388564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:18.145 EAL: TSC is not safe to use in SMP mode 00:05:18.145 EAL: TSC is not invariant 00:05:18.145 [2024-07-12 14:53:43.938761] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.404 [2024-07-12 14:53:44.021178] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:18.404 [2024-07-12 14:53:44.031951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 
14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.404 14:53:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:19.781 14:53:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.781 00:05:19.781 real 0m1.821s 00:05:19.781 user 0m1.227s 00:05:19.781 sys 0m0.601s 00:05:19.781 ************************************ 00:05:19.781 END TEST accel_dualcast 00:05:19.781 ************************************ 00:05:19.781 14:53:45 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.781 14:53:45 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:19.781 14:53:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.781 14:53:45 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:19.781 14:53:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:19.781 14:53:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.781 14:53:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.781 ************************************ 00:05:19.781 START TEST accel_compare 00:05:19.781 ************************************ 00:05:19.781 14:53:45 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:19.781 14:53:45 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.04FH2b -t 1 -w compare -y 00:05:19.781 [2024-07-12 14:53:45.256665] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 
initialization... 00:05:19.781 [2024-07-12 14:53:45.256945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:20.040 EAL: TSC is not safe to use in SMP mode 00:05:20.040 EAL: TSC is not invariant 00:05:20.040 [2024-07-12 14:53:45.796523] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.322 [2024-07-12 14:53:45.881876] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:20.322 [2024-07-12 14:53:45.889706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 
14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.322 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.323 14:53:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.270 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.270 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:21.271 14:53:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.271 00:05:21.271 real 0m1.804s 00:05:21.271 user 0m1.221s 00:05:21.271 sys 0m0.592s 00:05:21.271 14:53:47 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.271 14:53:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:21.271 ************************************ 00:05:21.271 END TEST accel_compare 00:05:21.271 ************************************ 00:05:21.529 14:53:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.529 14:53:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:21.529 14:53:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:21.529 14:53:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.529 14:53:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.529 ************************************ 00:05:21.529 START TEST accel_xor 00:05:21.529 ************************************ 00:05:21.529 14:53:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:21.529 14:53:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:21.529 14:53:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:21.529 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.530 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.530 14:53:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:21.530 14:53:47 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3oZllU -t 1 -w xor -y 00:05:21.530 [2024-07-12 14:53:47.105439] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
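The repeating blocks of "# val=...", "# case "$var" in", "# IFS=:" and "# read -r var val" lines surrounding each accel_perf launch are xtrace output from accel.sh stepping through the test settings one key:value pair at a time, recording the operation into accel_opc and the backing module into accel_module. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the accel.sh source (key names other than opc/module are illustrative):

# $accel_settings is a stand-in for the harness's key:value stream.
while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;      # e.g. dualcast, compare, xor, dif_verify
        module) accel_module=$val ;;   # "software" in every run in this log
        *)      ;;                     # other settings (run time, buffer size, ...)
    esac
done <<< "$accel_settings"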
00:05:21.530 [2024-07-12 14:53:47.105625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:22.097 EAL: TSC is not safe to use in SMP mode 00:05:22.097 EAL: TSC is not invariant 00:05:22.097 [2024-07-12 14:53:47.630986] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.097 [2024-07-12 14:53:47.715350] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:22.097 [2024-07-12 14:53:47.726284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.097 14:53:47 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.097 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.098 14:53:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.474 00:05:23.474 real 0m1.785s 00:05:23.474 user 0m1.221s 00:05:23.474 sys 0m0.572s 00:05:23.474 14:53:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.474 14:53:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:23.474 ************************************ 00:05:23.474 END TEST accel_xor 00:05:23.474 ************************************ 00:05:23.474 14:53:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.474 14:53:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:23.474 14:53:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:23.474 14:53:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.474 14:53:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.474 ************************************ 00:05:23.474 START TEST accel_xor 00:05:23.474 ************************************ 00:05:23.474 14:53:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:23.474 14:53:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hMjwtQ -t 1 -w xor -y -x 3 00:05:23.474 [2024-07-12 14:53:48.940537] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
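Every case in this section ends up launching the same accel_perf example binary; only the workload flags change. A hedged reproduction of the two xor invocations seen above, runnable from an SPDK build tree (the -c argument in the log points at a throwaway JSON config the harness writes under /tmp, omitted here; the flag meanings are read off the trace and should be checked against accel_perf --help):

# -t: run time in seconds, -w: workload, -y: verify results,
# -x: number of xor source buffers (the second run reads val=3 for it above).
./build/examples/accel_perf -t 1 -w xor -y
./build/examples/accel_perf -t 1 -w xor -y -x 3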
00:05:23.474 [2024-07-12 14:53:48.940844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:23.733 EAL: TSC is not safe to use in SMP mode 00:05:23.733 EAL: TSC is not invariant 00:05:23.733 [2024-07-12 14:53:49.496135] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.991 [2024-07-12 14:53:49.598626] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:23.991 [2024-07-12 14:53:49.609872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.992 14:53:49 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.992 14:53:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:25.368 14:53:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.368 00:05:25.368 real 0m1.825s 00:05:25.368 user 0m1.235s 00:05:25.368 sys 0m0.599s 00:05:25.368 14:53:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.368 14:53:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:25.368 ************************************ 00:05:25.368 END TEST accel_xor 00:05:25.368 ************************************ 00:05:25.368 14:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.368 14:53:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:25.368 14:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:25.368 14:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.368 14:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.368 ************************************ 00:05:25.368 START TEST accel_dif_verify 00:05:25.368 ************************************ 00:05:25.368 14:53:50 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:25.368 14:53:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.x8a1vu -t 1 -w dif_verify 00:05:25.368 [2024-07-12 14:53:50.815254] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
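The START TEST / END TEST banners and the real/user/sys summary that bracket each case come from the run_test helper, which the log shows being handed a test name plus the accel_test command to run, with the timing and xtrace handling attributed to common/autotest_common.sh. The helper's exact implementation is not reproduced here; a rough sketch of the behaviour visible in this log would be:

# Sketch only -- the real helper lives in common/autotest_common.sh.
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                     # e.g. accel_test -t 1 -w dif_verify
    echo "END TEST $name"
}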
00:05:25.368 [2024-07-12 14:53:50.815471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:25.626 EAL: TSC is not safe to use in SMP mode 00:05:25.626 EAL: TSC is not invariant 00:05:25.626 [2024-07-12 14:53:51.346123] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.626 [2024-07-12 14:53:51.428210] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:25.626 14:53:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:25.626 14:53:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.626 14:53:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.626 14:53:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.627 14:53:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.627 14:53:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.627 14:53:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:25.627 14:53:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:25.627 [2024-07-12 14:53:51.439029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.885 14:53:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:26.818 14:53:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.818 00:05:26.818 real 0m1.785s 00:05:26.818 user 0m1.213s 00:05:26.818 sys 0m0.578s 00:05:26.818 14:53:52 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.818 14:53:52 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:26.818 ************************************ 00:05:26.818 END TEST accel_dif_verify 00:05:26.818 ************************************ 00:05:26.818 14:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.818 14:53:52 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:26.818 14:53:52 accel 
-- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:26.818 14:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.818 14:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.075 ************************************ 00:05:27.075 START TEST accel_dif_generate 00:05:27.075 ************************************ 00:05:27.075 14:53:52 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:27.075 14:53:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.f8eOvt -t 1 -w dif_generate 00:05:27.075 [2024-07-12 14:53:52.649749] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:27.075 [2024-07-12 14:53:52.649926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:27.642 EAL: TSC is not safe to use in SMP mode 00:05:27.642 EAL: TSC is not invariant 00:05:27.642 [2024-07-12 14:53:53.183565] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.642 [2024-07-12 14:53:53.265936] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
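The dif_verify configuration above and the dif_generate configuration below read the same buffer geometry: a 4096-byte data buffer, a 512-byte block size, and 8 bytes of protection information per block (the standard T10 DIF field size). The exact meaning of each value is defined by accel.sh, so treat the following as an illustration of the implied layout rather than a statement about the script's internals:

# Per-buffer arithmetic for the sizes read in the trace (illustrative names).
buf=4096; block=512; dif=8
blocks=$(( buf / block ))   # 8 blocks per buffer
md=$(( blocks * dif ))      # 64 bytes of DIF metadata per buffer
echo "$blocks blocks, $md bytes of protection info"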
00:05:27.642 [2024-07-12 14:53:53.276796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.642 14:53:53 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.642 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.643 14:53:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:29.029 14:53:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.029 00:05:29.029 real 0m1.791s 00:05:29.029 user 0m1.225s 00:05:29.029 sys 0m0.574s 00:05:29.029 14:53:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.029 14:53:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:29.029 ************************************ 00:05:29.029 END TEST accel_dif_generate 00:05:29.029 ************************************ 00:05:29.029 14:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.029 14:53:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:29.029 14:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:29.029 14:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.029 14:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.029 ************************************ 00:05:29.029 START TEST accel_dif_generate_copy 00:05:29.029 ************************************ 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:05:29.029 14:53:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vkxFQi -t 1 -w dif_generate_copy 00:05:29.029 [2024-07-12 14:53:54.487898] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:29.029 [2024-07-12 14:53:54.488156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:29.287 EAL: TSC is not safe to use in SMP mode 00:05:29.287 EAL: TSC is not invariant 00:05:29.287 [2024-07-12 14:53:55.020493] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.545 [2024-07-12 14:53:55.108915] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:29.545 [2024-07-12 14:53:55.118991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.545 14:53:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.481 00:05:30.481 real 0m1.799s 00:05:30.481 user 0m1.242s 00:05:30.481 sys 0m0.567s 00:05:30.481 14:53:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.481 14:53:56 
accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:30.481 ************************************ 00:05:30.481 END TEST accel_dif_generate_copy 00:05:30.481 ************************************ 00:05:30.741 14:53:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.741 14:53:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:30.741 14:53:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:30.741 14:53:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:30.741 14:53:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.741 14:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.741 ************************************ 00:05:30.741 START TEST accel_comp 00:05:30.741 ************************************ 00:05:30.741 14:53:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:30.741 14:53:56 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.IqxFK9 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:30.741 [2024-07-12 14:53:56.336747] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:30.741 [2024-07-12 14:53:56.337066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:31.308 EAL: TSC is not safe to use in SMP mode 00:05:31.308 EAL: TSC is not invariant 00:05:31.308 [2024-07-12 14:53:56.888119] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.308 [2024-07-12 14:53:56.974964] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
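Note: the accel_comp trace above launches accel_perf with a generated temp config (-c /tmp//sh-np.IqxFK9), a 1-second run (-t 1), the software compress workload (-w compress) and the bib sample file (-l .../test/accel/bib). A rough hand-run equivalent is sketched below; it drops the generated -c JSON config and relies on accel_perf defaults, which is an assumption on my part, not what this run did:

    # hedged sketch - same flags as the logged command, minus the generated -c config
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib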
00:05:31.308 [2024-07-12 14:53:56.986504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.308 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.309 14:53:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 
14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:32.685 14:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.685 00:05:32.685 real 0m1.815s 00:05:32.685 user 0m1.220s 00:05:32.685 sys 0m0.601s 00:05:32.685 14:53:58 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.685 ************************************ 00:05:32.685 END TEST accel_comp 00:05:32.685 ************************************ 00:05:32.685 14:53:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:32.685 14:53:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.685 14:53:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.685 14:53:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:32.685 14:53:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.685 14:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.685 ************************************ 00:05:32.685 START TEST accel_decomp 00:05:32.685 ************************************ 00:05:32.685 14:53:58 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.685 14:53:58 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3Bp9Ut -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.685 [2024-07-12 14:53:58.195487] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:32.685 [2024-07-12 14:53:58.195664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:32.943 EAL: TSC is not safe to use in SMP mode 00:05:32.943 EAL: TSC is not invariant 00:05:33.202 [2024-07-12 14:53:58.761432] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.202 [2024-07-12 14:53:58.844464] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
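Note: relative to the compress run above, the accel_decomp invocation adds -y to a decompress workload (both flags are taken verbatim from the run_test/accel_perf command lines in this trace). Reading -y as a verify-the-output switch is an assumption here. A hand-run sketch under that assumption, again omitting the generated -c config:

    # hedged sketch - decompress the bib file for 1 second, -y assumed to verify the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y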
00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:33.202 [2024-07-12 14:53:58.855910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.202 14:53:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.579 14:54:00 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.579 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.580 14:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.580 00:05:34.580 real 0m1.820s 00:05:34.580 user 0m1.211s 00:05:34.580 sys 0m0.620s 00:05:34.580 14:54:00 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.580 14:54:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:34.580 ************************************ 00:05:34.580 END TEST accel_decomp 00:05:34.580 ************************************ 00:05:34.580 14:54:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.580 14:54:00 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:34.580 14:54:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:34.580 14:54:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.580 14:54:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.580 ************************************ 00:05:34.580 START TEST accel_decomp_full 00:05:34.580 ************************************ 00:05:34.580 14:54:00 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:34.580 14:54:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.05L11o -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:34.580 [2024-07-12 14:54:00.067651] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:34.580 [2024-07-12 14:54:00.067934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:34.839 EAL: TSC is not safe to use in SMP mode 00:05:34.839 EAL: TSC is not invariant 00:05:34.839 [2024-07-12 14:54:00.605943] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.098 [2024-07-12 14:54:00.693481] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:35.098 [2024-07-12 14:54:00.704654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.098 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:35.099 14:54:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.474 14:54:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.474 00:05:36.474 real 0m1.817s 00:05:36.474 user 0m1.248s 00:05:36.474 sys 0m0.577s 00:05:36.474 14:54:01 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.474 ************************************ 00:05:36.474 END TEST accel_decomp_full 00:05:36.474 14:54:01 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:36.474 ************************************ 00:05:36.474 14:54:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.474 14:54:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
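Note: the accel_decomp_mcore command just above adds -m 0xf to the same decompress arguments. Consistent with that core mask, the trace that follows reports "Total cores available: 4", per-core /proc/stat notices for cores 0-3 and four reactor start messages, and the summary further down shows user time of roughly four times the 1-second wall time. A sketch of the multi-core variant, flags copied from the logged command and the generated -c config omitted:

    # hedged sketch - same decompress run pinned to a 4-core mask
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf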
00:05:36.474 14:54:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:36.475 14:54:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.475 14:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.475 ************************************ 00:05:36.475 START TEST accel_decomp_mcore 00:05:36.475 ************************************ 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:36.475 14:54:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2FnEsn -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:36.475 [2024-07-12 14:54:01.925802] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:36.475 [2024-07-12 14:54:01.926057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:36.731 EAL: TSC is not safe to use in SMP mode 00:05:36.731 EAL: TSC is not invariant 00:05:36.731 [2024-07-12 14:54:02.484170] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.990 [2024-07-12 14:54:02.566679] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:36.990 [2024-07-12 14:54:02.566730] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:36.990 [2024-07-12 14:54:02.566740] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:36.990 [2024-07-12 14:54:02.566748] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:05:36.990 [2024-07-12 14:54:02.578937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.990 [2024-07-12 14:54:02.578991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.990 [2024-07-12 14:54:02.579129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.990 [2024-07-12 14:54:02.579124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.990 14:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.925 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.926 00:05:37.926 real 0m1.817s 00:05:37.926 user 0m4.346s 00:05:37.926 sys 0m0.597s 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.926 14:54:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:37.926 ************************************ 00:05:37.926 END TEST accel_decomp_mcore 00:05:37.926 ************************************ 00:05:38.184 14:54:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.184 14:54:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.184 14:54:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:38.184 14:54:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.184 14:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.184 ************************************ 00:05:38.184 START TEST accel_decomp_full_mcore 00:05:38.184 ************************************ 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.184 14:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.WiE971 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.184 [2024-07-12 14:54:03.781189] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:38.184 [2024-07-12 14:54:03.781406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:38.751 EAL: TSC is not safe to use in SMP mode 00:05:38.751 EAL: TSC is not invariant 00:05:38.751 [2024-07-12 14:54:04.306129] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.751 [2024-07-12 14:54:04.392487] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:38.751 [2024-07-12 14:54:04.392546] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:38.751 [2024-07-12 14:54:04.392556] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:38.751 [2024-07-12 14:54:04.392564] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
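For reference, the accel_decomp_full_mcore case set up above is driven by the accel_perf command line recorded in this trace. A minimal sketch of the equivalent manual run, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk as the working directory; the harness also passes a generated -c temp config (the /tmp//sh-np.* path above), which is per-run and omitted here, and the flag readings are inferred from the surrounding trace:

# Sketch only; flag meanings inferred from the trace around this command:
#   -t 1           run the workload for 1 second (val='1 seconds' below)
#   -w decompress  workload type (val=decompress below)
#   -l <file>      compressed input built from test/accel/bib
#   -y             verify the decompressed output
#   -o 0           use the full input size (val='111250 bytes' here, versus
#                  val='4096 bytes' in the non-"full" variants)
#   -m 0xf         core mask: four reactors on cores 0-3
./build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf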
00:05:38.751 [2024-07-12 14:54:04.402255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.751 [2024-07-12 14:54:04.402521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.751 [2024-07-12 14:54:04.402407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.751 [2024-07-12 14:54:04.402515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.751 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:38.752 14:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.126 
14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.126 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.127 00:05:40.127 real 0m1.799s 00:05:40.127 user 0m4.403s 
00:05:40.127 sys 0m0.558s 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.127 14:54:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:40.127 ************************************ 00:05:40.127 END TEST accel_decomp_full_mcore 00:05:40.127 ************************************ 00:05:40.127 14:54:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.127 14:54:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:40.127 14:54:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:40.127 14:54:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.127 14:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.127 ************************************ 00:05:40.127 START TEST accel_decomp_mthread 00:05:40.127 ************************************ 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:40.127 14:54:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FIUAwq -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:40.127 [2024-07-12 14:54:05.617828] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:40.127 [2024-07-12 14:54:05.618017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:40.385 EAL: TSC is not safe to use in SMP mode 00:05:40.385 EAL: TSC is not invariant 00:05:40.385 [2024-07-12 14:54:06.154387] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.644 [2024-07-12 14:54:06.253018] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
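The accel_decomp_mthread case set up above runs the same software decompress path on a single core (EAL mask 0x1 above) but with -T 2, which the trace that follows records as val=2, i.e. two worker threads. A hedged sketch of the equivalent manual invocation, under the same assumptions as the previous sketch (built tree at the repo root, generated -c config omitted):

# Sketch only: single-core, two-thread decompress with the default 4096-byte transfers.
./build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2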
00:05:40.644 [2024-07-12 14:54:06.263401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.644 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.645 14:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.020 00:05:42.020 real 0m1.818s 00:05:42.020 user 0m1.240s 00:05:42.020 sys 0m0.589s 00:05:42.020 ************************************ 00:05:42.020 END TEST accel_decomp_mthread 00:05:42.020 ************************************ 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.020 14:54:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:42.020 14:54:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.020 14:54:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.020 14:54:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:42.020 14:54:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.020 14:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.020 ************************************ 00:05:42.020 START TEST accel_decomp_full_mthread 00:05:42.020 ************************************ 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:42.020 14:54:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.020 14:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Cd7z2S -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.020 [2024-07-12 14:54:07.487779] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:42.020 [2024-07-12 14:54:07.488070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:42.278 EAL: TSC is not safe to use in SMP mode 00:05:42.278 EAL: TSC is not invariant 00:05:42.278 [2024-07-12 14:54:08.036697] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.537 [2024-07-12 14:54:08.118515] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
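Since accel_decomp_full_mthread is the last of the four decompress variants in this run, a compact recap may help: they differ only in the -o, -m and -T flags, as read from the four accel_perf command lines in this log (per-run -c configs again omitted; paths assume the repo root as working directory):

# Sketch: the four variants back to back.
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y      -m 0xf  # accel_decomp_mcore: 4 cores, 4096-byte transfers
./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf  # accel_decomp_full_mcore: 4 cores, full 111250-byte input
./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y      -T 2    # accel_decomp_mthread: 1 core, 2 threads
./build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0 -T 2    # accel_decomp_full_mthread: 1 core, 2 threads, full input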
00:05:42.537 [2024-07-12 14:54:08.129191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:42.537 14:54:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.912 00:05:43.912 real 0m1.836s 00:05:43.912 user 0m1.265s 00:05:43.912 sys 0m0.584s 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.912 14:54:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 ************************************ 00:05:43.912 END TEST accel_decomp_full_mthread 00:05:43.912 ************************************ 00:05:43.912 14:54:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.912 14:54:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:43.912 14:54:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Xkg4KC 00:05:43.912 14:54:09 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:43.912 14:54:09 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.912 14:54:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 ************************************ 00:05:43.912 START TEST accel_dif_functional_tests 00:05:43.912 ************************************ 00:05:43.912 14:54:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Xkg4KC 00:05:43.912 [2024-07-12 14:54:09.367427] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:43.912 [2024-07-12 14:54:09.367720] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:44.170 EAL: TSC is not safe to use in SMP mode 00:05:44.170 EAL: TSC is not invariant 00:05:44.170 [2024-07-12 14:54:09.898709] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.433 [2024-07-12 14:54:09.990895] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:44.433 [2024-07-12 14:54:09.990965] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:44.433 [2024-07-12 14:54:09.990985] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:44.433 14:54:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:44.433 14:54:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.433 14:54:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.433 14:54:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.433 14:54:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.433 14:54:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.433 14:54:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:44.433 14:54:09 accel -- accel/accel.sh@41 -- # jq -r . 
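accel_dif_functional_tests, launched above, is a standalone CUnit binary rather than an accel_perf run: the harness starts test/accel/dif/dif with three reactors (EAL mask 0x7 above) and a generated temp config. A minimal sketch of running it directly from a built tree; leaving out the generated -c path is an assumption about what the binary needs for this suite:

# Sketch only: run the DIF CUnit suite from the repo root.
./test/accel/dif/dif
# The dif.c "Failed to compare Guard/App Tag/Ref Tag" *ERROR* lines further down
# come from the negative-path cases ("verify: DIF not generated, ..."); the Run
# Summary below still reports 0 failed tests and 0 failed asserts.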
00:05:44.433 [2024-07-12 14:54:10.003375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.433 [2024-07-12 14:54:10.003734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.433 [2024-07-12 14:54:10.003713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.433 00:05:44.433 00:05:44.433 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.433 http://cunit.sourceforge.net/ 00:05:44.433 00:05:44.433 00:05:44.433 Suite: accel_dif 00:05:44.433 Test: verify: DIF generated, GUARD check ...passed 00:05:44.433 Test: verify: DIF generated, APPTAG check ...passed 00:05:44.433 Test: verify: DIF generated, REFTAG check ...passed 00:05:44.433 Test: verify: DIF not generated, GUARD check ...passed 00:05:44.433 Test: verify: DIF not generated, APPTAG check ...passed 00:05:44.433 Test: verify: DIF not generated, REFTAG check ...passed 00:05:44.433 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:44.433 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:44.433 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:44.433 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:44.433 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-12 14:54:10.022171] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.434 [2024-07-12 14:54:10.022238] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.434 [2024-07-12 14:54:10.022266] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.434 [2024-07-12 14:54:10.022371] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:44.434 passed 00:05:44.434 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 14:54:10.022451] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:44.434 passed 00:05:44.434 Test: verify copy: DIF generated, GUARD check ...passed 00:05:44.434 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:44.434 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:44.434 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:44.434 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:44.434 Test: verify copy: DIF not generated, REFTAG check ...passed 00:05:44.434 Test: generate copy: DIF generated, GUARD check ...passed 00:05:44.434 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:44.434 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:44.434 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:44.434 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:44.434 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:44.434 Test: generate copy: iovecs-len validate ...passed 00:05:44.434 Test: generate copy: buffer alignment validate ...passed 00:05:44.434 00:05:44.434 [2024-07-12 14:54:10.022554] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.434 [2024-07-12 14:54:10.022583] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.434 [2024-07-12 14:54:10.022607] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.434 [2024-07-12 14:54:10.022739] dif.c:1190:spdk_dif_generate_copy: 
*ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:44.434 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.434 suites 1 1 n/a 0 0 00:05:44.434 tests 26 26 26 0 0 00:05:44.434 asserts 115 115 115 0 n/a 00:05:44.434 00:05:44.434 Elapsed time = 0.000 seconds 00:05:44.434 00:05:44.434 real 0m0.850s 00:05:44.434 user 0m0.428s 00:05:44.434 sys 0m0.584s 00:05:44.434 14:54:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.434 14:54:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:44.434 ************************************ 00:05:44.434 END TEST accel_dif_functional_tests 00:05:44.434 ************************************ 00:05:44.434 14:54:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.434 00:05:44.434 real 0m41.362s 00:05:44.434 user 0m33.522s 00:05:44.434 sys 0m14.778s 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:44.434 14:54:10 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:44.434 14:54:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:44.434 14:54:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:44.434 14:54:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.434 ************************************ 00:05:44.434 14:54:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 
00:05:44.434 END TEST accel 00:05:44.434 ************************************ 00:05:44.700 14:54:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.700 14:54:10 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:44.700 14:54:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.700 14:54:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.700 14:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.700 ************************************ 00:05:44.700 START TEST accel_rpc 00:05:44.700 ************************************ 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:44.700 * Looking for test storage... 00:05:44.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:44.700 14:54:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.700 14:54:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47487 00:05:44.700 14:54:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47487 00:05:44.700 14:54:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47487 ']' 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.700 14:54:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.700 [2024-07-12 14:54:10.439301] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:44.700 [2024-07-12 14:54:10.439585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:45.265 EAL: TSC is not safe to use in SMP mode 00:05:45.265 EAL: TSC is not invariant 00:05:45.265 [2024-07-12 14:54:10.998885] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.524 [2024-07-12 14:54:11.083153] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
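accel_rpc switches from standalone binaries to a live target: spdk_tgt is started with --wait-for-rpc (pid 47487 in this run) and the accel_assign_opcode sub-test that follows drives it over /var/tmp/spdk.sock. Roughly the same sequence can be issued by hand with rpc.py; a sketch mirroring the rpc_cmd calls recorded below, assuming the repo root as working directory:

# Sketch of the RPC sequence the assign_opcode test issues below.
./build/bin/spdk_tgt --wait-for-rpc &        # target idles until framework_start_init
# (the harness waits for /var/tmp/spdk.sock via waitforlisten before calling RPCs)
./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # logged: copy will be assigned to module incorrect
./scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
./scripts/rpc.py framework_start_init                    # finish startup
./scripts/rpc.py accel_get_opc_assignments | grep software   # the test then extracts .copy with jq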
00:05:45.524 [2024-07-12 14:54:11.085313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.782 14:54:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 ************************************ 00:05:45.783 START TEST accel_assign_opcode 00:05:45.783 ************************************ 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 [2024-07-12 14:54:11.485645] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 [2024-07-12 14:54:11.493631] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 software 00:05:45.783 00:05:45.783 real 0m0.070s 00:05:45.783 user 0m0.014s 00:05:45.783 sys 0m0.004s 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.783 14:54:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 ************************************ 00:05:45.783 END TEST accel_assign_opcode 00:05:45.783 ************************************ 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.783 14:54:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47487 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47487 ']' 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47487 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47487 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:45.783 killing process with pid 47487 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47487' 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@967 -- # kill 47487 00:05:45.783 14:54:11 accel_rpc -- common/autotest_common.sh@972 -- # wait 47487 00:05:46.042 00:05:46.042 real 0m1.567s 00:05:46.042 user 0m1.431s 00:05:46.042 sys 0m0.797s 00:05:46.042 14:54:11 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.042 14:54:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.042 ************************************ 00:05:46.042 END TEST accel_rpc 00:05:46.042 ************************************ 00:05:46.300 14:54:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.300 14:54:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.300 14:54:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.300 14:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.300 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:05:46.300 ************************************ 00:05:46.300 START TEST app_cmdline 00:05:46.300 ************************************ 00:05:46.300 14:54:11 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.300 * Looking for test storage... 00:05:46.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:46.300 14:54:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.300 14:54:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47565 00:05:46.300 14:54:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.300 14:54:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47565 00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47565 ']' 00:05:46.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
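The app_cmdline target launched just above is started with --rpcs-allowed, so only spdk_get_version and rpc_get_methods are callable; any other method should come back as a JSON-RPC error, which is exactly what the env_dpdk_get_mem_stats probe further down verifies. Sketch (same binary, flags and script as in the trace; the & is an assumption):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # expected to fail with code -32601 "Method not found": it is not on the allow-list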
00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.300 14:54:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.300 [2024-07-12 14:54:12.037030] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:46.300 [2024-07-12 14:54:12.037226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:46.867 EAL: TSC is not safe to use in SMP mode 00:05:46.867 EAL: TSC is not invariant 00:05:46.867 [2024-07-12 14:54:12.562046] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.867 [2024-07-12 14:54:12.646336] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:46.867 [2024-07-12 14:54:12.648394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.436 14:54:13 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.436 14:54:13 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:47.436 14:54:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:47.694 { 00:05:47.694 "version": "SPDK v24.09-pre git sha1 eea7da688", 00:05:47.694 "fields": { 00:05:47.694 "major": 24, 00:05:47.694 "minor": 9, 00:05:47.694 "patch": 0, 00:05:47.694 "suffix": "-pre", 00:05:47.694 "commit": "eea7da688" 00:05:47.694 } 00:05:47.694 } 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:47.695 14:54:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
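The version fields returned above can be checked by hand with the two allow-listed RPCs alone; roughly (outputs as reported in this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version | jq -r .version
  # -> SPDK v24.09-pre git sha1 eea7da688
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # -> rpc_get_methods, spdk_get_version (nothing else is exposed on this target)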
00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:47.695 14:54:13 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.954 request: 00:05:47.954 { 00:05:47.954 "method": "env_dpdk_get_mem_stats", 00:05:47.954 "req_id": 1 00:05:47.954 } 00:05:47.954 Got JSON-RPC error response 00:05:47.954 response: 00:05:47.954 { 00:05:47.954 "code": -32601, 00:05:47.954 "message": "Method not found" 00:05:47.954 } 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.954 14:54:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47565 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47565 ']' 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47565 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47565 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:47.954 killing process with pid 47565 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47565' 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@967 -- # kill 47565 00:05:47.954 14:54:13 app_cmdline -- common/autotest_common.sh@972 -- # wait 47565 00:05:48.212 00:05:48.212 real 0m1.973s 00:05:48.212 user 0m2.310s 00:05:48.212 sys 0m0.742s 00:05:48.212 14:54:13 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.212 ************************************ 00:05:48.212 END TEST app_cmdline 00:05:48.212 ************************************ 00:05:48.212 14:54:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.212 14:54:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.212 14:54:13 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.212 14:54:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.212 14:54:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.212 14:54:13 -- common/autotest_common.sh@10 -- # set +x 00:05:48.212 ************************************ 00:05:48.212 START TEST version 00:05:48.212 ************************************ 00:05:48.212 14:54:13 version -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.471 * Looking for test storage... 00:05:48.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:48.471 14:54:14 version -- app/version.sh@17 -- # get_header_version major 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # cut -f2 00:05:48.471 14:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.471 14:54:14 version -- app/version.sh@17 -- # major=24 00:05:48.471 14:54:14 version -- app/version.sh@18 -- # get_header_version minor 00:05:48.471 14:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # cut -f2 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.471 14:54:14 version -- app/version.sh@18 -- # minor=9 00:05:48.471 14:54:14 version -- app/version.sh@19 -- # get_header_version patch 00:05:48.471 14:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # cut -f2 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.471 14:54:14 version -- app/version.sh@19 -- # patch=0 00:05:48.471 14:54:14 version -- app/version.sh@20 -- # get_header_version suffix 00:05:48.471 14:54:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # cut -f2 00:05:48.471 14:54:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.471 14:54:14 version -- app/version.sh@20 -- # suffix=-pre 00:05:48.471 14:54:14 version -- app/version.sh@22 -- # version=24.9 00:05:48.471 14:54:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.471 14:54:14 version -- app/version.sh@28 -- # version=24.9rc0 00:05:48.471 14:54:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:48.471 14:54:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:48.471 14:54:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:48.471 14:54:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:48.471 00:05:48.471 real 0m0.207s 00:05:48.471 user 0m0.162s 00:05:48.471 sys 0m0.130s 00:05:48.471 14:54:14 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.471 14:54:14 version -- common/autotest_common.sh@10 -- # set +x 00:05:48.471 ************************************ 00:05:48.471 END TEST version 00:05:48.471 ************************************ 00:05:48.471 14:54:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.471 14:54:14 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:48.471 14:54:14 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:48.471 14:54:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.471 14:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.471 14:54:14 -- common/autotest_common.sh@10 -- # set 
+x 00:05:48.471 ************************************ 00:05:48.471 START TEST blockdev_general 00:05:48.471 ************************************ 00:05:48.471 14:54:14 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:48.730 * Looking for test storage... 00:05:48.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.730 14:54:14 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47704 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:48.730 14:54:14 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47704 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47704 ']' 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
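Before the blockdev run gets going, a note on the 24.9rc0 value verified in TEST version above: it is read straight out of include/spdk/version.h and compared with the installed Python package. The get_header_version pipeline traced there amounts to the following sketch (variable names are mine):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 24
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')   # 9
  # patch is 0 and suffix is "-pre", so the script reports 24.9rc0 and checks it against:
  python3 -c 'import spdk; print(spdk.__version__)'   # -> 24.9rc0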
00:05:48.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.730 14:54:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:48.730 [2024-07-12 14:54:14.312165] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:48.730 [2024-07-12 14:54:14.312344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:49.298 EAL: TSC is not safe to use in SMP mode 00:05:49.298 EAL: TSC is not invariant 00:05:49.298 [2024-07-12 14:54:14.841304] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.298 [2024-07-12 14:54:14.930808] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:49.298 [2024-07-12 14:54:14.933077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.557 14:54:15 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.557 14:54:15 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:05:49.557 14:54:15 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:49.557 14:54:15 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:49.557 14:54:15 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:49.557 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.557 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:49.816 [2024-07-12 14:54:15.416403] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:49.816 [2024-07-12 14:54:15.416483] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:49.816 00:05:49.816 [2024-07-12 14:54:15.424391] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:49.816 [2024-07-12 14:54:15.424431] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:49.816 00:05:49.816 Malloc0 00:05:49.816 Malloc1 00:05:49.816 Malloc2 00:05:49.816 Malloc3 00:05:49.816 Malloc4 00:05:49.816 Malloc5 00:05:49.816 Malloc6 00:05:49.816 Malloc7 00:05:49.816 Malloc8 00:05:49.816 Malloc9 00:05:49.816 [2024-07-12 14:54:15.512392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:49.816 [2024-07-12 14:54:15.512490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.816 [2024-07-12 14:54:15.512515] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12187ac3a980 00:05:49.816 [2024-07-12 14:54:15.512524] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.816 [2024-07-12 14:54:15.512880] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.816 [2024-07-12 14:54:15.512899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:49.816 TestPT 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.816 14:54:15 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:49.816 5000+0 records in 00:05:49.816 5000+0 records out 00:05:49.816 10240000 bytes transferred in 0.029186 secs (350856599 bytes/sec) 00:05:49.816 14:54:15 blockdev_general -- 
bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:49.816 AIO0 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.816 14:54:15 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.816 14:54:15 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:49.816 14:54:15 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.816 14:54:15 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.816 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:50.076 14:54:15 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:50.076 14:54:15 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:50.078 14:54:15 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "9ba78140-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ba78140-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "11d11fc3-3399-c25e-97b0-05b6b842eea9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "11d11fc3-3399-c25e-97b0-05b6b842eea9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "aa44c768-23c5-8053-9294-ad0db7fea047"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "aa44c768-23c5-8053-9294-ad0db7fea047",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aa409e6d-761a-e154-b66b-884517c0db2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aa409e6d-761a-e154-b66b-884517c0db2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f41bce35-2b7e-9b54-a266-8499efd7b693"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f41bce35-2b7e-9b54-a266-8499efd7b693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fbd3a77f-e368-5455-a9ea-e7b068dc340a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fbd3a77f-e368-5455-a9ea-e7b068dc340a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5b004450-7796-0d59-8652-11dff062d405"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5b004450-7796-0d59-8652-11dff062d405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7f401e99-61b4-b75c-8e42-fd139bffacb0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f401e99-61b4-b75c-8e42-fd139bffacb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": 
"Malloc2p5",' ' "aliases": [' ' "2b74d293-3a44-bb5a-94c4-1665b41732df"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b74d293-3a44-bb5a-94c4-1665b41732df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "77e30866-1c7d-6d5f-9cbb-25da8941d5e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77e30866-1c7d-6d5f-9cbb-25da8941d5e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b856663b-c893-6e55-b879-f4bc25e6d464"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b856663b-c893-6e55-b879-f4bc25e6d464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "781bf03a-4b7f-cc5f-b391-457117d06620"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "781bf03a-4b7f-cc5f-b391-457117d06620",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": 
false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "9bb4fd09-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9bac61e1-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9bad9a5b-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9bb62724-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' 
"uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9baed2e5-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "9bb00b69-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "9bb75fa3-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9bb143e2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9bb27cb2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9bbfec36-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9bbfec36-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:50.078 14:54:15 blockdev_general -- bdev/blockdev.sh@750 -- # 
bdev_list=("${bdevs_name[@]}") 00:05:50.078 14:54:15 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:05:50.078 14:54:15 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:50.078 14:54:15 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47704 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47704 ']' 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47704 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47704 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:50.078 killing process with pid 47704 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47704' 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@967 -- # kill 47704 00:05:50.078 14:54:15 blockdev_general -- common/autotest_common.sh@972 -- # wait 47704 00:05:50.645 14:54:16 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:50.645 14:54:16 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:50.645 14:54:16 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.645 14:54:16 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.645 14:54:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:50.645 ************************************ 00:05:50.645 START TEST bdev_hello_world 00:05:50.645 ************************************ 00:05:50.645 14:54:16 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:50.645 [2024-07-12 14:54:16.208242] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:50.645 [2024-07-12 14:54:16.208534] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:51.214 EAL: TSC is not safe to use in SMP mode 00:05:51.214 EAL: TSC is not invariant 00:05:51.214 [2024-07-12 14:54:16.738275] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.214 [2024-07-12 14:54:16.837613] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
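The bdev_hello_world case starting here is a single standalone command: the hello_bdev example pointed at the Malloc0 bdev from the shared bdev.json (same invocation as in the run_test line above):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0
  # on success it logs a completed write followed by
  # "Read string from bdev : Hello World!", as the entries below show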
00:05:51.214 [2024-07-12 14:54:16.840284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.214 [2024-07-12 14:54:16.901696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:51.214 [2024-07-12 14:54:16.901805] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:51.214 [2024-07-12 14:54:16.909621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:51.214 [2024-07-12 14:54:16.909671] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:51.214 [2024-07-12 14:54:16.917645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:51.214 [2024-07-12 14:54:16.917701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:51.214 [2024-07-12 14:54:16.917714] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:51.214 [2024-07-12 14:54:16.965661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:51.214 [2024-07-12 14:54:16.965724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.214 [2024-07-12 14:54:16.965736] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13e58d036800 00:05:51.214 [2024-07-12 14:54:16.965744] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.214 [2024-07-12 14:54:16.966157] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.214 [2024-07-12 14:54:16.966183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:51.473 [2024-07-12 14:54:17.065738] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:51.473 [2024-07-12 14:54:17.065824] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:51.473 [2024-07-12 14:54:17.065839] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:51.473 [2024-07-12 14:54:17.065856] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:51.473 [2024-07-12 14:54:17.065871] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:51.473 [2024-07-12 14:54:17.065879] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:51.473 [2024-07-12 14:54:17.065892] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
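The TestPT bdev registered in the NOTICE lines above is a passthru vbdev stacked on Malloc3 (the earlier bdev dump shows driver_specific.passthru.base_bdev_name = "Malloc3"). One way to confirm the stacking by hand against a target configured from the same bdev.json, using only RPCs already exercised in this run (the jq filter is mine):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | \
      jq -r '.[] | select(.product_name == "passthru") | "\(.name) -> \(.driver_specific.passthru.base_bdev_name)"'
  # -> TestPT -> Malloc3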
00:05:51.473 00:05:51.473 [2024-07-12 14:54:17.065901] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:51.732 00:05:51.732 real 0m1.112s 00:05:51.732 user 0m0.532s 00:05:51.732 sys 0m0.578s 00:05:51.732 14:54:17 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.732 14:54:17 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:51.732 ************************************ 00:05:51.732 END TEST bdev_hello_world 00:05:51.732 ************************************ 00:05:51.732 14:54:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:51.732 14:54:17 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:51.732 14:54:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:51.732 14:54:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.732 14:54:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:51.732 ************************************ 00:05:51.732 START TEST bdev_bounds 00:05:51.732 ************************************ 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47752 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.732 Process bdevio pid: 47752 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47752' 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47752 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47752 ']' 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.732 14:54:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:51.732 [2024-07-12 14:54:17.370295] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:05:51.732 [2024-07-12 14:54:17.370530] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:52.299 EAL: TSC is not safe to use in SMP mode 00:05:52.299 EAL: TSC is not invariant 00:05:52.299 [2024-07-12 14:54:17.894322] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.299 [2024-07-12 14:54:17.991315] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:52.299 [2024-07-12 14:54:17.991408] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
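The bdev_bounds case just starting launches bdevio in wait mode (-w) on three cores with 2048 MB of memory reserved (-s), then triggers the actual I/O suites over RPC with tests.py perform_tests, as the next entries show. Stripped of the harness (sketch; the & is an assumption):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests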
00:05:52.299 [2024-07-12 14:54:17.991432] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:52.299 [2024-07-12 14:54:17.995692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.299 [2024-07-12 14:54:17.995563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.299 [2024-07-12 14:54:17.995685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.299 [2024-07-12 14:54:18.054362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:52.299 [2024-07-12 14:54:18.054438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:52.299 [2024-07-12 14:54:18.062343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:52.299 [2024-07-12 14:54:18.062389] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:52.299 [2024-07-12 14:54:18.070360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:52.299 [2024-07-12 14:54:18.070409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:52.299 [2024-07-12 14:54:18.070434] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:52.558 [2024-07-12 14:54:18.118365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:52.558 [2024-07-12 14:54:18.118437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.558 [2024-07-12 14:54:18.118448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f6288c36800 00:05:52.558 [2024-07-12 14:54:18.118456] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.558 [2024-07-12 14:54:18.118826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:52.558 [2024-07-12 14:54:18.118852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:52.866 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.866 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:05:52.866 14:54:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:52.866 I/O targets: 00:05:52.866 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:52.866 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:52.866 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:52.866 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:52.866 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:52.866 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:52.866 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:52.866 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:52.866 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:52.866 00:05:52.866 00:05:52.867 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.867 http://cunit.sourceforge.net/ 00:05:52.867 00:05:52.867 00:05:52.867 Suite: bdevio tests on: 
AIO0 00:05:52.867 Test: blockdev write read block ...passed 00:05:52.867 Test: blockdev write zeroes read block ...passed 00:05:52.867 Test: blockdev write zeroes read no split ...passed 00:05:52.867 Test: blockdev write zeroes read split ...passed 00:05:52.867 Test: blockdev write zeroes read split partial ...passed 00:05:52.867 Test: blockdev reset ...passed 00:05:52.867 Test: blockdev write read 8 blocks ...passed 00:05:52.867 Test: blockdev write read size > 128k ...passed 00:05:52.867 Test: blockdev write read invalid size ...passed 00:05:52.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.867 Test: blockdev write read max offset ...passed 00:05:52.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.867 Test: blockdev writev readv 8 blocks ...passed 00:05:52.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.867 Test: blockdev writev readv block ...passed 00:05:52.867 Test: blockdev writev readv size > 128k ...passed 00:05:52.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.867 Test: blockdev comparev and writev ...passed 00:05:52.867 Test: blockdev nvme passthru rw ...passed 00:05:52.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.867 Test: blockdev nvme admin passthru ...passed 00:05:52.867 Test: blockdev copy ...passed 00:05:52.867 Suite: bdevio tests on: raid1 00:05:52.867 Test: blockdev write read block ...passed 00:05:52.867 Test: blockdev write zeroes read block ...passed 00:05:52.867 Test: blockdev write zeroes read no split ...passed 00:05:52.867 Test: blockdev write zeroes read split ...passed 00:05:52.867 Test: blockdev write zeroes read split partial ...passed 00:05:52.867 Test: blockdev reset ...passed 00:05:52.867 Test: blockdev write read 8 blocks ...passed 00:05:52.867 Test: blockdev write read size > 128k ...passed 00:05:52.867 Test: blockdev write read invalid size ...passed 00:05:52.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.867 Test: blockdev write read max offset ...passed 00:05:52.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.867 Test: blockdev writev readv 8 blocks ...passed 00:05:52.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.867 Test: blockdev writev readv block ...passed 00:05:52.867 Test: blockdev writev readv size > 128k ...passed 00:05:52.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.867 Test: blockdev comparev and writev ...passed 00:05:52.867 Test: blockdev nvme passthru rw ...passed 00:05:52.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.867 Test: blockdev nvme admin passthru ...passed 00:05:52.867 Test: blockdev copy ...passed 00:05:52.867 Suite: bdevio tests on: concat0 00:05:52.867 Test: blockdev write read block ...passed 00:05:52.867 Test: blockdev write zeroes read block ...passed 00:05:52.867 Test: blockdev write zeroes read no split ...passed 00:05:52.867 Test: blockdev write zeroes read split ...passed 00:05:52.867 Test: blockdev write zeroes read split partial ...passed 00:05:52.867 Test: blockdev reset ...passed 00:05:52.867 Test: blockdev write read 8 blocks ...passed 00:05:52.867 Test: blockdev write read size > 128k ...passed 00:05:52.867 Test: blockdev write read invalid size ...passed 00:05:52.867 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.867 Test: blockdev write read max offset ...passed 00:05:52.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.867 Test: blockdev writev readv 8 blocks ...passed 00:05:52.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.867 Test: blockdev writev readv block ...passed 00:05:52.867 Test: blockdev writev readv size > 128k ...passed 00:05:52.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.867 Test: blockdev comparev and writev ...passed 00:05:52.867 Test: blockdev nvme passthru rw ...passed 00:05:52.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.867 Test: blockdev nvme admin passthru ...passed 00:05:52.867 Test: blockdev copy ...passed 00:05:52.867 Suite: bdevio tests on: raid0 00:05:52.867 Test: blockdev write read block ...passed 00:05:52.867 Test: blockdev write zeroes read block ...passed 00:05:52.867 Test: blockdev write zeroes read no split ...passed 00:05:52.867 Test: blockdev write zeroes read split ...passed 00:05:52.867 Test: blockdev write zeroes read split partial ...passed 00:05:52.867 Test: blockdev reset ...passed 00:05:52.867 Test: blockdev write read 8 blocks ...passed 00:05:52.867 Test: blockdev write read size > 128k ...passed 00:05:52.867 Test: blockdev write read invalid size ...passed 00:05:52.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.867 Test: blockdev write read max offset ...passed 00:05:52.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.867 Test: blockdev writev readv 8 blocks ...passed 00:05:52.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.867 Test: blockdev writev readv block ...passed 00:05:52.867 Test: blockdev writev readv size > 128k ...passed 00:05:52.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.867 Test: blockdev comparev and writev ...passed 00:05:52.867 Test: blockdev nvme passthru rw ...passed 00:05:52.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.867 Test: blockdev nvme admin passthru ...passed 00:05:52.867 Test: blockdev copy ...passed 00:05:52.867 Suite: bdevio tests on: TestPT 00:05:52.867 Test: blockdev write read block ...passed 00:05:52.867 Test: blockdev write zeroes read block ...passed 00:05:52.867 Test: blockdev write zeroes read no split ...passed 00:05:52.867 Test: blockdev write zeroes read split ...passed 00:05:52.867 Test: blockdev write zeroes read split partial ...passed 00:05:52.867 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 
00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p7 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p6 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p5 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev 
write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p4 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p3 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p2 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p1 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 
00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc2p0 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc1p1 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.130 Test: blockdev write read max offset ...passed 00:05:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.130 Test: blockdev writev readv 8 blocks ...passed 00:05:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.130 Test: blockdev writev readv block ...passed 00:05:53.130 Test: blockdev writev readv size > 128k ...passed 00:05:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.130 Test: blockdev comparev and writev ...passed 00:05:53.130 Test: blockdev nvme passthru rw ...passed 00:05:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.130 Test: blockdev nvme admin passthru ...passed 00:05:53.130 Test: blockdev copy ...passed 00:05:53.130 Suite: bdevio tests on: Malloc1p0 00:05:53.130 Test: blockdev write read block ...passed 00:05:53.130 Test: blockdev write zeroes read block ...passed 00:05:53.130 Test: blockdev write zeroes read no split ...passed 00:05:53.130 Test: blockdev write zeroes read split ...passed 00:05:53.130 Test: blockdev write 
zeroes read split partial ...passed 00:05:53.130 Test: blockdev reset ...passed 00:05:53.130 Test: blockdev write read 8 blocks ...passed 00:05:53.130 Test: blockdev write read size > 128k ...passed 00:05:53.130 Test: blockdev write read invalid size ...passed 00:05:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.131 Test: blockdev write read max offset ...passed 00:05:53.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.131 Test: blockdev writev readv 8 blocks ...passed 00:05:53.131 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.131 Test: blockdev writev readv block ...passed 00:05:53.131 Test: blockdev writev readv size > 128k ...passed 00:05:53.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.131 Test: blockdev comparev and writev ...passed 00:05:53.131 Test: blockdev nvme passthru rw ...passed 00:05:53.131 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.131 Test: blockdev nvme admin passthru ...passed 00:05:53.131 Test: blockdev copy ...passed 00:05:53.131 Suite: bdevio tests on: Malloc0 00:05:53.131 Test: blockdev write read block ...passed 00:05:53.131 Test: blockdev write zeroes read block ...passed 00:05:53.131 Test: blockdev write zeroes read no split ...passed 00:05:53.131 Test: blockdev write zeroes read split ...passed 00:05:53.131 Test: blockdev write zeroes read split partial ...passed 00:05:53.131 Test: blockdev reset ...passed 00:05:53.131 Test: blockdev write read 8 blocks ...passed 00:05:53.131 Test: blockdev write read size > 128k ...passed 00:05:53.131 Test: blockdev write read invalid size ...passed 00:05:53.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.131 Test: blockdev write read max offset ...passed 00:05:53.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.131 Test: blockdev writev readv 8 blocks ...passed 00:05:53.131 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.131 Test: blockdev writev readv block ...passed 00:05:53.131 Test: blockdev writev readv size > 128k ...passed 00:05:53.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.131 Test: blockdev comparev and writev ...passed 00:05:53.131 Test: blockdev nvme passthru rw ...passed 00:05:53.131 Test: blockdev nvme passthru vendor specific ...passed 00:05:53.131 Test: blockdev nvme admin passthru ...passed 00:05:53.131 Test: blockdev copy ...passed 00:05:53.131 00:05:53.131 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.131 suites 16 16 n/a 0 0 00:05:53.131 tests 368 368 368 0 0 00:05:53.131 asserts 2224 2224 2224 0 n/a 00:05:53.131 00:05:53.131 Elapsed time = 0.500 seconds 00:05:53.131 0 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47752 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47752 ']' 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47752 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 
00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 47752 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:05:53.131 killing process with pid 47752 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47752' 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47752 00:05:53.131 14:54:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47752 00:05:53.390 14:54:19 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:05:53.390 00:05:53.390 real 0m1.687s 00:05:53.390 user 0m3.346s 00:05:53.390 sys 0m0.679s 00:05:53.390 14:54:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.390 14:54:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 ************************************ 00:05:53.390 END TEST bdev_bounds 00:05:53.390 ************************************ 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:53.390 14:54:19 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 ************************************ 00:05:53.390 START TEST bdev_nbd 00:05:53.390 ************************************ 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:05:53.390 00:05:53.390 real 0m0.004s 00:05:53.390 user 0m0.001s 00:05:53.390 sys 0m0.007s 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.390 ************************************ 00:05:53.390 END TEST bdev_nbd 00:05:53.390 ************************************ 00:05:53.390 14:54:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:53.390 14:54:19 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:05:53.390 14:54:19 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:05:53.390 14:54:19 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:05:53.390 14:54:19 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.390 14:54:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 ************************************ 00:05:53.390 START TEST bdev_fio 00:05:53.390 ************************************ 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:05:53.390 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:53.390 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:53.391 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:05:53.391 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:05:53.391 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:05:53.391 14:54:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:05:54.327 14:54:20 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.327 14:54:20 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:54.327 ************************************ 00:05:54.327 START TEST bdev_fio_rw_verify 00:05:54.327 ************************************ 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:54.327 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:54.328 14:54:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:54.328 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:54.328 fio-3.35 00:05:54.586 Starting 16 threads 00:05:55.152 EAL: TSC is not safe to use in SMP mode 00:05:55.152 EAL: TSC is not invariant 00:06:07.352 00:06:07.352 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101353: Fri Jul 12 14:54:31 2024 00:06:07.352 read: IOPS=232k, BW=907MiB/s (951MB/s)(9072MiB/10003msec) 00:06:07.352 slat (nsec): min=285, max=188989k, avg=4205.22, stdev=450630.23 00:06:07.352 clat (nsec): min=880, max=189016k, avg=49202.74, stdev=1393595.51 00:06:07.352 lat (usec): min=2, max=189018, avg=53.41, stdev=1464.67 00:06:07.352 clat percentiles (usec): 00:06:07.352 | 50.000th=[ 10], 99.000th=[ 717], 99.900th=[ 1172], 00:06:07.352 | 99.990th=[ 90702], 99.999th=[158335] 00:06:07.352 write: IOPS=395k, BW=1542MiB/s (1617MB/s)(14.9GiB/9918msec); 0 zone resets 00:06:07.352 slat (nsec): min=672, max=535459k, avg=21306.34, stdev=986904.48 00:06:07.352 clat (nsec): min=827, max=535547k, avg=104611.25, stdev=2211671.80 00:06:07.352 lat (usec): min=12, max=535558, avg=125.92, stdev=2422.71 00:06:07.352 clat percentiles (usec): 00:06:07.352 | 50.000th=[ 51], 99.000th=[ 717], 99.900th=[ 2868], 00:06:07.352 | 99.990th=[ 95945], 99.999th=[229639] 00:06:07.352 bw ( MiB/s): min= 680, max= 2529, per=99.24%, avg=1530.06, stdev=40.92, samples=299 00:06:07.352 iops : min=174256, max=647652, avg=391695.50, stdev=10476.30, samples=299 00:06:07.352 lat (nsec) : 1000=0.01% 00:06:07.352 lat (usec) : 2=0.04%, 4=11.19%, 10=17.40%, 20=21.81%, 50=16.19% 00:06:07.352 lat (usec) : 100=29.53%, 250=1.97%, 500=0.19%, 750=0.84%, 1000=0.65% 00:06:07.352 lat (msec) : 2=0.07%, 4=0.04%, 10=0.01%, 20=0.01%, 50=0.01% 00:06:07.352 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01% 00:06:07.352 cpu : usr=55.91%, sys=3.07%, ctx=838133, majf=0, minf=619 00:06:07.352 IO depths : 1=12.5%, 2=24.9%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:07.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:07.352 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:07.352 issued rwts: total=2322407,3914734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:06:07.352 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:07.352 00:06:07.352 Run status group 0 (all jobs): 00:06:07.352 READ: bw=907MiB/s (951MB/s), 907MiB/s-907MiB/s (951MB/s-951MB/s), io=9072MiB (9513MB), run=10003-10003msec 00:06:07.352 WRITE: bw=1542MiB/s (1617MB/s), 1542MiB/s-1542MiB/s (1617MB/s-1617MB/s), io=14.9GiB (16.0GB), run=9918-9918msec 00:06:07.352 00:06:07.352 real 0m12.747s 00:06:07.352 user 1m34.049s 00:06:07.352 sys 0m7.273s 00:06:07.352 ************************************ 00:06:07.352 END TEST bdev_fio_rw_verify 00:06:07.352 ************************************ 00:06:07.352 14:54:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.352 
14:54:32 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:06:07.352 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:06:07.353 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "9ba78140-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ba78140-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "11d11fc3-3399-c25e-97b0-05b6b842eea9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"11d11fc3-3399-c25e-97b0-05b6b842eea9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "aa44c768-23c5-8053-9294-ad0db7fea047"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "aa44c768-23c5-8053-9294-ad0db7fea047",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aa409e6d-761a-e154-b66b-884517c0db2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aa409e6d-761a-e154-b66b-884517c0db2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f41bce35-2b7e-9b54-a266-8499efd7b693"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f41bce35-2b7e-9b54-a266-8499efd7b693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fbd3a77f-e368-5455-a9ea-e7b068dc340a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fbd3a77f-e368-5455-a9ea-e7b068dc340a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5b004450-7796-0d59-8652-11dff062d405"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5b004450-7796-0d59-8652-11dff062d405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7f401e99-61b4-b75c-8e42-fd139bffacb0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f401e99-61b4-b75c-8e42-fd139bffacb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "2b74d293-3a44-bb5a-94c4-1665b41732df"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b74d293-3a44-bb5a-94c4-1665b41732df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "77e30866-1c7d-6d5f-9cbb-25da8941d5e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77e30866-1c7d-6d5f-9cbb-25da8941d5e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b856663b-c893-6e55-b879-f4bc25e6d464"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b856663b-c893-6e55-b879-f4bc25e6d464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "781bf03a-4b7f-cc5f-b391-457117d06620"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "781bf03a-4b7f-cc5f-b391-457117d06620",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "9bb4fd09-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9bac61e1-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9bad9a5b-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9bb62724-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9baed2e5-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"9bb00b69-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "9bb75fa3-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9bb143e2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9bb27cb2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9bbfec36-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9bbfec36-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:06:07.353 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:06:07.353 Malloc1p0 00:06:07.353 Malloc1p1 00:06:07.353 Malloc2p0 00:06:07.353 Malloc2p1 00:06:07.353 Malloc2p2 00:06:07.353 Malloc2p3 00:06:07.353 Malloc2p4 00:06:07.353 Malloc2p5 00:06:07.353 Malloc2p6 00:06:07.353 Malloc2p7 00:06:07.353 TestPT 00:06:07.353 raid0 00:06:07.353 concat0 ]] 00:06:07.353 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:06:07.354 14:54:32 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "9ba78140-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ba78140-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "11d11fc3-3399-c25e-97b0-05b6b842eea9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "11d11fc3-3399-c25e-97b0-05b6b842eea9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "aa44c768-23c5-8053-9294-ad0db7fea047"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "aa44c768-23c5-8053-9294-ad0db7fea047",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aa409e6d-761a-e154-b66b-884517c0db2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aa409e6d-761a-e154-b66b-884517c0db2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f41bce35-2b7e-9b54-a266-8499efd7b693"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f41bce35-2b7e-9b54-a266-8499efd7b693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fbd3a77f-e368-5455-a9ea-e7b068dc340a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fbd3a77f-e368-5455-a9ea-e7b068dc340a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "5b004450-7796-0d59-8652-11dff062d405"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5b004450-7796-0d59-8652-11dff062d405",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7f401e99-61b4-b75c-8e42-fd139bffacb0"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f401e99-61b4-b75c-8e42-fd139bffacb0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "2b74d293-3a44-bb5a-94c4-1665b41732df"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b74d293-3a44-bb5a-94c4-1665b41732df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "77e30866-1c7d-6d5f-9cbb-25da8941d5e9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "77e30866-1c7d-6d5f-9cbb-25da8941d5e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b856663b-c893-6e55-b879-f4bc25e6d464"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b856663b-c893-6e55-b879-f4bc25e6d464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "781bf03a-4b7f-cc5f-b391-457117d06620"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "781bf03a-4b7f-cc5f-b391-457117d06620",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "9bb4fd09-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb4fd09-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "9bac61e1-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9bad9a5b-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9bb62724-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb62724-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9baed2e5-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "9bb00b69-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "9bb75fa3-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9bb75fa3-405e-11ef-b2a4-e9dca065e82e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9bb143e2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9bb27cb2-405e-11ef-b2a4-e9dca065e82e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9bbfec36-405e-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9bbfec36-405e-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:06:07.354 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.355 14:54:32 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:06:07.355 ************************************ 00:06:07.355 START TEST bdev_fio_trim 00:06:07.355 ************************************ 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:06:07.355 14:54:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:06:07.355 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:07.355 fio-3.35 00:06:07.355 Starting 14 threads 00:06:07.921 EAL: TSC is not safe to use in SMP mode 00:06:07.921 EAL: TSC is not invariant 00:06:20.120 00:06:20.120 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101372: Fri Jul 12 14:54:44 2024 00:06:20.120 write: IOPS=2408k, BW=9407MiB/s (9864MB/s)(91.9GiB/10001msec); 0 zone resets 00:06:20.120 slat (nsec): min=277, max=2055.1M, avg=1590.80, stdev=567340.12 00:06:20.120 clat (nsec): min=1435, max=2055.1M, avg=15968.53, stdev=991462.39 00:06:20.120 lat (usec): min=2, max=2055.1k, avg=17.56, stdev=1142.31 00:06:20.120 clat percentiles (usec): 00:06:20.120 | 50.000th=[ 7], 99.000th=[ 17], 99.900th=[ 955], 99.990th=[11207], 00:06:20.120 | 99.999th=[94897] 00:06:20.120 bw ( MiB/s): min= 3496, max=14849, per=100.00%, avg=9590.12, stdev=282.85, samples=261 00:06:20.120 iops : min=895218, max=3801488, avg=2455071.44, stdev=72410.46, samples=261 00:06:20.120 trim: IOPS=2408k, BW=9407MiB/s (9864MB/s)(91.9GiB/10001msec); 0 zone resets 00:06:20.120 slat (nsec): min=532, max=395762k, avg=1443.69, stdev=207780.29 00:06:20.120 clat (nsec): min=389, max=2055.2M, avg=11735.77, stdev=1084537.78 00:06:20.120 lat (nsec): min=1646, max=2055.2M, avg=13179.46, stdev=1104267.02 00:06:20.120 clat percentiles (usec): 00:06:20.120 | 50.000th=[ 8], 99.000th=[ 16], 99.900th=[ 24], 99.990th=[ 39], 00:06:20.120 | 99.999th=[94897] 00:06:20.120 bw ( MiB/s): min= 3496, max=14849, per=100.00%, avg=9590.13, stdev=282.85, samples=261 00:06:20.120 iops : min=895218, max=3801492, avg=2455073.23, stdev=72410.45, samples=261 00:06:20.120 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:06:20.120 lat (usec) : 2=0.11%, 4=23.67%, 10=57.49%, 20=18.24%, 50=0.24% 00:06:20.120 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.20% 00:06:20.120 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 
20=0.01%, 50=0.01% 00:06:20.120 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:06:20.120 lat (msec) : 2000=0.01%, >=2000=0.01% 00:06:20.120 cpu : usr=63.63%, sys=3.17%, ctx=893244, majf=0, minf=0 00:06:20.120 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:20.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:20.120 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:20.120 issued rwts: total=0,24083403,24083408,0 short=0,0,0,0 dropped=0,0,0,0 00:06:20.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:20.120 00:06:20.120 Run status group 0 (all jobs): 00:06:20.120 WRITE: bw=9407MiB/s (9864MB/s), 9407MiB/s-9407MiB/s (9864MB/s-9864MB/s), io=91.9GiB (98.6GB), run=10001-10001msec 00:06:20.120 TRIM: bw=9407MiB/s (9864MB/s), 9407MiB/s-9407MiB/s (9864MB/s-9864MB/s), io=91.9GiB (98.6GB), run=10001-10001msec 00:06:20.120 00:06:20.120 real 0m12.490s 00:06:20.120 user 1m34.741s 00:06:20.120 sys 0m7.754s 00:06:20.120 14:54:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.120 14:54:45 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:06:20.120 ************************************ 00:06:20.120 END TEST bdev_fio_trim 00:06:20.120 ************************************ 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:06:20.120 /home/vagrant/spdk_repo/spdk 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:06:20.120 00:06:20.120 real 0m26.291s 00:06:20.120 user 3m9.046s 00:06:20.120 sys 0m15.768s 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.120 ************************************ 00:06:20.120 END TEST bdev_fio 00:06:20.120 ************************************ 00:06:20.120 14:54:45 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:06:20.120 14:54:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:20.120 14:54:45 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:20.120 14:54:45 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.120 14:54:45 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:20.120 14:54:45 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.120 14:54:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:20.120 ************************************ 00:06:20.120 START TEST bdev_verify 00:06:20.120 ************************************ 00:06:20.120 14:54:45 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.120 [2024-07-12 14:54:45.497155] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:06:20.120 [2024-07-12 14:54:45.497457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:20.378 EAL: TSC is not safe to use in SMP mode 00:06:20.378 EAL: TSC is not invariant 00:06:20.378 [2024-07-12 14:54:46.047448] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.378 [2024-07-12 14:54:46.131410] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:20.378 [2024-07-12 14:54:46.131477] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:20.378 [2024-07-12 14:54:46.134216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.378 [2024-07-12 14:54:46.134205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.637 [2024-07-12 14:54:46.191978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:20.637 [2024-07-12 14:54:46.192029] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:20.637 [2024-07-12 14:54:46.199963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:20.637 [2024-07-12 14:54:46.200008] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:20.637 [2024-07-12 14:54:46.207993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:20.637 [2024-07-12 14:54:46.208037] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:20.637 [2024-07-12 14:54:46.208061] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:20.637 [2024-07-12 14:54:46.256004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:20.637 [2024-07-12 14:54:46.256074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.637 [2024-07-12 14:54:46.256100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f26d236800 00:06:20.637 [2024-07-12 14:54:46.256108] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.637 [2024-07-12 14:54:46.256546] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.637 [2024-07-12 14:54:46.256575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:20.637 Running I/O for 5 seconds... 
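The passthru notices above show how the verify run builds its TestPT bdev: vbdev_passthru matches the base bdev named in the JSON config (Malloc3), opens and claims it, and registers the pt_bdev that bdevperf then drives. As a rough illustration only, a similar stack could be assembled by hand against a running SPDK target with rpc.py; the flag names below are from memory and worth confirming with `rpc.py bdev_passthru_create -h`, and the output path is just an example:

  # from the spdk repo root, with an SPDK app listening on the default RPC socket
  scripts/rpc.py bdev_malloc_create -b Malloc3 32 512        # 32 MiB with 512 B blocks = 65536 blocks, matching the dump above
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT   # layer the passthru vbdev on the malloc base
  scripts/rpc.py save_config > bdev.json                     # capture a config replayable via bdevperf --json / fio --spdk_json_conf

In the CI run the equivalent configuration is presumably pre-generated into /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json by the test scripts before bdevperf starts.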
00:06:25.903 00:06:25.903 Latency(us) 00:06:25.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.903 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x1000 00:06:25.903 Malloc0 : 5.02 6818.04 26.63 0.00 0.00 18740.92 61.91 51713.90 00:06:25.903 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x1000 length 0x1000 00:06:25.903 Malloc0 : 5.03 171.64 0.67 0.00 0.00 745145.55 610.68 1189657.92 00:06:25.903 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x800 00:06:25.903 Malloc1p0 : 5.02 6373.10 24.89 0.00 0.00 20072.22 243.90 25022.85 00:06:25.903 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x800 length 0x800 00:06:25.903 Malloc1p0 : 5.01 6764.40 26.42 0.00 0.00 18911.15 247.62 24427.07 00:06:25.903 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x800 00:06:25.903 Malloc1p1 : 5.02 6372.77 24.89 0.00 0.00 20070.20 245.76 24665.38 00:06:25.903 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x800 length 0x800 00:06:25.903 Malloc1p1 : 5.01 6763.90 26.42 0.00 0.00 18909.31 245.76 24069.60 00:06:25.903 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p0 : 5.02 6372.37 24.89 0.00 0.00 20068.36 240.17 24546.23 00:06:25.903 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p0 : 5.02 6763.48 26.42 0.00 0.00 18907.45 247.62 22163.10 00:06:25.903 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p1 : 5.02 6371.99 24.89 0.00 0.00 20066.30 238.31 24188.76 00:06:25.903 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p1 : 5.02 6763.10 26.42 0.00 0.00 18905.39 245.76 21090.69 00:06:25.903 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p2 : 5.02 6371.67 24.89 0.00 0.00 20063.69 240.17 23950.44 00:06:25.903 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p2 : 5.02 6762.66 26.42 0.00 0.00 18903.19 249.48 20614.06 00:06:25.903 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p3 : 5.02 6371.15 24.89 0.00 0.00 20061.42 240.17 23592.98 00:06:25.903 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p3 : 5.02 6762.15 26.41 0.00 0.00 18901.18 247.62 20256.60 00:06:25.903 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p4 : 5.02 6370.82 24.89 0.00 0.00 20059.71 242.04 21090.69 
00:06:25.903 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p4 : 5.02 6761.76 26.41 0.00 0.00 18899.13 245.76 19660.81 00:06:25.903 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p5 : 5.02 6370.52 24.88 0.00 0.00 20057.23 269.96 20256.60 00:06:25.903 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p5 : 5.02 6761.41 26.41 0.00 0.00 18897.06 277.41 19303.34 00:06:25.903 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p6 : 5.02 6370.17 24.88 0.00 0.00 20055.04 266.24 19303.34 00:06:25.903 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p6 : 5.02 6761.00 26.41 0.00 0.00 18894.98 275.55 18826.72 00:06:25.903 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x200 00:06:25.903 Malloc2p7 : 5.02 6369.88 24.88 0.00 0.00 20052.58 245.76 19899.13 00:06:25.903 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x200 length 0x200 00:06:25.903 Malloc2p7 : 5.02 6760.64 26.41 0.00 0.00 18892.84 243.90 18707.56 00:06:25.903 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x1000 00:06:25.903 TestPT : 5.02 6349.05 24.80 0.00 0.00 20100.29 901.12 19899.13 00:06:25.903 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x1000 length 0x1000 00:06:25.903 TestPT : 5.02 5420.92 21.18 0.00 0.00 23556.17 131.26 64344.48 00:06:25.903 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x2000 00:06:25.903 raid0 : 5.02 6369.43 24.88 0.00 0.00 20046.00 255.07 19541.66 00:06:25.903 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x2000 length 0x2000 00:06:25.903 raid0 : 5.02 6759.86 26.41 0.00 0.00 18887.12 255.07 19065.03 00:06:25.903 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x2000 00:06:25.903 concat0 : 5.02 6369.15 24.88 0.00 0.00 20043.42 243.90 21805.63 00:06:25.903 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x2000 length 0x2000 00:06:25.903 concat0 : 5.02 6759.44 26.40 0.00 0.00 18885.22 253.21 22401.41 00:06:25.903 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x1000 00:06:25.903 raid1 : 5.02 6368.70 24.88 0.00 0.00 20040.50 329.54 23592.98 00:06:25.903 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x1000 length 0x1000 00:06:25.903 raid1 : 5.02 6758.99 26.40 0.00 0.00 18882.33 377.95 24069.60 00:06:25.903 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x0 length 0x4e2 00:06:25.903 
AIO0 : 5.13 716.14 2.80 0.00 0.00 177782.91 13047.63 371768.10 00:06:25.903 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:25.903 Verification LBA range: start 0x4e2 length 0x4e2 00:06:25.903 AIO0 : 5.13 718.44 2.81 0.00 0.00 177177.01 13822.15 364142.09 00:06:25.903 =================================================================================================================== 00:06:25.903 Total : 190918.75 745.78 0.00 0.00 21440.46 61.91 1189657.92 00:06:26.160 00:06:26.160 real 0m6.286s 00:06:26.160 user 0m10.358s 00:06:26.160 sys 0m0.695s 00:06:26.160 14:54:51 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.160 14:54:51 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.160 ************************************ 00:06:26.160 END TEST bdev_verify 00:06:26.160 ************************************ 00:06:26.160 14:54:51 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:26.160 14:54:51 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.160 14:54:51 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:26.160 14:54:51 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.160 14:54:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:26.160 ************************************ 00:06:26.160 START TEST bdev_verify_big_io 00:06:26.160 ************************************ 00:06:26.160 14:54:51 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.160 [2024-07-12 14:54:51.824423] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:06:26.160 [2024-07-12 14:54:51.824653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:26.724 EAL: TSC is not safe to use in SMP mode 00:06:26.724 EAL: TSC is not invariant 00:06:26.724 [2024-07-12 14:54:52.381448] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.724 [2024-07-12 14:54:52.481842] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:26.724 [2024-07-12 14:54:52.481934] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:26.724 [2024-07-12 14:54:52.484680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.724 [2024-07-12 14:54:52.484671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.981 [2024-07-12 14:54:52.542730] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:26.981 [2024-07-12 14:54:52.542807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:26.981 [2024-07-12 14:54:52.550717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:26.981 [2024-07-12 14:54:52.550774] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:26.981 [2024-07-12 14:54:52.558731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:26.981 [2024-07-12 14:54:52.558782] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:26.981 [2024-07-12 14:54:52.558791] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:26.981 [2024-07-12 14:54:52.606736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:26.981 [2024-07-12 14:54:52.606797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.981 [2024-07-12 14:54:52.606808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6da05a36800 00:06:26.981 [2024-07-12 14:54:52.606816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.981 [2024-07-12 14:54:52.607204] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.981 [2024-07-12 14:54:52.607224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:26.981 [2024-07-12 14:54:52.708167] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.708439] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.708640] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.708821] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.708990] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.709157] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.709335] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.709507] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.709692] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.709874] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710056] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710227] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710401] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710578] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710745] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.710908] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:26.981 [2024-07-12 14:54:52.712717] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:26.981 [2024-07-12 14:54:52.712929] bdevperf.c:1822:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:26.981 Running I/O for 5 seconds... 
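The queue-depth warnings above are a property of bdevperf's verify workload: each outstanding request has to cover a distinct LBA range, so the usable depth per job is bounded by bdev capacity divided by I/O size, and with `-C -m 0x3` each bdev appears to be driven by two jobs (one per core), halving that bound again; that last part is an inference from the numbers, not something the log states. A quick arithmetic check against the clamps reported above:

  # blocks * block_size / io_size / jobs_per_bdev
  echo $(( 8192 * 512  / 65536 / 2 ))   # Malloc2p0..Malloc2p7 -> 32
  echo $(( 5000 * 2048 / 65536 / 2 ))   # AIO0 -> 78

The earlier 4 KiB verify pass (-o 4096) has a bound 16x larger, which is why only this 64 KiB big-I/O pass trips the warnings; rerunning it with `-q 32` should keep the requested depth under every per-bdev limit and silence them.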
00:06:32.244 00:06:32.244 Latency(us) 00:06:32.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.244 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x100 00:06:32.244 Malloc0 : 5.05 4029.48 251.84 0.00 0.00 31681.66 76.80 82456.26 00:06:32.244 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x100 length 0x100 00:06:32.244 Malloc0 : 5.04 3964.51 247.78 0.00 0.00 32202.28 77.27 103904.42 00:06:32.244 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x80 00:06:32.244 Malloc1p0 : 5.07 1021.68 63.86 0.00 0.00 124725.67 673.98 179211.29 00:06:32.244 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x80 length 0x80 00:06:32.244 Malloc1p0 : 5.06 1380.56 86.29 0.00 0.00 92229.53 826.65 134408.47 00:06:32.244 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x80 00:06:32.244 Malloc1p1 : 5.08 522.35 32.65 0.00 0.00 243541.05 411.46 301227.49 00:06:32.244 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x80 length 0x80 00:06:32.244 Malloc1p1 : 5.08 516.21 32.26 0.00 0.00 246260.36 430.08 291694.97 00:06:32.244 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x20 00:06:32.244 Malloc2p0 : 5.06 505.64 31.60 0.00 0.00 62863.24 268.10 106287.55 00:06:32.244 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x20 length 0x20 00:06:32.244 Malloc2p0 : 5.07 502.24 31.39 0.00 0.00 63279.55 277.41 95325.15 00:06:32.244 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x20 00:06:32.244 Malloc2p1 : 5.06 505.62 31.60 0.00 0.00 62832.96 247.62 105334.30 00:06:32.244 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x20 length 0x20 00:06:32.244 Malloc2p1 : 5.07 502.22 31.39 0.00 0.00 63264.98 245.76 94371.90 00:06:32.244 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x20 00:06:32.244 Malloc2p2 : 5.06 505.59 31.60 0.00 0.00 62807.03 307.20 104381.04 00:06:32.244 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x20 length 0x20 00:06:32.244 Malloc2p2 : 5.07 502.19 31.39 0.00 0.00 63233.23 247.62 93895.28 00:06:32.244 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x0 length 0x20 00:06:32.244 Malloc2p3 : 5.06 505.57 31.60 0.00 0.00 62785.08 316.51 103427.79 00:06:32.244 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.244 Verification LBA range: start 0x20 length 0x20 00:06:32.244 Malloc2p3 : 5.07 502.17 31.39 0.00 0.00 63205.25 260.65 92942.03 00:06:32.244 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x20 00:06:32.245 Malloc2p4 : 5.06 505.54 31.60 0.00 0.00 62758.74 297.89 102474.54 00:06:32.245 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x20 length 0x20 00:06:32.245 Malloc2p4 : 5.07 502.14 31.38 0.00 0.00 63180.56 266.24 91988.77 00:06:32.245 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x20 00:06:32.245 Malloc2p5 : 5.06 505.52 31.59 0.00 0.00 62736.01 245.76 101521.29 00:06:32.245 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x20 length 0x20 00:06:32.245 Malloc2p5 : 5.07 502.12 31.38 0.00 0.00 63160.25 273.69 91035.52 00:06:32.245 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x20 00:06:32.245 Malloc2p6 : 5.06 505.49 31.59 0.00 0.00 62709.50 258.79 100568.04 00:06:32.245 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x20 length 0x20 00:06:32.245 Malloc2p6 : 5.07 502.10 31.38 0.00 0.00 63139.00 243.90 90082.27 00:06:32.245 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x20 00:06:32.245 Malloc2p7 : 5.06 505.46 31.59 0.00 0.00 62686.23 284.86 99614.79 00:06:32.245 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x20 length 0x20 00:06:32.245 Malloc2p7 : 5.07 502.07 31.38 0.00 0.00 63127.12 286.72 89605.64 00:06:32.245 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x100 00:06:32.245 TestPT : 5.11 519.67 32.48 0.00 0.00 242572.42 3961.95 234499.88 00:06:32.245 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x100 length 0x100 00:06:32.245 TestPT : 5.19 283.62 17.73 0.00 0.00 443657.18 6374.87 478532.27 00:06:32.245 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x200 00:06:32.245 raid0 : 5.08 525.81 32.86 0.00 0.00 240510.61 368.64 280255.95 00:06:32.245 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x200 length 0x200 00:06:32.245 raid0 : 5.08 519.23 32.45 0.00 0.00 243505.13 370.50 272629.94 00:06:32.245 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x200 00:06:32.245 concat0 : 5.09 525.43 32.84 0.00 0.00 240313.14 363.05 274536.44 00:06:32.245 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x200 length 0x200 00:06:32.245 concat0 : 5.09 522.32 32.64 0.00 0.00 241814.06 376.09 265003.93 00:06:32.245 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x100 00:06:32.245 raid1 : 5.08 528.69 33.04 0.00 0.00 238481.24 467.32 265003.93 00:06:32.245 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x100 length 0x100 00:06:32.245 raid1 : 5.08 525.55 32.85 0.00 0.00 239901.94 458.01 255471.41 00:06:32.245 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x0 length 0x4e 00:06:32.245 AIO0 : 5.08 525.13 32.82 0.00 
0.00 146157.88 320.23 163959.26 00:06:32.245 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:06:32.245 Verification LBA range: start 0x4e length 0x4e 00:06:32.245 AIO0 : 5.08 523.52 32.72 0.00 0.00 146634.69 480.35 154426.75 00:06:32.245 =================================================================================================================== 00:06:32.245 Total : 24495.44 1530.97 0.00 0.00 99737.37 76.80 478532.27 00:06:32.504 00:06:32.504 real 0m6.371s 00:06:32.504 user 0m11.240s 00:06:32.504 sys 0m0.715s 00:06:32.504 14:54:58 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.504 14:54:58 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:32.504 ************************************ 00:06:32.504 END TEST bdev_verify_big_io 00:06:32.504 ************************************ 00:06:32.504 14:54:58 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:32.504 14:54:58 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.504 14:54:58 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.504 14:54:58 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.504 14:54:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:32.504 ************************************ 00:06:32.504 START TEST bdev_write_zeroes 00:06:32.504 ************************************ 00:06:32.504 14:54:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.504 [2024-07-12 14:54:58.244420] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:06:32.504 [2024-07-12 14:54:58.244648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:33.071 EAL: TSC is not safe to use in SMP mode 00:06:33.071 EAL: TSC is not invariant 00:06:33.071 [2024-07-12 14:54:58.788267] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.329 [2024-07-12 14:54:58.883881] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:33.329 [2024-07-12 14:54:58.886467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.329 [2024-07-12 14:54:58.946915] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:33.329 [2024-07-12 14:54:58.946976] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:33.329 [2024-07-12 14:54:58.954909] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:33.329 [2024-07-12 14:54:58.954956] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:33.329 [2024-07-12 14:54:58.962928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:33.329 [2024-07-12 14:54:58.962988] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:33.329 [2024-07-12 14:54:58.963010] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:33.329 [2024-07-12 14:54:59.010932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:33.329 [2024-07-12 14:54:59.011002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.329 [2024-07-12 14:54:59.011013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d2b24a36800 00:06:33.329 [2024-07-12 14:54:59.011022] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.329 [2024-07-12 14:54:59.011383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.329 [2024-07-12 14:54:59.011404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:33.329 Running I/O for 1 seconds... 
00:06:34.705 00:06:34.705 Latency(us) 00:06:34.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:34.705 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc0 : 1.01 32936.99 128.66 0.00 0.00 3885.59 169.43 7357.91 00:06:34.705 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc1p0 : 1.01 32933.22 128.65 0.00 0.00 3884.18 178.73 7149.39 00:06:34.705 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc1p1 : 1.01 32929.79 128.63 0.00 0.00 3882.49 176.87 6911.07 00:06:34.705 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p0 : 1.01 32923.51 128.61 0.00 0.00 3881.94 203.87 6642.97 00:06:34.705 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p1 : 1.01 32920.31 128.59 0.00 0.00 3880.85 172.22 6434.45 00:06:34.705 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p2 : 1.01 32916.55 128.58 0.00 0.00 3879.58 170.36 6225.92 00:06:34.705 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p3 : 1.01 32913.38 128.57 0.00 0.00 3878.20 173.15 6017.40 00:06:34.705 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p4 : 1.01 32910.01 128.55 0.00 0.00 3877.98 172.22 5868.45 00:06:34.705 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p5 : 1.01 32907.11 128.54 0.00 0.00 3876.28 171.29 5719.51 00:06:34.705 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p6 : 1.01 32902.60 128.53 0.00 0.00 3875.34 173.15 5510.99 00:06:34.705 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 Malloc2p7 : 1.01 32899.33 128.51 0.00 0.00 3874.75 175.94 5272.67 00:06:34.705 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 TestPT : 1.01 32895.35 128.50 0.00 0.00 3873.36 211.32 5034.36 00:06:34.705 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 raid0 : 1.01 32889.87 128.48 0.00 0.00 3871.39 273.69 4736.47 00:06:34.705 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 concat0 : 1.01 32983.35 128.84 0.00 0.00 3858.23 253.21 4825.84 00:06:34.705 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 raid1 : 1.01 32977.30 128.82 0.00 0.00 3856.89 322.09 4885.41 00:06:34.705 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:34.705 AIO0 : 1.05 2573.99 10.05 0.00 0.00 48363.92 539.93 157286.50 00:06:34.705 =================================================================================================================== 00:06:34.706 Total : 496412.65 1939.11 0.00 0.00 4116.71 169.43 157286.50 00:06:34.706 00:06:34.706 real 0m2.192s 00:06:34.706 user 0m1.452s 00:06:34.706 sys 0m0.623s 00:06:34.706 14:55:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.706 ************************************ 00:06:34.706 END TEST bdev_write_zeroes 00:06:34.706 ************************************ 00:06:34.706 14:55:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:34.706 14:55:00 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:06:34.706 14:55:00 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:34.706 14:55:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.706 14:55:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.706 14:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:34.706 ************************************ 00:06:34.706 START TEST bdev_json_nonenclosed 00:06:34.706 ************************************ 00:06:34.706 14:55:00 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:34.706 [2024-07-12 14:55:00.480457] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:06:34.706 [2024-07-12 14:55:00.480711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:35.326 EAL: TSC is not safe to use in SMP mode 00:06:35.326 EAL: TSC is not invariant 00:06:35.326 [2024-07-12 14:55:01.003277] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.326 [2024-07-12 14:55:01.087759] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:35.326 [2024-07-12 14:55:01.089870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.326 [2024-07-12 14:55:01.089925] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:35.326 [2024-07-12 14:55:01.089935] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:35.326 [2024-07-12 14:55:01.089944] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.600 00:06:35.600 real 0m0.726s 00:06:35.600 user 0m0.188s 00:06:35.600 sys 0m0.537s 00:06:35.600 14:55:01 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:06:35.600 14:55:01 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.600 14:55:01 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:35.600 ************************************ 00:06:35.600 END TEST bdev_json_nonenclosed 00:06:35.600 ************************************ 00:06:35.600 14:55:01 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:35.600 14:55:01 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:06:35.601 14:55:01 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.601 14:55:01 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.601 14:55:01 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.601 14:55:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:35.601 ************************************ 00:06:35.601 START TEST bdev_json_nonarray 00:06:35.601 ************************************ 00:06:35.601 14:55:01 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:35.601 [2024-07-12 14:55:01.255307] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:06:35.601 [2024-07-12 14:55:01.255618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:36.167 EAL: TSC is not safe to use in SMP mode 00:06:36.167 EAL: TSC is not invariant 00:06:36.167 [2024-07-12 14:55:01.803753] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.167 [2024-07-12 14:55:01.883703] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:36.167 [2024-07-12 14:55:01.885865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.167 [2024-07-12 14:55:01.885924] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
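Both errors above are the intended outcome of these two negative tests: json_config_prepare_ctx rejects a --json file unless it is a single JSON object ("enclosed in {}") whose "subsystems" member is an array. For contrast, a minimal config that passes both checks, written here as a shell heredoc; the malloc parameters mirror the 262144 x 512-byte bdevs created elsewhere in this log, and the file and bdev names are illustrative rather than the test's own nonenclosed.json/nonarray.json:

cat > /tmp/bdev_min.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF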
00:06:36.167 [2024-07-12 14:55:01.885935] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:36.167 [2024-07-12 14:55:01.885943] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.426 00:06:36.426 real 0m0.756s 00:06:36.426 user 0m0.170s 00:06:36.426 sys 0m0.584s 00:06:36.426 14:55:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:06:36.426 14:55:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.426 14:55:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:36.426 ************************************ 00:06:36.426 END TEST bdev_json_nonarray 00:06:36.426 ************************************ 00:06:36.426 14:55:02 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:36.426 14:55:02 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:06:36.426 14:55:02 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:06:36.426 14:55:02 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:06:36.426 14:55:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.426 14:55:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.426 14:55:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:36.426 ************************************ 00:06:36.426 START TEST bdev_qos 00:06:36.426 ************************************ 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48153 00:06:36.426 Process qos testing pid: 48153 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48153' 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48153 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48153 ']' 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.426 14:55:02 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:36.426 [2024-07-12 14:55:02.058310] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:06:36.426 [2024-07-12 14:55:02.058471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:36.993 EAL: TSC is not safe to use in SMP mode 00:06:36.993 EAL: TSC is not invariant 00:06:36.993 [2024-07-12 14:55:02.593681] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.993 [2024-07-12 14:55:02.692057] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:36.993 [2024-07-12 14:55:02.694567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.562 Malloc_0 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.562 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.562 [ 00:06:37.562 { 00:06:37.562 "name": "Malloc_0", 00:06:37.562 "aliases": [ 00:06:37.562 "b81250ca-405e-11ef-b2a4-e9dca065e82e" 00:06:37.562 ], 00:06:37.562 "product_name": "Malloc disk", 00:06:37.562 "block_size": 512, 00:06:37.562 "num_blocks": 262144, 00:06:37.562 "uuid": "b81250ca-405e-11ef-b2a4-e9dca065e82e", 00:06:37.562 "assigned_rate_limits": { 00:06:37.562 "rw_ios_per_sec": 0, 00:06:37.562 "rw_mbytes_per_sec": 0, 00:06:37.562 "r_mbytes_per_sec": 0, 00:06:37.562 "w_mbytes_per_sec": 0 00:06:37.562 }, 00:06:37.562 "claimed": false, 00:06:37.562 "zoned": false, 00:06:37.562 "supported_io_types": { 00:06:37.562 "read": true, 00:06:37.562 "write": true, 00:06:37.562 "unmap": true, 00:06:37.562 "flush": true, 00:06:37.562 "reset": true, 00:06:37.562 "nvme_admin": false, 00:06:37.562 "nvme_io": false, 00:06:37.562 "nvme_io_md": false, 00:06:37.563 "write_zeroes": true, 00:06:37.563 "zcopy": true, 00:06:37.563 
"get_zone_info": false, 00:06:37.563 "zone_management": false, 00:06:37.563 "zone_append": false, 00:06:37.563 "compare": false, 00:06:37.563 "compare_and_write": false, 00:06:37.563 "abort": true, 00:06:37.563 "seek_hole": false, 00:06:37.563 "seek_data": false, 00:06:37.563 "copy": true, 00:06:37.563 "nvme_iov_md": false 00:06:37.563 }, 00:06:37.563 "memory_domains": [ 00:06:37.563 { 00:06:37.563 "dma_device_id": "system", 00:06:37.563 "dma_device_type": 1 00:06:37.563 }, 00:06:37.563 { 00:06:37.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.563 "dma_device_type": 2 00:06:37.563 } 00:06:37.563 ], 00:06:37.563 "driver_specific": {} 00:06:37.563 } 00:06:37.563 ] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.563 Null_1 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:37.563 [ 00:06:37.563 { 00:06:37.563 "name": "Null_1", 00:06:37.563 "aliases": [ 00:06:37.563 "b8186ad6-405e-11ef-b2a4-e9dca065e82e" 00:06:37.563 ], 00:06:37.563 "product_name": "Null disk", 00:06:37.563 "block_size": 512, 00:06:37.563 "num_blocks": 262144, 00:06:37.563 "uuid": "b8186ad6-405e-11ef-b2a4-e9dca065e82e", 00:06:37.563 "assigned_rate_limits": { 00:06:37.563 "rw_ios_per_sec": 0, 00:06:37.563 "rw_mbytes_per_sec": 0, 00:06:37.563 "r_mbytes_per_sec": 0, 00:06:37.563 "w_mbytes_per_sec": 0 00:06:37.563 }, 00:06:37.563 "claimed": false, 00:06:37.563 "zoned": false, 00:06:37.563 "supported_io_types": { 00:06:37.563 "read": true, 00:06:37.563 "write": true, 00:06:37.563 "unmap": false, 00:06:37.563 "flush": false, 00:06:37.563 "reset": true, 00:06:37.563 "nvme_admin": false, 00:06:37.563 "nvme_io": false, 00:06:37.563 "nvme_io_md": false, 00:06:37.563 "write_zeroes": true, 00:06:37.563 "zcopy": 
false, 00:06:37.563 "get_zone_info": false, 00:06:37.563 "zone_management": false, 00:06:37.563 "zone_append": false, 00:06:37.563 "compare": false, 00:06:37.563 "compare_and_write": false, 00:06:37.563 "abort": true, 00:06:37.563 "seek_hole": false, 00:06:37.563 "seek_data": false, 00:06:37.563 "copy": false, 00:06:37.563 "nvme_iov_md": false 00:06:37.563 }, 00:06:37.563 "driver_specific": {} 00:06:37.563 } 00:06:37.563 ] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:37.563 14:55:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:37.563 Running I/O for 60 seconds... 
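At this point the QoS suite has created its two targets over bdevperf's RPC socket (the 128 MiB / 512-byte-block malloc bdev Malloc_0 and the null bdev Null_1) and is about to measure the unthrottled IOPS from which it derives a limit. rpc_cmd in the trace is the harness's wrapper around the same RPCs; driven by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket, the sequence looks roughly like this (the 155000 figure is the limit the run below settles on, about a quarter of the measured unthrottled rate):

# create the two test bdevs (same arguments as the rpc_cmd calls above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create Null_1 128 512

# cap reads+writes on Malloc_0 at 155000 IOPS, then read the limit back
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 155000 Malloc_0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc_0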
00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 621364.36 2485457.46 0.00 0.00 2678784.00 0.00 0.00 ' 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=621364.36 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 621364 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=621364 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=155000 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 155000 -gt 1000 ']' 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 155000 Malloc_0 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.882 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:43.160 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.160 14:55:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 155000 IOPS Malloc_0 00:06:43.160 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:43.160 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.160 14:55:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:43.160 ************************************ 00:06:43.160 START TEST bdev_qos_iops 00:06:43.160 ************************************ 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 155000 IOPS Malloc_0 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=155000 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:43.160 14:55:08 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 154999.67 619998.69 0.00 0.00 664640.00 0.00 0.00 ' 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=154999.67 00:06:48.425 14:55:14 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 154999 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=154999 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=139500 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=170500 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 154999 -lt 139500 ']' 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 154999 -gt 170500 ']' 00:06:48.425 00:06:48.425 real 0m5.523s 00:06:48.425 user 0m0.146s 00:06:48.425 sys 0m0.016s 00:06:48.425 ************************************ 00:06:48.425 END TEST bdev_qos_iops 00:06:48.425 ************************************ 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.425 14:55:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:48.683 14:55:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 371174.44 1484697.77 0.00 0.00 1591296.00 0.00 0.00 ' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1591296.00 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1591296 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1591296 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=155 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 155 -lt 2 ']' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 155 Null_1 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 155 BANDWIDTH Null_1 00:06:53.944 14:55:19 
blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.944 14:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:53.944 ************************************ 00:06:53.944 START TEST bdev_qos_bw 00:06:53.944 ************************************ 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 155 BANDWIDTH Null_1 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=155 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:53.944 14:55:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 39677.47 158709.89 0.00 0.00 171260.00 0.00 0.00 ' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=171260.00 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 171260 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=171260 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=158720 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=142848 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=174592 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 171260 -lt 142848 ']' 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 171260 -gt 174592 ']' 00:07:00.500 00:07:00.500 real 0m5.441s 00:07:00.500 user 0m0.117s 00:07:00.500 sys 0m0.016s 00:07:00.500 ************************************ 00:07:00.500 END TEST bdev_qos_bw 00:07:00.500 ************************************ 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1142 -- # return 0 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.500 14:55:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:00.500 ************************************ 00:07:00.500 START TEST bdev_qos_ro_bw 00:07:00.500 ************************************ 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:07:00.500 14:55:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.86 2047.45 0.00 0.00 2212.00 0.00 0.00 ' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2212.00 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2212 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2212 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:07:05.787 14:55:30 
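The pass/fail criterion applied after each of these measurements is a +/-10% band around the configured limit, computed with shell integer arithmetic: 139500/170500 for the 155000 IOPS limit above, 142848/174592 for 158720 (the 155 MiB/s limit in the units iostat.py reports), and 1843/2252 for the 2048 figure in the read-only test below. A minimal sketch of that check; the variable names echo the trace, but this is an illustration rather than the blockdev.sh source:

qos_limit=155000                      # configured limit (IOPS in this example)
qos_result=154999                     # rate reported by iostat.py
lower_limit=$((qos_limit * 9 / 10))   # 139500
upper_limit=$((qos_limit * 11 / 10))  # 170500
if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "measured rate outside the +/-10% tolerance" >&2
    exit 1
fi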
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2212 -lt 1843 ']' 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2212 -gt 2252 ']' 00:07:05.787 00:07:05.787 real 0m5.533s 00:07:05.787 user 0m0.166s 00:07:05.787 sys 0m0.030s 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.787 14:55:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 ************************************ 00:07:05.787 END TEST bdev_qos_ro_bw 00:07:05.787 ************************************ 00:07:05.787 14:55:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:07:05.787 14:55:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:07:05.787 14:55:30 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.787 14:55:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 00:07:05.787 Latency(us) 00:07:05.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.787 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:05.787 Malloc_0 : 28.00 209388.02 817.92 0.00 0.00 1211.87 359.33 503316.81 00:07:05.787 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:05.787 Null_1 : 28.04 282259.14 1102.57 0.00 0.00 906.59 68.42 31218.99 00:07:05.787 =================================================================================================================== 00:07:05.787 Total : 491647.16 1920.50 0.00 0.00 1036.51 68.42 503316.81 00:07:05.787 0 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48153 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48153 ']' 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48153 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48153 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:05.787 killing process with pid 48153 00:07:05.787 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48153' 00:07:05.787 Received shutdown signal, test time was about 28.052636 seconds 00:07:05.787 00:07:05.787 Latency(us) 00:07:05.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.787 
=================================================================================================================== 00:07:05.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:05.788 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48153 00:07:05.788 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48153 00:07:05.788 14:55:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:07:05.788 00:07:05.788 real 0m29.456s 00:07:05.788 user 0m30.186s 00:07:05.788 sys 0m0.813s 00:07:05.788 ************************************ 00:07:05.788 END TEST bdev_qos 00:07:05.788 ************************************ 00:07:05.788 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.788 14:55:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:07:05.788 14:55:31 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:05.788 14:55:31 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:07:05.788 14:55:31 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.788 14:55:31 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.788 14:55:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:05.788 ************************************ 00:07:05.788 START TEST bdev_qd_sampling 00:07:05.788 ************************************ 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48378 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:07:05.788 Process bdev QD sampling period testing pid: 48378 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48378' 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48378 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48378 ']' 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.788 14:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:05.788 [2024-07-12 14:55:31.555999] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
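The qd_sampling suite starting here turns on periodic queue-depth polling for a bdev and then reads the samples back through bdev_get_iostat; further down, the trace shows queue_depth_polling_period 10, a sampled queue_depth of 512, and io_time/weighted_io_time for Malloc_QD. Outside the harness the same two RPCs can be issued directly; a minimal sketch (the jq filter just pulls the fields of interest out of the iostat blob shown later in this log):

# enable queue-depth sampling on Malloc_QD (same arguments as the rpc_cmd call below)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10

# read the sampled values back out of the per-bdev iostat
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_QD \
    | jq '.bdevs[0] | {queue_depth, queue_depth_polling_period, io_time, weighted_io_time}'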
00:07:05.788 [2024-07-12 14:55:31.556197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:06.356 EAL: TSC is not safe to use in SMP mode 00:07:06.356 EAL: TSC is not invariant 00:07:06.356 [2024-07-12 14:55:32.078339] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.615 [2024-07-12 14:55:32.175658] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:06.615 [2024-07-12 14:55:32.175718] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:06.615 [2024-07-12 14:55:32.179000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.615 [2024-07-12 14:55:32.178989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.181 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.181 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:07:07.181 14:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:07:07.181 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.181 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:07.181 Malloc_QD 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:07.182 [ 00:07:07.182 { 00:07:07.182 "name": "Malloc_QD", 00:07:07.182 "aliases": [ 00:07:07.182 "c9b5df10-405e-11ef-b2a4-e9dca065e82e" 00:07:07.182 ], 00:07:07.182 "product_name": "Malloc disk", 00:07:07.182 "block_size": 512, 00:07:07.182 "num_blocks": 262144, 00:07:07.182 "uuid": "c9b5df10-405e-11ef-b2a4-e9dca065e82e", 00:07:07.182 "assigned_rate_limits": { 00:07:07.182 "rw_ios_per_sec": 0, 00:07:07.182 "rw_mbytes_per_sec": 0, 00:07:07.182 "r_mbytes_per_sec": 0, 00:07:07.182 "w_mbytes_per_sec": 0 00:07:07.182 }, 00:07:07.182 "claimed": false, 
00:07:07.182 "zoned": false, 00:07:07.182 "supported_io_types": { 00:07:07.182 "read": true, 00:07:07.182 "write": true, 00:07:07.182 "unmap": true, 00:07:07.182 "flush": true, 00:07:07.182 "reset": true, 00:07:07.182 "nvme_admin": false, 00:07:07.182 "nvme_io": false, 00:07:07.182 "nvme_io_md": false, 00:07:07.182 "write_zeroes": true, 00:07:07.182 "zcopy": true, 00:07:07.182 "get_zone_info": false, 00:07:07.182 "zone_management": false, 00:07:07.182 "zone_append": false, 00:07:07.182 "compare": false, 00:07:07.182 "compare_and_write": false, 00:07:07.182 "abort": true, 00:07:07.182 "seek_hole": false, 00:07:07.182 "seek_data": false, 00:07:07.182 "copy": true, 00:07:07.182 "nvme_iov_md": false 00:07:07.182 }, 00:07:07.182 "memory_domains": [ 00:07:07.182 { 00:07:07.182 "dma_device_id": "system", 00:07:07.182 "dma_device_type": 1 00:07:07.182 }, 00:07:07.182 { 00:07:07.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.182 "dma_device_type": 2 00:07:07.182 } 00:07:07.182 ], 00:07:07.182 "driver_specific": {} 00:07:07.182 } 00:07:07.182 ] 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:07.182 14:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:07:07.182 Running I/O for 5 seconds... 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:07:09.084 "tick_rate": 2199998543, 00:07:09.084 "ticks": 731438476403, 00:07:09.084 "bdevs": [ 00:07:09.084 { 00:07:09.084 "name": "Malloc_QD", 00:07:09.084 "bytes_read": 12005184000, 00:07:09.084 "num_read_ops": 2930947, 00:07:09.084 "bytes_written": 0, 00:07:09.084 "num_write_ops": 0, 00:07:09.084 "bytes_unmapped": 0, 00:07:09.084 "num_unmap_ops": 0, 00:07:09.084 "bytes_copied": 0, 00:07:09.084 "num_copy_ops": 0, 00:07:09.084 "read_latency_ticks": 2166233413675, 00:07:09.084 "max_read_latency_ticks": 1022399, 00:07:09.084 "min_read_latency_ticks": 
38616, 00:07:09.084 "write_latency_ticks": 0, 00:07:09.084 "max_write_latency_ticks": 0, 00:07:09.084 "min_write_latency_ticks": 0, 00:07:09.084 "unmap_latency_ticks": 0, 00:07:09.084 "max_unmap_latency_ticks": 0, 00:07:09.084 "min_unmap_latency_ticks": 0, 00:07:09.084 "copy_latency_ticks": 0, 00:07:09.084 "max_copy_latency_ticks": 0, 00:07:09.084 "min_copy_latency_ticks": 0, 00:07:09.084 "io_error": {}, 00:07:09.084 "queue_depth_polling_period": 10, 00:07:09.084 "queue_depth": 512, 00:07:09.084 "io_time": 360, 00:07:09.084 "weighted_io_time": 184320 00:07:09.084 } 00:07:09.084 ] 00:07:09.084 }' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:09.084 00:07:09.084 Latency(us) 00:07:09.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.084 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:09.084 Malloc_QD : 1.95 763111.61 2980.90 0.00 0.00 335.20 71.68 465.45 00:07:09.084 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:09.084 Malloc_QD : 1.95 759751.27 2967.78 0.00 0.00 336.68 60.04 465.45 00:07:09.084 =================================================================================================================== 00:07:09.084 Total : 1522862.88 5948.68 0.00 0.00 335.94 60.04 465.45 00:07:09.084 0 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48378 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48378 ']' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48378 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48378 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:09.084 killing process with pid 48378 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48378' 00:07:09.084 Received shutdown signal, test time was about 1.980039 seconds 00:07:09.084 00:07:09.084 Latency(us) 00:07:09.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.084 
=================================================================================================================== 00:07:09.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48378 00:07:09.084 14:55:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48378 00:07:09.343 14:55:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:07:09.343 00:07:09.343 real 0m3.494s 00:07:09.343 user 0m6.385s 00:07:09.343 sys 0m0.658s 00:07:09.343 14:55:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.343 14:55:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:07:09.343 ************************************ 00:07:09.343 END TEST bdev_qd_sampling 00:07:09.343 ************************************ 00:07:09.343 14:55:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:09.343 14:55:35 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:07:09.343 14:55:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:09.343 14:55:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.343 14:55:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:09.343 ************************************ 00:07:09.343 START TEST bdev_error 00:07:09.343 ************************************ 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48421 00:07:09.343 Process error testing pid: 48421 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48421' 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48421 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48421 ']' 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.343 14:55:35 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.343 14:55:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:09.343 [2024-07-12 14:55:35.096906] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:07:09.343 [2024-07-12 14:55:35.097120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:09.911 EAL: TSC is not safe to use in SMP mode 00:07:09.911 EAL: TSC is not invariant 00:07:09.911 [2024-07-12 14:55:35.663901] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.169 [2024-07-12 14:55:35.750424] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:10.169 [2024-07-12 14:55:35.752757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:07:10.429 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 Dev_1 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 [ 00:07:10.429 { 00:07:10.429 "name": "Dev_1", 00:07:10.429 "aliases": [ 00:07:10.429 "cbbb5da6-405e-11ef-b2a4-e9dca065e82e" 00:07:10.429 ], 00:07:10.429 "product_name": "Malloc disk", 00:07:10.429 "block_size": 512, 00:07:10.429 "num_blocks": 262144, 00:07:10.429 "uuid": "cbbb5da6-405e-11ef-b2a4-e9dca065e82e", 00:07:10.429 "assigned_rate_limits": { 00:07:10.429 "rw_ios_per_sec": 0, 00:07:10.429 "rw_mbytes_per_sec": 0, 00:07:10.429 "r_mbytes_per_sec": 0, 00:07:10.429 "w_mbytes_per_sec": 0 00:07:10.429 }, 00:07:10.429 "claimed": false, 00:07:10.429 "zoned": false, 00:07:10.429 "supported_io_types": { 00:07:10.429 "read": true, 00:07:10.429 "write": true, 00:07:10.429 "unmap": true, 00:07:10.429 "flush": true, 00:07:10.429 "reset": true, 00:07:10.429 "nvme_admin": false, 00:07:10.429 "nvme_io": false, 00:07:10.429 "nvme_io_md": false, 00:07:10.429 "write_zeroes": true, 00:07:10.429 "zcopy": true, 
00:07:10.429 "get_zone_info": false, 00:07:10.429 "zone_management": false, 00:07:10.429 "zone_append": false, 00:07:10.429 "compare": false, 00:07:10.429 "compare_and_write": false, 00:07:10.429 "abort": true, 00:07:10.429 "seek_hole": false, 00:07:10.429 "seek_data": false, 00:07:10.429 "copy": true, 00:07:10.429 "nvme_iov_md": false 00:07:10.429 }, 00:07:10.429 "memory_domains": [ 00:07:10.429 { 00:07:10.429 "dma_device_id": "system", 00:07:10.429 "dma_device_type": 1 00:07:10.429 }, 00:07:10.429 { 00:07:10.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.429 "dma_device_type": 2 00:07:10.429 } 00:07:10.429 ], 00:07:10.429 "driver_specific": {} 00:07:10.429 } 00:07:10.429 ] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:10.429 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 true 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 Dev_2 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.429 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.430 [ 00:07:10.430 { 00:07:10.430 "name": "Dev_2", 00:07:10.430 "aliases": [ 00:07:10.430 "cbc17789-405e-11ef-b2a4-e9dca065e82e" 00:07:10.430 ], 00:07:10.430 "product_name": "Malloc disk", 00:07:10.430 "block_size": 512, 00:07:10.430 "num_blocks": 262144, 00:07:10.430 "uuid": "cbc17789-405e-11ef-b2a4-e9dca065e82e", 00:07:10.430 "assigned_rate_limits": { 00:07:10.430 "rw_ios_per_sec": 0, 00:07:10.430 "rw_mbytes_per_sec": 0, 
00:07:10.430 "r_mbytes_per_sec": 0, 00:07:10.430 "w_mbytes_per_sec": 0 00:07:10.430 }, 00:07:10.430 "claimed": false, 00:07:10.430 "zoned": false, 00:07:10.430 "supported_io_types": { 00:07:10.430 "read": true, 00:07:10.430 "write": true, 00:07:10.430 "unmap": true, 00:07:10.430 "flush": true, 00:07:10.430 "reset": true, 00:07:10.430 "nvme_admin": false, 00:07:10.430 "nvme_io": false, 00:07:10.430 "nvme_io_md": false, 00:07:10.430 "write_zeroes": true, 00:07:10.430 "zcopy": true, 00:07:10.430 "get_zone_info": false, 00:07:10.430 "zone_management": false, 00:07:10.430 "zone_append": false, 00:07:10.430 "compare": false, 00:07:10.430 "compare_and_write": false, 00:07:10.430 "abort": true, 00:07:10.430 "seek_hole": false, 00:07:10.430 "seek_data": false, 00:07:10.430 "copy": true, 00:07:10.430 "nvme_iov_md": false 00:07:10.430 }, 00:07:10.430 "memory_domains": [ 00:07:10.430 { 00:07:10.430 "dma_device_id": "system", 00:07:10.430 "dma_device_type": 1 00:07:10.430 }, 00:07:10.430 { 00:07:10.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.430 "dma_device_type": 2 00:07:10.430 } 00:07:10.430 ], 00:07:10.430 "driver_specific": {} 00:07:10.430 } 00:07:10.430 ] 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:10.430 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:10.430 14:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.430 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:07:10.430 14:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:10.752 Running I/O for 5 seconds... 00:07:11.690 14:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48421 00:07:11.690 Process is existed as continue on error is set. Pid: 48421 00:07:11.690 14:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48421' 00:07:11.690 14:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.690 14:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:11.690 14:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.690 14:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:07:11.690 Timeout while waiting for response: 00:07:11.690 00:07:11.690 00:07:15.877 00:07:15.877 Latency(us) 00:07:15.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.877 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:15.877 EE_Dev_1 : 0.96 299776.56 1171.00 5.23 0.00 53.13 30.72 173.15 00:07:15.877 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:15.877 Dev_2 : 5.00 697124.97 2723.14 0.00 0.00 22.73 5.44 24546.23 00:07:15.877 =================================================================================================================== 00:07:15.877 Total : 996901.54 3894.15 5.23 0.00 25.04 5.44 24546.23 00:07:16.820 14:55:42 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48421 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48421 ']' 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48421 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48421 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:16.820 killing process with pid 48421 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48421' 00:07:16.820 Received shutdown signal, test time was about 5.000000 seconds 00:07:16.820 00:07:16.820 Latency(us) 00:07:16.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.820 =================================================================================================================== 00:07:16.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48421 00:07:16.820 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48421 00:07:17.116 14:55:42 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48465 00:07:17.116 Process error testing pid: 48465 00:07:17.116 14:55:42 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:07:17.116 14:55:42 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48465' 00:07:17.116 14:55:42 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48465 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48465 ']' 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.116 14:55:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:17.116 [2024-07-12 14:55:42.803357] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:17.116 [2024-07-12 14:55:42.803569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:17.681 EAL: TSC is not safe to use in SMP mode 00:07:17.681 EAL: TSC is not invariant 00:07:17.681 [2024-07-12 14:55:43.336461] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.681 [2024-07-12 14:55:43.422024] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:07:17.681 [2024-07-12 14:55:43.424220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:07:18.249 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.249 Dev_1 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.249 [ 00:07:18.249 { 00:07:18.249 "name": "Dev_1", 00:07:18.249 "aliases": [ 00:07:18.249 "d0671f41-405e-11ef-b2a4-e9dca065e82e" 00:07:18.249 ], 00:07:18.249 "product_name": "Malloc disk", 00:07:18.249 "block_size": 512, 00:07:18.249 "num_blocks": 262144, 00:07:18.249 "uuid": "d0671f41-405e-11ef-b2a4-e9dca065e82e", 00:07:18.249 "assigned_rate_limits": { 00:07:18.249 "rw_ios_per_sec": 0, 00:07:18.249 "rw_mbytes_per_sec": 0, 00:07:18.249 "r_mbytes_per_sec": 0, 00:07:18.249 "w_mbytes_per_sec": 0 00:07:18.249 }, 00:07:18.249 "claimed": false, 00:07:18.249 "zoned": false, 00:07:18.249 "supported_io_types": { 00:07:18.249 "read": true, 00:07:18.249 "write": true, 00:07:18.249 "unmap": true, 00:07:18.249 "flush": true, 00:07:18.249 "reset": true, 00:07:18.249 "nvme_admin": false, 00:07:18.249 "nvme_io": false, 00:07:18.249 "nvme_io_md": false, 00:07:18.249 "write_zeroes": true, 00:07:18.249 "zcopy": true, 00:07:18.249 "get_zone_info": false, 00:07:18.249 "zone_management": false, 00:07:18.249 "zone_append": false, 00:07:18.249 "compare": false, 00:07:18.249 "compare_and_write": false, 00:07:18.249 "abort": true, 00:07:18.249 "seek_hole": false, 00:07:18.249 "seek_data": false, 00:07:18.249 "copy": true, 00:07:18.249 "nvme_iov_md": false 00:07:18.249 }, 00:07:18.249 "memory_domains": [ 00:07:18.249 { 00:07:18.249 "dma_device_id": "system", 00:07:18.249 "dma_device_type": 1 00:07:18.249 }, 00:07:18.249 { 00:07:18.249 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.249 "dma_device_type": 2 00:07:18.249 } 00:07:18.249 ], 00:07:18.249 "driver_specific": {} 00:07:18.249 } 00:07:18.249 ] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:18.249 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.249 true 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.249 Dev_2 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.249 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:18.249 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.250 [ 00:07:18.250 { 00:07:18.250 "name": "Dev_2", 00:07:18.250 "aliases": [ 00:07:18.250 "d06c9cdd-405e-11ef-b2a4-e9dca065e82e" 00:07:18.250 ], 00:07:18.250 "product_name": "Malloc disk", 00:07:18.250 "block_size": 512, 00:07:18.250 "num_blocks": 262144, 00:07:18.250 "uuid": "d06c9cdd-405e-11ef-b2a4-e9dca065e82e", 00:07:18.250 "assigned_rate_limits": { 00:07:18.250 "rw_ios_per_sec": 0, 00:07:18.250 "rw_mbytes_per_sec": 0, 00:07:18.250 "r_mbytes_per_sec": 0, 00:07:18.250 "w_mbytes_per_sec": 0 00:07:18.250 }, 00:07:18.250 "claimed": false, 00:07:18.250 "zoned": false, 00:07:18.250 "supported_io_types": { 00:07:18.250 "read": true, 00:07:18.250 "write": true, 00:07:18.250 "unmap": true, 00:07:18.250 "flush": true, 00:07:18.250 "reset": true, 00:07:18.250 "nvme_admin": false, 00:07:18.250 "nvme_io": false, 00:07:18.250 "nvme_io_md": false, 00:07:18.250 "write_zeroes": true, 00:07:18.250 "zcopy": true, 00:07:18.250 "get_zone_info": false, 
00:07:18.250 "zone_management": false, 00:07:18.250 "zone_append": false, 00:07:18.250 "compare": false, 00:07:18.250 "compare_and_write": false, 00:07:18.250 "abort": true, 00:07:18.250 "seek_hole": false, 00:07:18.250 "seek_data": false, 00:07:18.250 "copy": true, 00:07:18.250 "nvme_iov_md": false 00:07:18.250 }, 00:07:18.250 "memory_domains": [ 00:07:18.250 { 00:07:18.250 "dma_device_id": "system", 00:07:18.250 "dma_device_type": 1 00:07:18.250 }, 00:07:18.250 { 00:07:18.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.250 "dma_device_type": 2 00:07:18.250 } 00:07:18.250 ], 00:07:18.250 "driver_specific": {} 00:07:18.250 } 00:07:18.250 ] 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:18.250 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.250 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48465 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48465 00:07:18.250 14:55:43 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.250 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:07:18.250 14:55:43 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.250 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48465 00:07:18.509 Running I/O for 5 seconds... 
00:07:18.509 task offset: 25792 on job bdev=EE_Dev_1 fails 00:07:18.509 00:07:18.509 Latency(us) 00:07:18.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.509 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:18.509 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:07:18.509 EE_Dev_1 : 0.00 164179.10 641.32 37313.43 0.00 64.49 24.44 121.95 00:07:18.509 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:18.509 Dev_2 : 0.00 197530.86 771.60 0.00 0.00 37.64 24.67 56.09 00:07:18.509 =================================================================================================================== 00:07:18.509 Total : 361709.97 1412.93 37313.43 0.00 49.93 24.44 121.95 00:07:18.509 [2024-07-12 14:55:44.099991] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.509 request: 00:07:18.509 { 00:07:18.509 "method": "perform_tests", 00:07:18.509 "req_id": 1 00:07:18.509 } 00:07:18.509 Got JSON-RPC error response 00:07:18.509 response: 00:07:18.509 { 00:07:18.509 "code": -32603, 00:07:18.509 "message": "bdevperf failed with error Operation not permitted" 00:07:18.509 } 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.509 00:07:18.509 real 0m9.230s 00:07:18.509 user 0m9.229s 00:07:18.509 sys 0m1.341s 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.509 ************************************ 00:07:18.509 END TEST bdev_error 00:07:18.509 ************************************ 00:07:18.509 14:55:44 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:18.769 14:55:44 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:18.769 14:55:44 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:07:18.769 14:55:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:18.769 14:55:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.769 14:55:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:18.769 ************************************ 00:07:18.769 START TEST bdev_stat 00:07:18.769 ************************************ 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48492 00:07:18.769 Process Bdev IO statistics testing pid: 48492 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48492' 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- 
# trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48492 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48492 ']' 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.769 14:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:18.769 [2024-07-12 14:55:44.370977] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:18.769 [2024-07-12 14:55:44.371175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:19.337 EAL: TSC is not safe to use in SMP mode 00:07:19.337 EAL: TSC is not invariant 00:07:19.337 [2024-07-12 14:55:44.901118] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.337 [2024-07-12 14:55:44.983129] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:19.337 [2024-07-12 14:55:44.983194] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:19.337 [2024-07-12 14:55:44.985822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.337 [2024-07-12 14:55:44.985814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:19.926 Malloc_STAT 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:19.926 
14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:19.926 [ 00:07:19.926 { 00:07:19.926 "name": "Malloc_STAT", 00:07:19.926 "aliases": [ 00:07:19.926 "d14e7e3f-405e-11ef-b2a4-e9dca065e82e" 00:07:19.926 ], 00:07:19.926 "product_name": "Malloc disk", 00:07:19.926 "block_size": 512, 00:07:19.926 "num_blocks": 262144, 00:07:19.926 "uuid": "d14e7e3f-405e-11ef-b2a4-e9dca065e82e", 00:07:19.926 "assigned_rate_limits": { 00:07:19.926 "rw_ios_per_sec": 0, 00:07:19.926 "rw_mbytes_per_sec": 0, 00:07:19.926 "r_mbytes_per_sec": 0, 00:07:19.926 "w_mbytes_per_sec": 0 00:07:19.926 }, 00:07:19.926 "claimed": false, 00:07:19.926 "zoned": false, 00:07:19.926 "supported_io_types": { 00:07:19.926 "read": true, 00:07:19.926 "write": true, 00:07:19.926 "unmap": true, 00:07:19.926 "flush": true, 00:07:19.926 "reset": true, 00:07:19.926 "nvme_admin": false, 00:07:19.926 "nvme_io": false, 00:07:19.926 "nvme_io_md": false, 00:07:19.926 "write_zeroes": true, 00:07:19.926 "zcopy": true, 00:07:19.926 "get_zone_info": false, 00:07:19.926 "zone_management": false, 00:07:19.926 "zone_append": false, 00:07:19.926 "compare": false, 00:07:19.926 "compare_and_write": false, 00:07:19.926 "abort": true, 00:07:19.926 "seek_hole": false, 00:07:19.926 "seek_data": false, 00:07:19.926 "copy": true, 00:07:19.926 "nvme_iov_md": false 00:07:19.926 }, 00:07:19.926 "memory_domains": [ 00:07:19.926 { 00:07:19.926 "dma_device_id": "system", 00:07:19.926 "dma_device_type": 1 00:07:19.926 }, 00:07:19.926 { 00:07:19.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.926 "dma_device_type": 2 00:07:19.926 } 00:07:19.926 ], 00:07:19.926 "driver_specific": {} 00:07:19.926 } 00:07:19.926 ] 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:07:19.926 14:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:19.926 Running I/O for 10 seconds... 
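The check that follows compares the aggregate read count for Malloc_STAT against the sum of the per-channel counts (two channels in this run, matching the 0x3 core mask). A minimal sketch of the same queries, using only the RPCs and jq filters that appear below:

    rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'          # io_count1
    rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[0].num_read_ops'    # per-channel counts,
    rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[1].num_read_ops'    # summed into io_count_per_channel_all
    rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'          # io_count2; the sum must land between the two aggregate snapshots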
00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:07:21.825 "tick_rate": 2199998543, 00:07:21.825 "ticks": 759621372070, 00:07:21.825 "bdevs": [ 00:07:21.825 { 00:07:21.825 "name": "Malloc_STAT", 00:07:21.825 "bytes_read": 13159666176, 00:07:21.825 "num_read_ops": 3212803, 00:07:21.825 "bytes_written": 0, 00:07:21.825 "num_write_ops": 0, 00:07:21.825 "bytes_unmapped": 0, 00:07:21.825 "num_unmap_ops": 0, 00:07:21.825 "bytes_copied": 0, 00:07:21.825 "num_copy_ops": 0, 00:07:21.825 "read_latency_ticks": 2276604320247, 00:07:21.825 "max_read_latency_ticks": 1134490, 00:07:21.825 "min_read_latency_ticks": 43208, 00:07:21.825 "write_latency_ticks": 0, 00:07:21.825 "max_write_latency_ticks": 0, 00:07:21.825 "min_write_latency_ticks": 0, 00:07:21.825 "unmap_latency_ticks": 0, 00:07:21.825 "max_unmap_latency_ticks": 0, 00:07:21.825 "min_unmap_latency_ticks": 0, 00:07:21.825 "copy_latency_ticks": 0, 00:07:21.825 "max_copy_latency_ticks": 0, 00:07:21.825 "min_copy_latency_ticks": 0, 00:07:21.825 "io_error": {} 00:07:21.825 } 00:07:21.825 ] 00:07:21.825 }' 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:07:21.825 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3212803 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:07:22.084 "tick_rate": 2199998543, 00:07:22.084 "ticks": 759677522871, 00:07:22.084 "name": "Malloc_STAT", 00:07:22.084 "channels": [ 00:07:22.084 { 00:07:22.084 "thread_id": 2, 00:07:22.084 "bytes_read": 6740246528, 00:07:22.084 "num_read_ops": 1645568, 00:07:22.084 "bytes_written": 0, 00:07:22.084 "num_write_ops": 0, 00:07:22.084 "bytes_unmapped": 0, 00:07:22.084 "num_unmap_ops": 0, 
00:07:22.084 "bytes_copied": 0, 00:07:22.084 "num_copy_ops": 0, 00:07:22.084 "read_latency_ticks": 1152651274265, 00:07:22.084 "max_read_latency_ticks": 1134490, 00:07:22.084 "min_read_latency_ticks": 658400, 00:07:22.084 "write_latency_ticks": 0, 00:07:22.084 "max_write_latency_ticks": 0, 00:07:22.084 "min_write_latency_ticks": 0, 00:07:22.084 "unmap_latency_ticks": 0, 00:07:22.084 "max_unmap_latency_ticks": 0, 00:07:22.084 "min_unmap_latency_ticks": 0, 00:07:22.084 "copy_latency_ticks": 0, 00:07:22.084 "max_copy_latency_ticks": 0, 00:07:22.084 "min_copy_latency_ticks": 0 00:07:22.084 }, 00:07:22.084 { 00:07:22.084 "thread_id": 3, 00:07:22.084 "bytes_read": 6581911552, 00:07:22.084 "num_read_ops": 1606912, 00:07:22.084 "bytes_written": 0, 00:07:22.084 "num_write_ops": 0, 00:07:22.084 "bytes_unmapped": 0, 00:07:22.084 "num_unmap_ops": 0, 00:07:22.084 "bytes_copied": 0, 00:07:22.084 "num_copy_ops": 0, 00:07:22.084 "read_latency_ticks": 1152817079126, 00:07:22.084 "max_read_latency_ticks": 1025816, 00:07:22.084 "min_read_latency_ticks": 681740, 00:07:22.084 "write_latency_ticks": 0, 00:07:22.084 "max_write_latency_ticks": 0, 00:07:22.084 "min_write_latency_ticks": 0, 00:07:22.084 "unmap_latency_ticks": 0, 00:07:22.084 "max_unmap_latency_ticks": 0, 00:07:22.084 "min_unmap_latency_ticks": 0, 00:07:22.084 "copy_latency_ticks": 0, 00:07:22.084 "max_copy_latency_ticks": 0, 00:07:22.084 "min_copy_latency_ticks": 0 00:07:22.084 } 00:07:22.084 ] 00:07:22.084 }' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1645568 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1645568 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1606912 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3252480 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:07:22.084 "tick_rate": 2199998543, 00:07:22.084 "ticks": 759752370627, 00:07:22.084 "bdevs": [ 00:07:22.084 { 00:07:22.084 "name": "Malloc_STAT", 00:07:22.084 "bytes_read": 13537153536, 00:07:22.084 "num_read_ops": 3304963, 00:07:22.084 "bytes_written": 0, 00:07:22.084 "num_write_ops": 0, 00:07:22.084 "bytes_unmapped": 0, 00:07:22.084 "num_unmap_ops": 0, 00:07:22.084 "bytes_copied": 0, 00:07:22.084 "num_copy_ops": 0, 00:07:22.084 "read_latency_ticks": 2343652729983, 00:07:22.084 "max_read_latency_ticks": 1134490, 00:07:22.084 "min_read_latency_ticks": 43208, 00:07:22.084 "write_latency_ticks": 0, 00:07:22.084 "max_write_latency_ticks": 0, 00:07:22.084 "min_write_latency_ticks": 0, 00:07:22.084 "unmap_latency_ticks": 0, 00:07:22.084 "max_unmap_latency_ticks": 0, 00:07:22.084 "min_unmap_latency_ticks": 0, 00:07:22.084 "copy_latency_ticks": 0, 00:07:22.084 "max_copy_latency_ticks": 0, 00:07:22.084 
"min_copy_latency_ticks": 0, 00:07:22.084 "io_error": {} 00:07:22.084 } 00:07:22.084 ] 00:07:22.084 }' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3304963 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3252480 -lt 3212803 ']' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3252480 -gt 3304963 ']' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 00:07:22.084 Latency(us) 00:07:22.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.084 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:22.084 Malloc_STAT : 2.10 802811.88 3135.98 0.00 0.00 318.62 56.09 517.59 00:07:22.084 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:22.084 Malloc_STAT : 2.10 783883.81 3062.05 0.00 0.00 326.31 61.91 467.32 00:07:22.084 =================================================================================================================== 00:07:22.084 Total : 1586695.69 6198.03 0.00 0.00 322.42 56.09 517.59 00:07:22.084 0 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48492 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48492 ']' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48492 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48492 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:22.084 killing process with pid 48492 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48492' 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 48492 00:07:22.084 Received shutdown signal, test time was about 2.139155 seconds 00:07:22.084 00:07:22.084 Latency(us) 00:07:22.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.084 =================================================================================================================== 00:07:22.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:22.084 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48492 00:07:22.344 14:55:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:07:22.344 00:07:22.344 real 0m3.543s 00:07:22.344 user 0m6.392s 00:07:22.344 sys 0m0.702s 00:07:22.344 14:55:47 blockdev_general.bdev_stat -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.344 14:55:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:22.344 ************************************ 00:07:22.344 END TEST bdev_stat 00:07:22.344 ************************************ 00:07:22.344 14:55:47 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:07:22.344 14:55:47 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:07:22.344 00:07:22.344 real 1m33.796s 00:07:22.344 user 4m30.661s 00:07:22.344 sys 0m24.976s 00:07:22.344 14:55:47 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.344 ************************************ 00:07:22.344 END TEST blockdev_general 00:07:22.344 ************************************ 00:07:22.344 14:55:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:22.344 14:55:47 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.344 14:55:47 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:22.344 14:55:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.344 14:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.344 14:55:47 -- common/autotest_common.sh@10 -- # set +x 00:07:22.344 ************************************ 00:07:22.344 START TEST bdev_raid 00:07:22.344 ************************************ 00:07:22.344 14:55:47 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:22.344 * Looking for test storage... 
00:07:22.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:22.344 14:55:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:07:22.344 14:55:48 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:07:22.344 14:55:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.344 14:55:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.344 14:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.344 ************************************ 00:07:22.344 START TEST raid0_resize_test 00:07:22.344 ************************************ 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48597 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48597' 00:07:22.344 Process raid pid: 48597 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48597 /var/tmp/spdk-raid.sock 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48597 ']' 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.344 14:55:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.344 [2024-07-12 14:55:48.145357] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:07:22.344 [2024-07-12 14:55:48.145551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:22.910 EAL: TSC is not safe to use in SMP mode 00:07:22.910 EAL: TSC is not invariant 00:07:22.910 [2024-07-12 14:55:48.710754] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.169 [2024-07-12 14:55:48.803947] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:23.169 [2024-07-12 14:55:48.806482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.169 [2024-07-12 14:55:48.807473] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.169 [2024-07-12 14:55:48.807491] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.427 14:55:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.427 14:55:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:07:23.427 14:55:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:23.686 Base_1 00:07:23.686 14:55:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:23.945 Base_2 00:07:23.945 14:55:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:24.203 [2024-07-12 14:55:49.896519] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:24.203 [2024-07-12 14:55:49.897159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:24.203 [2024-07-12 14:55:49.897194] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a21a634a00 00:07:24.203 [2024-07-12 14:55:49.897202] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.203 [2024-07-12 14:55:49.897252] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a21a697e20 00:07:24.203 [2024-07-12 14:55:49.897343] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a21a634a00 00:07:24.203 [2024-07-12 14:55:49.897350] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x12a21a634a00 00:07:24.203 [2024-07-12 14:55:49.897402] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.203 14:55:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:24.462 [2024-07-12 14:55:50.148517] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.462 [2024-07-12 14:55:50.148571] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:24.462 true 00:07:24.462 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:07:24.462 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:24.719 [2024-07-12 14:55:50.384553] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.719 
14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:07:24.719 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:07:24.719 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:07:24.719 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:24.977 [2024-07-12 14:55:50.692525] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.977 [2024-07-12 14:55:50.692562] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:24.977 [2024-07-12 14:55:50.692605] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:24.977 true 00:07:24.977 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:24.977 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:07:25.236 [2024-07-12 14:55:50.924561] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48597 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48597 ']' 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48597 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48597 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48597' 00:07:25.236 killing process with pid 48597 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48597 00:07:25.236 [2024-07-12 14:55:50.953676] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.236 [2024-07-12 14:55:50.953708] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.236 [2024-07-12 14:55:50.953724] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.236 [2024-07-12 14:55:50.953730] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a21a634a00 name Raid, state offline 00:07:25.236 14:55:50 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48597 00:07:25.236 [2024-07-12 14:55:50.953877] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.494 
14:55:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:07:25.494 00:07:25.494 real 0m2.987s 00:07:25.494 user 0m4.430s 00:07:25.494 sys 0m0.805s 00:07:25.494 14:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.494 ************************************ 00:07:25.494 END TEST raid0_resize_test 00:07:25.494 ************************************ 00:07:25.494 14:55:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 14:55:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:25.494 14:55:51 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:07:25.494 14:55:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:25.494 14:55:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:25.494 14:55:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:25.494 14:55:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.494 14:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 ************************************ 00:07:25.494 START TEST raid_state_function_test 00:07:25.494 ************************************ 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48647 00:07:25.494 Process raid pid: 48647 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48647' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48647 /var/tmp/spdk-raid.sock 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48647 ']' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.494 14:55:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.494 [2024-07-12 14:55:51.175018] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:25.494 [2024-07-12 14:55:51.175285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:26.058 EAL: TSC is not safe to use in SMP mode 00:07:26.058 EAL: TSC is not invariant 00:07:26.058 [2024-07-12 14:55:51.686472] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.058 [2024-07-12 14:55:51.766145] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:26.058 [2024-07-12 14:55:51.768326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.058 [2024-07-12 14:55:51.769252] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.058 [2024-07-12 14:55:51.769268] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.622 14:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.622 14:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:26.622 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:26.879 [2024-07-12 14:55:52.505036] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.879 [2024-07-12 14:55:52.505096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.879 [2024-07-12 14:55:52.505102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.879 [2024-07-12 14:55:52.505112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:26.879 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.137 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:27.137 "name": "Existed_Raid", 00:07:27.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.137 "strip_size_kb": 64, 00:07:27.137 "state": "configuring", 00:07:27.137 "raid_level": "raid0", 00:07:27.137 "superblock": false, 00:07:27.137 "num_base_bdevs": 2, 00:07:27.137 "num_base_bdevs_discovered": 0, 00:07:27.137 "num_base_bdevs_operational": 2, 00:07:27.137 "base_bdevs_list": [ 00:07:27.137 { 00:07:27.137 "name": "BaseBdev1", 00:07:27.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.137 "is_configured": false, 00:07:27.137 "data_offset": 0, 00:07:27.137 "data_size": 0 00:07:27.137 }, 00:07:27.137 { 00:07:27.137 "name": "BaseBdev2", 
00:07:27.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.137 "is_configured": false, 00:07:27.137 "data_offset": 0, 00:07:27.137 "data_size": 0 00:07:27.137 } 00:07:27.137 ] 00:07:27.137 }' 00:07:27.137 14:55:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:27.137 14:55:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.395 14:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:27.653 [2024-07-12 14:55:53.301029] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.653 [2024-07-12 14:55:53.301052] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x198b19834500 name Existed_Raid, state configuring 00:07:27.653 14:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:27.911 [2024-07-12 14:55:53.533040] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.911 [2024-07-12 14:55:53.533082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.912 [2024-07-12 14:55:53.533096] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.912 [2024-07-12 14:55:53.533107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.912 14:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.169 [2024-07-12 14:55:53.806055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.169 BaseBdev1 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:28.169 14:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:28.425 14:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.685 [ 00:07:28.685 { 00:07:28.685 "name": "BaseBdev1", 00:07:28.685 "aliases": [ 00:07:28.685 "d649fa65-405e-11ef-b2a4-e9dca065e82e" 00:07:28.685 ], 00:07:28.685 "product_name": "Malloc disk", 00:07:28.685 "block_size": 512, 00:07:28.685 "num_blocks": 65536, 00:07:28.685 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:28.685 "assigned_rate_limits": { 00:07:28.685 "rw_ios_per_sec": 0, 00:07:28.685 "rw_mbytes_per_sec": 0, 00:07:28.685 "r_mbytes_per_sec": 0, 00:07:28.685 "w_mbytes_per_sec": 0 00:07:28.685 }, 
00:07:28.685 "claimed": true, 00:07:28.685 "claim_type": "exclusive_write", 00:07:28.685 "zoned": false, 00:07:28.685 "supported_io_types": { 00:07:28.685 "read": true, 00:07:28.685 "write": true, 00:07:28.685 "unmap": true, 00:07:28.685 "flush": true, 00:07:28.685 "reset": true, 00:07:28.685 "nvme_admin": false, 00:07:28.685 "nvme_io": false, 00:07:28.685 "nvme_io_md": false, 00:07:28.685 "write_zeroes": true, 00:07:28.685 "zcopy": true, 00:07:28.685 "get_zone_info": false, 00:07:28.685 "zone_management": false, 00:07:28.685 "zone_append": false, 00:07:28.685 "compare": false, 00:07:28.685 "compare_and_write": false, 00:07:28.685 "abort": true, 00:07:28.685 "seek_hole": false, 00:07:28.685 "seek_data": false, 00:07:28.685 "copy": true, 00:07:28.685 "nvme_iov_md": false 00:07:28.685 }, 00:07:28.685 "memory_domains": [ 00:07:28.685 { 00:07:28.685 "dma_device_id": "system", 00:07:28.685 "dma_device_type": 1 00:07:28.685 }, 00:07:28.685 { 00:07:28.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.685 "dma_device_type": 2 00:07:28.685 } 00:07:28.685 ], 00:07:28.685 "driver_specific": {} 00:07:28.685 } 00:07:28.685 ] 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.685 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.943 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:28.943 "name": "Existed_Raid", 00:07:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.943 "strip_size_kb": 64, 00:07:28.943 "state": "configuring", 00:07:28.943 "raid_level": "raid0", 00:07:28.943 "superblock": false, 00:07:28.943 "num_base_bdevs": 2, 00:07:28.943 "num_base_bdevs_discovered": 1, 00:07:28.943 "num_base_bdevs_operational": 2, 00:07:28.943 "base_bdevs_list": [ 00:07:28.943 { 00:07:28.943 "name": "BaseBdev1", 00:07:28.943 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:28.943 "is_configured": true, 00:07:28.943 "data_offset": 0, 00:07:28.943 "data_size": 65536 00:07:28.943 }, 00:07:28.943 { 00:07:28.943 "name": "BaseBdev2", 00:07:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:28.943 "is_configured": false, 00:07:28.943 "data_offset": 0, 00:07:28.943 "data_size": 0 00:07:28.943 } 00:07:28.943 ] 00:07:28.943 }' 00:07:28.943 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:28.943 14:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 14:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:29.480 [2024-07-12 14:55:55.149072] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.480 [2024-07-12 14:55:55.149107] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x198b19834500 name Existed_Raid, state configuring 00:07:29.480 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:29.739 [2024-07-12 14:55:55.389092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.739 [2024-07-12 14:55:55.389873] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.739 [2024-07-12 14:55:55.389912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.739 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.996 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:29.996 "name": "Existed_Raid", 00:07:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.996 "strip_size_kb": 64, 00:07:29.996 "state": "configuring", 00:07:29.996 "raid_level": "raid0", 00:07:29.996 "superblock": false, 00:07:29.996 "num_base_bdevs": 2, 00:07:29.996 "num_base_bdevs_discovered": 1, 00:07:29.996 
"num_base_bdevs_operational": 2, 00:07:29.996 "base_bdevs_list": [ 00:07:29.996 { 00:07:29.996 "name": "BaseBdev1", 00:07:29.996 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:29.996 "is_configured": true, 00:07:29.996 "data_offset": 0, 00:07:29.996 "data_size": 65536 00:07:29.996 }, 00:07:29.996 { 00:07:29.996 "name": "BaseBdev2", 00:07:29.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.996 "is_configured": false, 00:07:29.996 "data_offset": 0, 00:07:29.996 "data_size": 0 00:07:29.996 } 00:07:29.996 ] 00:07:29.996 }' 00:07:29.996 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:29.996 14:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.253 14:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.511 [2024-07-12 14:55:56.165232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.511 [2024-07-12 14:55:56.165261] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x198b19834a00 00:07:30.511 [2024-07-12 14:55:56.165265] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:30.511 [2024-07-12 14:55:56.165288] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198b19897e20 00:07:30.511 [2024-07-12 14:55:56.165379] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x198b19834a00 00:07:30.511 [2024-07-12 14:55:56.165383] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x198b19834a00 00:07:30.511 [2024-07-12 14:55:56.165422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.511 BaseBdev2 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:30.511 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:30.768 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:31.025 [ 00:07:31.025 { 00:07:31.025 "name": "BaseBdev2", 00:07:31.025 "aliases": [ 00:07:31.025 "d7b21766-405e-11ef-b2a4-e9dca065e82e" 00:07:31.025 ], 00:07:31.025 "product_name": "Malloc disk", 00:07:31.025 "block_size": 512, 00:07:31.025 "num_blocks": 65536, 00:07:31.025 "uuid": "d7b21766-405e-11ef-b2a4-e9dca065e82e", 00:07:31.025 "assigned_rate_limits": { 00:07:31.025 "rw_ios_per_sec": 0, 00:07:31.025 "rw_mbytes_per_sec": 0, 00:07:31.025 "r_mbytes_per_sec": 0, 00:07:31.025 "w_mbytes_per_sec": 0 00:07:31.025 }, 00:07:31.025 "claimed": true, 00:07:31.025 "claim_type": "exclusive_write", 00:07:31.025 "zoned": 
false, 00:07:31.025 "supported_io_types": { 00:07:31.025 "read": true, 00:07:31.025 "write": true, 00:07:31.025 "unmap": true, 00:07:31.025 "flush": true, 00:07:31.025 "reset": true, 00:07:31.025 "nvme_admin": false, 00:07:31.025 "nvme_io": false, 00:07:31.025 "nvme_io_md": false, 00:07:31.025 "write_zeroes": true, 00:07:31.025 "zcopy": true, 00:07:31.025 "get_zone_info": false, 00:07:31.025 "zone_management": false, 00:07:31.025 "zone_append": false, 00:07:31.025 "compare": false, 00:07:31.025 "compare_and_write": false, 00:07:31.025 "abort": true, 00:07:31.025 "seek_hole": false, 00:07:31.025 "seek_data": false, 00:07:31.025 "copy": true, 00:07:31.025 "nvme_iov_md": false 00:07:31.025 }, 00:07:31.025 "memory_domains": [ 00:07:31.025 { 00:07:31.025 "dma_device_id": "system", 00:07:31.025 "dma_device_type": 1 00:07:31.025 }, 00:07:31.025 { 00:07:31.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.025 "dma_device_type": 2 00:07:31.025 } 00:07:31.025 ], 00:07:31.025 "driver_specific": {} 00:07:31.025 } 00:07:31.025 ] 00:07:31.025 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.026 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.283 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.283 "name": "Existed_Raid", 00:07:31.283 "uuid": "d7b21de7-405e-11ef-b2a4-e9dca065e82e", 00:07:31.283 "strip_size_kb": 64, 00:07:31.283 "state": "online", 00:07:31.283 "raid_level": "raid0", 00:07:31.283 "superblock": false, 00:07:31.283 "num_base_bdevs": 2, 00:07:31.283 "num_base_bdevs_discovered": 2, 00:07:31.283 "num_base_bdevs_operational": 2, 00:07:31.283 "base_bdevs_list": [ 00:07:31.283 { 00:07:31.283 "name": "BaseBdev1", 00:07:31.283 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:31.283 "is_configured": true, 00:07:31.283 "data_offset": 0, 00:07:31.283 "data_size": 65536 00:07:31.283 }, 00:07:31.283 { 
00:07:31.283 "name": "BaseBdev2", 00:07:31.283 "uuid": "d7b21766-405e-11ef-b2a4-e9dca065e82e", 00:07:31.283 "is_configured": true, 00:07:31.283 "data_offset": 0, 00:07:31.283 "data_size": 65536 00:07:31.283 } 00:07:31.283 ] 00:07:31.283 }' 00:07:31.283 14:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.283 14:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:31.570 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:31.829 [2024-07-12 14:55:57.525161] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:31.829 "name": "Existed_Raid", 00:07:31.829 "aliases": [ 00:07:31.829 "d7b21de7-405e-11ef-b2a4-e9dca065e82e" 00:07:31.829 ], 00:07:31.829 "product_name": "Raid Volume", 00:07:31.829 "block_size": 512, 00:07:31.829 "num_blocks": 131072, 00:07:31.829 "uuid": "d7b21de7-405e-11ef-b2a4-e9dca065e82e", 00:07:31.829 "assigned_rate_limits": { 00:07:31.829 "rw_ios_per_sec": 0, 00:07:31.829 "rw_mbytes_per_sec": 0, 00:07:31.829 "r_mbytes_per_sec": 0, 00:07:31.829 "w_mbytes_per_sec": 0 00:07:31.829 }, 00:07:31.829 "claimed": false, 00:07:31.829 "zoned": false, 00:07:31.829 "supported_io_types": { 00:07:31.829 "read": true, 00:07:31.829 "write": true, 00:07:31.829 "unmap": true, 00:07:31.829 "flush": true, 00:07:31.829 "reset": true, 00:07:31.829 "nvme_admin": false, 00:07:31.829 "nvme_io": false, 00:07:31.829 "nvme_io_md": false, 00:07:31.829 "write_zeroes": true, 00:07:31.829 "zcopy": false, 00:07:31.829 "get_zone_info": false, 00:07:31.829 "zone_management": false, 00:07:31.829 "zone_append": false, 00:07:31.829 "compare": false, 00:07:31.829 "compare_and_write": false, 00:07:31.829 "abort": false, 00:07:31.829 "seek_hole": false, 00:07:31.829 "seek_data": false, 00:07:31.829 "copy": false, 00:07:31.829 "nvme_iov_md": false 00:07:31.829 }, 00:07:31.829 "memory_domains": [ 00:07:31.829 { 00:07:31.829 "dma_device_id": "system", 00:07:31.829 "dma_device_type": 1 00:07:31.829 }, 00:07:31.829 { 00:07:31.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.829 "dma_device_type": 2 00:07:31.829 }, 00:07:31.829 { 00:07:31.829 "dma_device_id": "system", 00:07:31.829 "dma_device_type": 1 00:07:31.829 }, 00:07:31.829 { 00:07:31.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.829 "dma_device_type": 2 00:07:31.829 } 00:07:31.829 ], 00:07:31.829 "driver_specific": { 00:07:31.829 "raid": { 00:07:31.829 "uuid": "d7b21de7-405e-11ef-b2a4-e9dca065e82e", 00:07:31.829 "strip_size_kb": 64, 00:07:31.829 "state": 
"online", 00:07:31.829 "raid_level": "raid0", 00:07:31.829 "superblock": false, 00:07:31.829 "num_base_bdevs": 2, 00:07:31.829 "num_base_bdevs_discovered": 2, 00:07:31.829 "num_base_bdevs_operational": 2, 00:07:31.829 "base_bdevs_list": [ 00:07:31.829 { 00:07:31.829 "name": "BaseBdev1", 00:07:31.829 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:31.829 "is_configured": true, 00:07:31.829 "data_offset": 0, 00:07:31.829 "data_size": 65536 00:07:31.829 }, 00:07:31.829 { 00:07:31.829 "name": "BaseBdev2", 00:07:31.829 "uuid": "d7b21766-405e-11ef-b2a4-e9dca065e82e", 00:07:31.829 "is_configured": true, 00:07:31.829 "data_offset": 0, 00:07:31.829 "data_size": 65536 00:07:31.829 } 00:07:31.829 ] 00:07:31.829 } 00:07:31.829 } 00:07:31.829 }' 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:31.829 BaseBdev2' 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:31.829 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.121 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.121 "name": "BaseBdev1", 00:07:32.121 "aliases": [ 00:07:32.121 "d649fa65-405e-11ef-b2a4-e9dca065e82e" 00:07:32.121 ], 00:07:32.121 "product_name": "Malloc disk", 00:07:32.121 "block_size": 512, 00:07:32.121 "num_blocks": 65536, 00:07:32.121 "uuid": "d649fa65-405e-11ef-b2a4-e9dca065e82e", 00:07:32.121 "assigned_rate_limits": { 00:07:32.121 "rw_ios_per_sec": 0, 00:07:32.121 "rw_mbytes_per_sec": 0, 00:07:32.121 "r_mbytes_per_sec": 0, 00:07:32.121 "w_mbytes_per_sec": 0 00:07:32.121 }, 00:07:32.121 "claimed": true, 00:07:32.121 "claim_type": "exclusive_write", 00:07:32.121 "zoned": false, 00:07:32.121 "supported_io_types": { 00:07:32.121 "read": true, 00:07:32.121 "write": true, 00:07:32.121 "unmap": true, 00:07:32.121 "flush": true, 00:07:32.121 "reset": true, 00:07:32.121 "nvme_admin": false, 00:07:32.121 "nvme_io": false, 00:07:32.121 "nvme_io_md": false, 00:07:32.121 "write_zeroes": true, 00:07:32.121 "zcopy": true, 00:07:32.121 "get_zone_info": false, 00:07:32.121 "zone_management": false, 00:07:32.121 "zone_append": false, 00:07:32.121 "compare": false, 00:07:32.121 "compare_and_write": false, 00:07:32.121 "abort": true, 00:07:32.121 "seek_hole": false, 00:07:32.121 "seek_data": false, 00:07:32.121 "copy": true, 00:07:32.121 "nvme_iov_md": false 00:07:32.121 }, 00:07:32.121 "memory_domains": [ 00:07:32.121 { 00:07:32.121 "dma_device_id": "system", 00:07:32.121 "dma_device_type": 1 00:07:32.122 }, 00:07:32.122 { 00:07:32.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.122 "dma_device_type": 2 00:07:32.122 } 00:07:32.122 ], 00:07:32.122 "driver_specific": {} 00:07:32.122 }' 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:32.122 14:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.380 "name": "BaseBdev2", 00:07:32.380 "aliases": [ 00:07:32.380 "d7b21766-405e-11ef-b2a4-e9dca065e82e" 00:07:32.380 ], 00:07:32.380 "product_name": "Malloc disk", 00:07:32.380 "block_size": 512, 00:07:32.380 "num_blocks": 65536, 00:07:32.380 "uuid": "d7b21766-405e-11ef-b2a4-e9dca065e82e", 00:07:32.380 "assigned_rate_limits": { 00:07:32.380 "rw_ios_per_sec": 0, 00:07:32.380 "rw_mbytes_per_sec": 0, 00:07:32.380 "r_mbytes_per_sec": 0, 00:07:32.380 "w_mbytes_per_sec": 0 00:07:32.380 }, 00:07:32.380 "claimed": true, 00:07:32.380 "claim_type": "exclusive_write", 00:07:32.380 "zoned": false, 00:07:32.380 "supported_io_types": { 00:07:32.380 "read": true, 00:07:32.380 "write": true, 00:07:32.380 "unmap": true, 00:07:32.380 "flush": true, 00:07:32.380 "reset": true, 00:07:32.380 "nvme_admin": false, 00:07:32.380 "nvme_io": false, 00:07:32.380 "nvme_io_md": false, 00:07:32.380 "write_zeroes": true, 00:07:32.380 "zcopy": true, 00:07:32.380 "get_zone_info": false, 00:07:32.380 "zone_management": false, 00:07:32.380 "zone_append": false, 00:07:32.380 "compare": false, 00:07:32.380 "compare_and_write": false, 00:07:32.380 "abort": true, 00:07:32.380 "seek_hole": false, 00:07:32.380 "seek_data": false, 00:07:32.380 "copy": true, 00:07:32.380 "nvme_iov_md": false 00:07:32.380 }, 00:07:32.380 "memory_domains": [ 00:07:32.380 { 00:07:32.380 "dma_device_id": "system", 00:07:32.380 "dma_device_type": 1 00:07:32.380 }, 00:07:32.380 { 00:07:32.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.380 "dma_device_type": 2 00:07:32.380 } 00:07:32.380 ], 00:07:32.380 "driver_specific": {} 00:07:32.380 }' 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.380 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:32.639 [2024-07-12 14:55:58.437151] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.639 [2024-07-12 14:55:58.437179] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.639 [2024-07-12 14:55:58.437193] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:32.897 "name": "Existed_Raid", 00:07:32.897 "uuid": "d7b21de7-405e-11ef-b2a4-e9dca065e82e", 00:07:32.897 "strip_size_kb": 64, 00:07:32.897 "state": "offline", 00:07:32.897 "raid_level": "raid0", 00:07:32.897 "superblock": false, 00:07:32.897 
"num_base_bdevs": 2, 00:07:32.897 "num_base_bdevs_discovered": 1, 00:07:32.897 "num_base_bdevs_operational": 1, 00:07:32.897 "base_bdevs_list": [ 00:07:32.897 { 00:07:32.897 "name": null, 00:07:32.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.897 "is_configured": false, 00:07:32.897 "data_offset": 0, 00:07:32.897 "data_size": 65536 00:07:32.897 }, 00:07:32.897 { 00:07:32.897 "name": "BaseBdev2", 00:07:32.897 "uuid": "d7b21766-405e-11ef-b2a4-e9dca065e82e", 00:07:32.897 "is_configured": true, 00:07:32.897 "data_offset": 0, 00:07:32.897 "data_size": 65536 00:07:32.897 } 00:07:32.897 ] 00:07:32.897 }' 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:32.897 14:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.153 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:33.153 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:33.411 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.411 14:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:33.669 [2024-07-12 14:55:59.458871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:33.669 [2024-07-12 14:55:59.458904] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x198b19834a00 name Existed_Raid, state offline 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.669 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48647 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48647 ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48647 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # ps -c -o command 48647 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:34.234 killing process with pid 48647 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48647' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48647 00:07:34.234 [2024-07-12 14:55:59.765679] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.234 [2024-07-12 14:55:59.765711] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48647 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:34.234 00:07:34.234 real 0m8.772s 00:07:34.234 user 0m15.298s 00:07:34.234 sys 0m1.492s 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.234 ************************************ 00:07:34.234 END TEST raid_state_function_test 00:07:34.234 ************************************ 00:07:34.234 14:55:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:34.234 14:55:59 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:34.234 14:55:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:34.234 14:55:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.234 14:55:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.234 ************************************ 00:07:34.234 START TEST raid_state_function_test_sb 00:07:34.234 ************************************ 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:34.234 14:55:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48918 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48918' 00:07:34.234 Process raid pid: 48918 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48918 /var/tmp/spdk-raid.sock 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48918 ']' 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:34.234 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:34.235 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:34.235 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.235 14:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.235 [2024-07-12 14:55:59.997862] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:34.235 [2024-07-12 14:55:59.998121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:34.798 EAL: TSC is not safe to use in SMP mode 00:07:34.798 EAL: TSC is not invariant 00:07:34.798 [2024-07-12 14:56:00.526610] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.798 [2024-07-12 14:56:00.608038] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:34.798 [2024-07-12 14:56:00.610099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.798 [2024-07-12 14:56:00.610834] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.798 [2024-07-12 14:56:00.610848] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.363 14:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.363 14:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:35.363 14:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:35.620 [2024-07-12 14:56:01.222487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.620 [2024-07-12 14:56:01.222537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.620 [2024-07-12 14:56:01.222543] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.620 [2024-07-12 14:56:01.222551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.620 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:35.620 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:35.620 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:35.620 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.621 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.877 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:35.877 "name": "Existed_Raid", 00:07:35.877 "uuid": "dab5c934-405e-11ef-b2a4-e9dca065e82e", 00:07:35.877 "strip_size_kb": 64, 00:07:35.877 "state": "configuring", 00:07:35.877 "raid_level": "raid0", 00:07:35.877 "superblock": true, 00:07:35.877 "num_base_bdevs": 2, 00:07:35.877 "num_base_bdevs_discovered": 0, 00:07:35.877 "num_base_bdevs_operational": 2, 00:07:35.877 "base_bdevs_list": [ 00:07:35.877 { 00:07:35.877 "name": "BaseBdev1", 00:07:35.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.877 "is_configured": false, 00:07:35.877 "data_offset": 0, 00:07:35.877 "data_size": 0 00:07:35.877 }, 
00:07:35.877 { 00:07:35.877 "name": "BaseBdev2", 00:07:35.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.877 "is_configured": false, 00:07:35.877 "data_offset": 0, 00:07:35.877 "data_size": 0 00:07:35.877 } 00:07:35.877 ] 00:07:35.877 }' 00:07:35.877 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:35.877 14:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.133 14:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:36.391 [2024-07-12 14:56:02.078476] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.391 [2024-07-12 14:56:02.078500] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aa237634500 name Existed_Raid, state configuring 00:07:36.391 14:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:36.648 [2024-07-12 14:56:02.314490] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.648 [2024-07-12 14:56:02.314532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.648 [2024-07-12 14:56:02.314537] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.648 [2024-07-12 14:56:02.314545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.648 14:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.906 [2024-07-12 14:56:02.543494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.906 BaseBdev1 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:36.906 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:37.163 14:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.421 [ 00:07:37.421 { 00:07:37.421 "name": "BaseBdev1", 00:07:37.421 "aliases": [ 00:07:37.421 "db7f3451-405e-11ef-b2a4-e9dca065e82e" 00:07:37.421 ], 00:07:37.421 "product_name": "Malloc disk", 00:07:37.421 "block_size": 512, 00:07:37.421 "num_blocks": 65536, 00:07:37.421 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:37.421 "assigned_rate_limits": { 00:07:37.421 "rw_ios_per_sec": 0, 00:07:37.421 "rw_mbytes_per_sec": 
0, 00:07:37.421 "r_mbytes_per_sec": 0, 00:07:37.421 "w_mbytes_per_sec": 0 00:07:37.421 }, 00:07:37.421 "claimed": true, 00:07:37.421 "claim_type": "exclusive_write", 00:07:37.421 "zoned": false, 00:07:37.421 "supported_io_types": { 00:07:37.421 "read": true, 00:07:37.421 "write": true, 00:07:37.421 "unmap": true, 00:07:37.421 "flush": true, 00:07:37.421 "reset": true, 00:07:37.421 "nvme_admin": false, 00:07:37.421 "nvme_io": false, 00:07:37.421 "nvme_io_md": false, 00:07:37.421 "write_zeroes": true, 00:07:37.421 "zcopy": true, 00:07:37.422 "get_zone_info": false, 00:07:37.422 "zone_management": false, 00:07:37.422 "zone_append": false, 00:07:37.422 "compare": false, 00:07:37.422 "compare_and_write": false, 00:07:37.422 "abort": true, 00:07:37.422 "seek_hole": false, 00:07:37.422 "seek_data": false, 00:07:37.422 "copy": true, 00:07:37.422 "nvme_iov_md": false 00:07:37.422 }, 00:07:37.422 "memory_domains": [ 00:07:37.422 { 00:07:37.422 "dma_device_id": "system", 00:07:37.422 "dma_device_type": 1 00:07:37.422 }, 00:07:37.422 { 00:07:37.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.422 "dma_device_type": 2 00:07:37.422 } 00:07:37.422 ], 00:07:37.422 "driver_specific": {} 00:07:37.422 } 00:07:37.422 ] 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.422 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.680 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.680 "name": "Existed_Raid", 00:07:37.680 "uuid": "db5c6999-405e-11ef-b2a4-e9dca065e82e", 00:07:37.680 "strip_size_kb": 64, 00:07:37.680 "state": "configuring", 00:07:37.680 "raid_level": "raid0", 00:07:37.680 "superblock": true, 00:07:37.680 "num_base_bdevs": 2, 00:07:37.680 "num_base_bdevs_discovered": 1, 00:07:37.680 "num_base_bdevs_operational": 2, 00:07:37.680 "base_bdevs_list": [ 00:07:37.680 { 00:07:37.680 "name": "BaseBdev1", 00:07:37.680 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:37.680 "is_configured": true, 00:07:37.680 "data_offset": 2048, 00:07:37.680 "data_size": 
63488 00:07:37.680 }, 00:07:37.680 { 00:07:37.680 "name": "BaseBdev2", 00:07:37.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.680 "is_configured": false, 00:07:37.680 "data_offset": 0, 00:07:37.680 "data_size": 0 00:07:37.680 } 00:07:37.680 ] 00:07:37.680 }' 00:07:37.680 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:37.680 14:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.939 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:38.197 [2024-07-12 14:56:03.858517] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.197 [2024-07-12 14:56:03.858553] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aa237634500 name Existed_Raid, state configuring 00:07:38.197 14:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:38.455 [2024-07-12 14:56:04.098541] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.455 [2024-07-12 14:56:04.099424] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.455 [2024-07-12 14:56:04.099478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.455 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.713 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:38.713 "name": "Existed_Raid", 00:07:38.713 "uuid": "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e", 00:07:38.713 "strip_size_kb": 64, 00:07:38.713 
"state": "configuring", 00:07:38.713 "raid_level": "raid0", 00:07:38.713 "superblock": true, 00:07:38.713 "num_base_bdevs": 2, 00:07:38.713 "num_base_bdevs_discovered": 1, 00:07:38.713 "num_base_bdevs_operational": 2, 00:07:38.713 "base_bdevs_list": [ 00:07:38.713 { 00:07:38.713 "name": "BaseBdev1", 00:07:38.713 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:38.713 "is_configured": true, 00:07:38.713 "data_offset": 2048, 00:07:38.713 "data_size": 63488 00:07:38.713 }, 00:07:38.713 { 00:07:38.713 "name": "BaseBdev2", 00:07:38.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.713 "is_configured": false, 00:07:38.713 "data_offset": 0, 00:07:38.713 "data_size": 0 00:07:38.713 } 00:07:38.713 ] 00:07:38.713 }' 00:07:38.713 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:38.713 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.971 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.229 [2024-07-12 14:56:04.922728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.229 [2024-07-12 14:56:04.922840] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2aa237634a00 00:07:39.229 [2024-07-12 14:56:04.922847] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.229 [2024-07-12 14:56:04.922869] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2aa237697e20 00:07:39.229 [2024-07-12 14:56:04.922918] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2aa237634a00 00:07:39.229 [2024-07-12 14:56:04.922922] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2aa237634a00 00:07:39.229 [2024-07-12 14:56:04.923006] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.229 BaseBdev2 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:39.229 14:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:39.488 14:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.746 [ 00:07:39.746 { 00:07:39.746 "name": "BaseBdev2", 00:07:39.746 "aliases": [ 00:07:39.746 "dcea5f73-405e-11ef-b2a4-e9dca065e82e" 00:07:39.746 ], 00:07:39.746 "product_name": "Malloc disk", 00:07:39.746 "block_size": 512, 00:07:39.746 "num_blocks": 65536, 00:07:39.746 "uuid": "dcea5f73-405e-11ef-b2a4-e9dca065e82e", 00:07:39.746 "assigned_rate_limits": { 00:07:39.746 "rw_ios_per_sec": 0, 
00:07:39.746 "rw_mbytes_per_sec": 0, 00:07:39.746 "r_mbytes_per_sec": 0, 00:07:39.746 "w_mbytes_per_sec": 0 00:07:39.746 }, 00:07:39.746 "claimed": true, 00:07:39.746 "claim_type": "exclusive_write", 00:07:39.746 "zoned": false, 00:07:39.746 "supported_io_types": { 00:07:39.746 "read": true, 00:07:39.746 "write": true, 00:07:39.746 "unmap": true, 00:07:39.746 "flush": true, 00:07:39.746 "reset": true, 00:07:39.746 "nvme_admin": false, 00:07:39.746 "nvme_io": false, 00:07:39.746 "nvme_io_md": false, 00:07:39.746 "write_zeroes": true, 00:07:39.746 "zcopy": true, 00:07:39.746 "get_zone_info": false, 00:07:39.746 "zone_management": false, 00:07:39.746 "zone_append": false, 00:07:39.746 "compare": false, 00:07:39.746 "compare_and_write": false, 00:07:39.746 "abort": true, 00:07:39.746 "seek_hole": false, 00:07:39.746 "seek_data": false, 00:07:39.746 "copy": true, 00:07:39.746 "nvme_iov_md": false 00:07:39.746 }, 00:07:39.746 "memory_domains": [ 00:07:39.746 { 00:07:39.746 "dma_device_id": "system", 00:07:39.746 "dma_device_type": 1 00:07:39.746 }, 00:07:39.746 { 00:07:39.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.746 "dma_device_type": 2 00:07:39.746 } 00:07:39.746 ], 00:07:39.746 "driver_specific": {} 00:07:39.746 } 00:07:39.746 ] 00:07:39.746 14:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.747 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.005 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.005 "name": "Existed_Raid", 00:07:40.005 "uuid": "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e", 00:07:40.005 "strip_size_kb": 64, 00:07:40.005 "state": "online", 00:07:40.005 "raid_level": "raid0", 00:07:40.005 "superblock": true, 00:07:40.005 "num_base_bdevs": 2, 00:07:40.005 "num_base_bdevs_discovered": 2, 00:07:40.005 "num_base_bdevs_operational": 2, 
00:07:40.005 "base_bdevs_list": [ 00:07:40.005 { 00:07:40.005 "name": "BaseBdev1", 00:07:40.005 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:40.005 "is_configured": true, 00:07:40.005 "data_offset": 2048, 00:07:40.005 "data_size": 63488 00:07:40.005 }, 00:07:40.005 { 00:07:40.005 "name": "BaseBdev2", 00:07:40.005 "uuid": "dcea5f73-405e-11ef-b2a4-e9dca065e82e", 00:07:40.005 "is_configured": true, 00:07:40.005 "data_offset": 2048, 00:07:40.005 "data_size": 63488 00:07:40.005 } 00:07:40.005 ] 00:07:40.005 }' 00:07:40.005 14:56:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.005 14:56:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:40.264 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:40.571 [2024-07-12 14:56:06.250649] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.571 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:40.571 "name": "Existed_Raid", 00:07:40.571 "aliases": [ 00:07:40.571 "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e" 00:07:40.571 ], 00:07:40.571 "product_name": "Raid Volume", 00:07:40.571 "block_size": 512, 00:07:40.571 "num_blocks": 126976, 00:07:40.571 "uuid": "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e", 00:07:40.571 "assigned_rate_limits": { 00:07:40.571 "rw_ios_per_sec": 0, 00:07:40.571 "rw_mbytes_per_sec": 0, 00:07:40.571 "r_mbytes_per_sec": 0, 00:07:40.571 "w_mbytes_per_sec": 0 00:07:40.571 }, 00:07:40.571 "claimed": false, 00:07:40.571 "zoned": false, 00:07:40.571 "supported_io_types": { 00:07:40.571 "read": true, 00:07:40.571 "write": true, 00:07:40.571 "unmap": true, 00:07:40.571 "flush": true, 00:07:40.571 "reset": true, 00:07:40.571 "nvme_admin": false, 00:07:40.571 "nvme_io": false, 00:07:40.571 "nvme_io_md": false, 00:07:40.571 "write_zeroes": true, 00:07:40.571 "zcopy": false, 00:07:40.571 "get_zone_info": false, 00:07:40.571 "zone_management": false, 00:07:40.571 "zone_append": false, 00:07:40.571 "compare": false, 00:07:40.571 "compare_and_write": false, 00:07:40.571 "abort": false, 00:07:40.571 "seek_hole": false, 00:07:40.571 "seek_data": false, 00:07:40.571 "copy": false, 00:07:40.571 "nvme_iov_md": false 00:07:40.571 }, 00:07:40.571 "memory_domains": [ 00:07:40.571 { 00:07:40.571 "dma_device_id": "system", 00:07:40.571 "dma_device_type": 1 00:07:40.571 }, 00:07:40.571 { 00:07:40.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.571 "dma_device_type": 2 00:07:40.571 }, 00:07:40.571 { 00:07:40.571 "dma_device_id": "system", 00:07:40.571 "dma_device_type": 1 00:07:40.571 
}, 00:07:40.571 { 00:07:40.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.571 "dma_device_type": 2 00:07:40.571 } 00:07:40.571 ], 00:07:40.571 "driver_specific": { 00:07:40.571 "raid": { 00:07:40.571 "uuid": "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e", 00:07:40.571 "strip_size_kb": 64, 00:07:40.571 "state": "online", 00:07:40.571 "raid_level": "raid0", 00:07:40.571 "superblock": true, 00:07:40.571 "num_base_bdevs": 2, 00:07:40.571 "num_base_bdevs_discovered": 2, 00:07:40.571 "num_base_bdevs_operational": 2, 00:07:40.571 "base_bdevs_list": [ 00:07:40.571 { 00:07:40.571 "name": "BaseBdev1", 00:07:40.571 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:40.571 "is_configured": true, 00:07:40.571 "data_offset": 2048, 00:07:40.571 "data_size": 63488 00:07:40.571 }, 00:07:40.571 { 00:07:40.571 "name": "BaseBdev2", 00:07:40.571 "uuid": "dcea5f73-405e-11ef-b2a4-e9dca065e82e", 00:07:40.571 "is_configured": true, 00:07:40.571 "data_offset": 2048, 00:07:40.571 "data_size": 63488 00:07:40.571 } 00:07:40.571 ] 00:07:40.571 } 00:07:40.571 } 00:07:40.571 }' 00:07:40.571 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.571 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:40.571 BaseBdev2' 00:07:40.572 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:40.572 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:40.572 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:40.832 "name": "BaseBdev1", 00:07:40.832 "aliases": [ 00:07:40.832 "db7f3451-405e-11ef-b2a4-e9dca065e82e" 00:07:40.832 ], 00:07:40.832 "product_name": "Malloc disk", 00:07:40.832 "block_size": 512, 00:07:40.832 "num_blocks": 65536, 00:07:40.832 "uuid": "db7f3451-405e-11ef-b2a4-e9dca065e82e", 00:07:40.832 "assigned_rate_limits": { 00:07:40.832 "rw_ios_per_sec": 0, 00:07:40.832 "rw_mbytes_per_sec": 0, 00:07:40.832 "r_mbytes_per_sec": 0, 00:07:40.832 "w_mbytes_per_sec": 0 00:07:40.832 }, 00:07:40.832 "claimed": true, 00:07:40.832 "claim_type": "exclusive_write", 00:07:40.832 "zoned": false, 00:07:40.832 "supported_io_types": { 00:07:40.832 "read": true, 00:07:40.832 "write": true, 00:07:40.832 "unmap": true, 00:07:40.832 "flush": true, 00:07:40.832 "reset": true, 00:07:40.832 "nvme_admin": false, 00:07:40.832 "nvme_io": false, 00:07:40.832 "nvme_io_md": false, 00:07:40.832 "write_zeroes": true, 00:07:40.832 "zcopy": true, 00:07:40.832 "get_zone_info": false, 00:07:40.832 "zone_management": false, 00:07:40.832 "zone_append": false, 00:07:40.832 "compare": false, 00:07:40.832 "compare_and_write": false, 00:07:40.832 "abort": true, 00:07:40.832 "seek_hole": false, 00:07:40.832 "seek_data": false, 00:07:40.832 "copy": true, 00:07:40.832 "nvme_iov_md": false 00:07:40.832 }, 00:07:40.832 "memory_domains": [ 00:07:40.832 { 00:07:40.832 "dma_device_id": "system", 00:07:40.832 "dma_device_type": 1 00:07:40.832 }, 00:07:40.832 { 00:07:40.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.832 "dma_device_type": 2 00:07:40.832 } 00:07:40.832 ], 00:07:40.832 "driver_specific": {} 00:07:40.832 }' 00:07:40.832 14:56:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:40.832 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:40.833 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:41.091 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:41.091 "name": "BaseBdev2", 00:07:41.091 "aliases": [ 00:07:41.091 "dcea5f73-405e-11ef-b2a4-e9dca065e82e" 00:07:41.091 ], 00:07:41.091 "product_name": "Malloc disk", 00:07:41.091 "block_size": 512, 00:07:41.091 "num_blocks": 65536, 00:07:41.091 "uuid": "dcea5f73-405e-11ef-b2a4-e9dca065e82e", 00:07:41.091 "assigned_rate_limits": { 00:07:41.091 "rw_ios_per_sec": 0, 00:07:41.091 "rw_mbytes_per_sec": 0, 00:07:41.091 "r_mbytes_per_sec": 0, 00:07:41.091 "w_mbytes_per_sec": 0 00:07:41.091 }, 00:07:41.091 "claimed": true, 00:07:41.091 "claim_type": "exclusive_write", 00:07:41.091 "zoned": false, 00:07:41.091 "supported_io_types": { 00:07:41.091 "read": true, 00:07:41.091 "write": true, 00:07:41.091 "unmap": true, 00:07:41.091 "flush": true, 00:07:41.091 "reset": true, 00:07:41.091 "nvme_admin": false, 00:07:41.091 "nvme_io": false, 00:07:41.091 "nvme_io_md": false, 00:07:41.091 "write_zeroes": true, 00:07:41.091 "zcopy": true, 00:07:41.091 "get_zone_info": false, 00:07:41.091 "zone_management": false, 00:07:41.091 "zone_append": false, 00:07:41.091 "compare": false, 00:07:41.091 "compare_and_write": false, 00:07:41.091 "abort": true, 00:07:41.091 "seek_hole": false, 00:07:41.091 "seek_data": false, 00:07:41.091 "copy": true, 00:07:41.091 "nvme_iov_md": false 00:07:41.091 }, 00:07:41.091 "memory_domains": [ 00:07:41.091 { 00:07:41.091 "dma_device_id": "system", 00:07:41.091 "dma_device_type": 1 00:07:41.091 }, 00:07:41.091 { 00:07:41.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.091 "dma_device_type": 2 00:07:41.091 } 00:07:41.091 ], 00:07:41.091 "driver_specific": {} 00:07:41.091 }' 00:07:41.091 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:41.349 14:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:41.675 [2024-07-12 14:56:07.182640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.675 [2024-07-12 14:56:07.182668] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.675 [2024-07-12 14:56:07.182698] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:07:41.675 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.940 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:41.940 "name": "Existed_Raid", 00:07:41.940 "uuid": "dc6ca2ee-405e-11ef-b2a4-e9dca065e82e", 00:07:41.940 "strip_size_kb": 64, 00:07:41.940 "state": "offline", 00:07:41.940 "raid_level": "raid0", 00:07:41.940 "superblock": true, 00:07:41.940 "num_base_bdevs": 2, 00:07:41.940 "num_base_bdevs_discovered": 1, 00:07:41.940 "num_base_bdevs_operational": 1, 00:07:41.940 "base_bdevs_list": [ 00:07:41.940 { 00:07:41.940 "name": null, 00:07:41.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.940 "is_configured": false, 00:07:41.940 "data_offset": 2048, 00:07:41.940 "data_size": 63488 00:07:41.940 }, 00:07:41.941 { 00:07:41.941 "name": "BaseBdev2", 00:07:41.941 "uuid": "dcea5f73-405e-11ef-b2a4-e9dca065e82e", 00:07:41.941 "is_configured": true, 00:07:41.941 "data_offset": 2048, 00:07:41.941 "data_size": 63488 00:07:41.941 } 00:07:41.941 ] 00:07:41.941 }' 00:07:41.941 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:41.941 14:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.199 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:42.199 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:42.199 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.199 14:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:42.458 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:42.458 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.458 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:42.717 [2024-07-12 14:56:08.320598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.717 [2024-07-12 14:56:08.320648] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2aa237634a00 name Existed_Raid, state offline 00:07:42.717 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:42.717 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:42.717 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.717 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48918 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 48918 ']' 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48918 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48918 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:42.975 killing process with pid 48918 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48918' 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48918 00:07:42.975 [2024-07-12 14:56:08.584352] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.975 [2024-07-12 14:56:08.584386] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48918 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:42.975 00:07:42.975 real 0m8.773s 00:07:42.975 user 0m15.285s 00:07:42.975 sys 0m1.534s 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.975 14:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.975 ************************************ 00:07:42.975 END TEST raid_state_function_test_sb 00:07:42.975 ************************************ 00:07:43.234 14:56:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:43.234 14:56:08 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:43.234 14:56:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:43.234 14:56:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.234 14:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.234 ************************************ 00:07:43.234 START TEST raid_superblock_test 00:07:43.234 ************************************ 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:07:43.234 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49188 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49188 /var/tmp/spdk-raid.sock 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49188 ']' 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.235 14:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.235 [2024-07-12 14:56:08.809131] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:43.235 [2024-07-12 14:56:08.809293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:43.802 EAL: TSC is not safe to use in SMP mode 00:07:43.802 EAL: TSC is not invariant 00:07:43.802 [2024-07-12 14:56:09.350608] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.802 [2024-07-12 14:56:09.435054] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:43.802 [2024-07-12 14:56:09.437210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.803 [2024-07-12 14:56:09.437977] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.803 [2024-07-12 14:56:09.437992] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.061 14:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:44.320 malloc1 00:07:44.320 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.579 [2024-07-12 14:56:10.382141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.579 [2024-07-12 14:56:10.382201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.579 [2024-07-12 14:56:10.382213] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34e00dc34780 00:07:44.579 [2024-07-12 14:56:10.382222] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.579 [2024-07-12 14:56:10.383131] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.579 [2024-07-12 14:56:10.383161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.579 pt1 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.838 14:56:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:44.838 malloc2 00:07:44.838 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.097 [2024-07-12 14:56:10.866150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.097 [2024-07-12 14:56:10.866219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.097 [2024-07-12 14:56:10.866247] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34e00dc34c80 00:07:45.097 [2024-07-12 14:56:10.866255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.097 [2024-07-12 14:56:10.866921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.097 [2024-07-12 14:56:10.866946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.097 pt2 00:07:45.097 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:45.097 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:45.097 14:56:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:45.357 [2024-07-12 14:56:11.110168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.357 [2024-07-12 14:56:11.110745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.357 [2024-07-12 14:56:11.110804] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34e00dc34f00 00:07:45.357 [2024-07-12 14:56:11.110811] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.357 [2024-07-12 14:56:11.110851] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34e00dc97e20 00:07:45.357 [2024-07-12 14:56:11.110924] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34e00dc34f00 00:07:45.357 [2024-07-12 14:56:11.110928] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34e00dc34f00 00:07:45.357 [2024-07-12 14:56:11.110957] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.357 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.616 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:45.616 "name": "raid_bdev1", 00:07:45.616 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:45.616 "strip_size_kb": 64, 00:07:45.616 "state": "online", 00:07:45.616 "raid_level": "raid0", 00:07:45.616 "superblock": true, 00:07:45.616 "num_base_bdevs": 2, 00:07:45.616 "num_base_bdevs_discovered": 2, 00:07:45.616 "num_base_bdevs_operational": 2, 00:07:45.616 "base_bdevs_list": [ 00:07:45.616 { 00:07:45.616 "name": "pt1", 00:07:45.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.616 "is_configured": true, 00:07:45.616 "data_offset": 2048, 00:07:45.616 "data_size": 63488 00:07:45.616 }, 00:07:45.616 { 00:07:45.616 "name": "pt2", 00:07:45.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.616 "is_configured": true, 00:07:45.616 "data_offset": 2048, 00:07:45.616 "data_size": 63488 00:07:45.616 } 00:07:45.616 ] 00:07:45.616 }' 00:07:45.616 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:45.616 14:56:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:46.184 [2024-07-12 14:56:11.962219] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:46.184 "name": "raid_bdev1", 00:07:46.184 "aliases": [ 00:07:46.184 "e09a869c-405e-11ef-b2a4-e9dca065e82e" 00:07:46.184 ], 00:07:46.184 "product_name": "Raid Volume", 00:07:46.184 "block_size": 512, 00:07:46.184 "num_blocks": 126976, 00:07:46.184 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:46.184 "assigned_rate_limits": { 00:07:46.184 "rw_ios_per_sec": 0, 00:07:46.184 "rw_mbytes_per_sec": 0, 00:07:46.184 "r_mbytes_per_sec": 0, 00:07:46.184 "w_mbytes_per_sec": 0 00:07:46.184 }, 00:07:46.184 "claimed": false, 00:07:46.184 "zoned": false, 00:07:46.184 "supported_io_types": { 00:07:46.184 "read": true, 00:07:46.184 "write": true, 00:07:46.184 "unmap": true, 00:07:46.184 "flush": true, 00:07:46.184 "reset": true, 00:07:46.184 "nvme_admin": false, 00:07:46.184 "nvme_io": 
false, 00:07:46.184 "nvme_io_md": false, 00:07:46.184 "write_zeroes": true, 00:07:46.184 "zcopy": false, 00:07:46.184 "get_zone_info": false, 00:07:46.184 "zone_management": false, 00:07:46.184 "zone_append": false, 00:07:46.184 "compare": false, 00:07:46.184 "compare_and_write": false, 00:07:46.184 "abort": false, 00:07:46.184 "seek_hole": false, 00:07:46.184 "seek_data": false, 00:07:46.184 "copy": false, 00:07:46.184 "nvme_iov_md": false 00:07:46.184 }, 00:07:46.184 "memory_domains": [ 00:07:46.184 { 00:07:46.184 "dma_device_id": "system", 00:07:46.184 "dma_device_type": 1 00:07:46.184 }, 00:07:46.184 { 00:07:46.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.184 "dma_device_type": 2 00:07:46.184 }, 00:07:46.184 { 00:07:46.184 "dma_device_id": "system", 00:07:46.184 "dma_device_type": 1 00:07:46.184 }, 00:07:46.184 { 00:07:46.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.184 "dma_device_type": 2 00:07:46.184 } 00:07:46.184 ], 00:07:46.184 "driver_specific": { 00:07:46.184 "raid": { 00:07:46.184 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:46.184 "strip_size_kb": 64, 00:07:46.184 "state": "online", 00:07:46.184 "raid_level": "raid0", 00:07:46.184 "superblock": true, 00:07:46.184 "num_base_bdevs": 2, 00:07:46.184 "num_base_bdevs_discovered": 2, 00:07:46.184 "num_base_bdevs_operational": 2, 00:07:46.184 "base_bdevs_list": [ 00:07:46.184 { 00:07:46.184 "name": "pt1", 00:07:46.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.184 "is_configured": true, 00:07:46.184 "data_offset": 2048, 00:07:46.184 "data_size": 63488 00:07:46.184 }, 00:07:46.184 { 00:07:46.184 "name": "pt2", 00:07:46.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.184 "is_configured": true, 00:07:46.184 "data_offset": 2048, 00:07:46.184 "data_size": 63488 00:07:46.184 } 00:07:46.184 ] 00:07:46.184 } 00:07:46.184 } 00:07:46.184 }' 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:46.184 pt2' 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:46.184 14:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:46.522 "name": "pt1", 00:07:46.522 "aliases": [ 00:07:46.522 "00000000-0000-0000-0000-000000000001" 00:07:46.522 ], 00:07:46.522 "product_name": "passthru", 00:07:46.522 "block_size": 512, 00:07:46.522 "num_blocks": 65536, 00:07:46.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.522 "assigned_rate_limits": { 00:07:46.522 "rw_ios_per_sec": 0, 00:07:46.522 "rw_mbytes_per_sec": 0, 00:07:46.522 "r_mbytes_per_sec": 0, 00:07:46.522 "w_mbytes_per_sec": 0 00:07:46.522 }, 00:07:46.522 "claimed": true, 00:07:46.522 "claim_type": "exclusive_write", 00:07:46.522 "zoned": false, 00:07:46.522 "supported_io_types": { 00:07:46.522 "read": true, 00:07:46.522 "write": true, 00:07:46.522 "unmap": true, 00:07:46.522 "flush": true, 00:07:46.522 "reset": true, 00:07:46.522 "nvme_admin": false, 00:07:46.522 "nvme_io": false, 00:07:46.522 "nvme_io_md": false, 00:07:46.522 "write_zeroes": true, 
00:07:46.522 "zcopy": true, 00:07:46.522 "get_zone_info": false, 00:07:46.522 "zone_management": false, 00:07:46.522 "zone_append": false, 00:07:46.522 "compare": false, 00:07:46.522 "compare_and_write": false, 00:07:46.522 "abort": true, 00:07:46.522 "seek_hole": false, 00:07:46.522 "seek_data": false, 00:07:46.522 "copy": true, 00:07:46.522 "nvme_iov_md": false 00:07:46.522 }, 00:07:46.522 "memory_domains": [ 00:07:46.522 { 00:07:46.522 "dma_device_id": "system", 00:07:46.522 "dma_device_type": 1 00:07:46.522 }, 00:07:46.522 { 00:07:46.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.522 "dma_device_type": 2 00:07:46.522 } 00:07:46.522 ], 00:07:46.522 "driver_specific": { 00:07:46.522 "passthru": { 00:07:46.522 "name": "pt1", 00:07:46.522 "base_bdev_name": "malloc1" 00:07:46.522 } 00:07:46.522 } 00:07:46.522 }' 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:46.522 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:46.798 "name": "pt2", 00:07:46.798 "aliases": [ 00:07:46.798 "00000000-0000-0000-0000-000000000002" 00:07:46.798 ], 00:07:46.798 "product_name": "passthru", 00:07:46.798 "block_size": 512, 00:07:46.798 "num_blocks": 65536, 00:07:46.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.798 "assigned_rate_limits": { 00:07:46.798 "rw_ios_per_sec": 0, 00:07:46.798 "rw_mbytes_per_sec": 0, 00:07:46.798 "r_mbytes_per_sec": 0, 00:07:46.798 "w_mbytes_per_sec": 0 00:07:46.798 }, 00:07:46.798 "claimed": true, 00:07:46.798 "claim_type": "exclusive_write", 00:07:46.798 "zoned": false, 00:07:46.798 "supported_io_types": { 00:07:46.798 "read": true, 00:07:46.798 "write": true, 00:07:46.798 "unmap": true, 00:07:46.798 "flush": true, 00:07:46.798 "reset": true, 00:07:46.798 "nvme_admin": false, 00:07:46.798 "nvme_io": false, 00:07:46.798 "nvme_io_md": false, 00:07:46.798 "write_zeroes": true, 00:07:46.798 "zcopy": true, 00:07:46.798 "get_zone_info": false, 00:07:46.798 "zone_management": false, 00:07:46.798 "zone_append": false, 00:07:46.798 
"compare": false, 00:07:46.798 "compare_and_write": false, 00:07:46.798 "abort": true, 00:07:46.798 "seek_hole": false, 00:07:46.798 "seek_data": false, 00:07:46.798 "copy": true, 00:07:46.798 "nvme_iov_md": false 00:07:46.798 }, 00:07:46.798 "memory_domains": [ 00:07:46.798 { 00:07:46.798 "dma_device_id": "system", 00:07:46.798 "dma_device_type": 1 00:07:46.798 }, 00:07:46.798 { 00:07:46.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.798 "dma_device_type": 2 00:07:46.798 } 00:07:46.798 ], 00:07:46.798 "driver_specific": { 00:07:46.798 "passthru": { 00:07:46.798 "name": "pt2", 00:07:46.798 "base_bdev_name": "malloc2" 00:07:46.798 } 00:07:46.798 } 00:07:46.798 }' 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:46.798 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:47.057 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:47.057 [2024-07-12 14:56:12.862245] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.315 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e09a869c-405e-11ef-b2a4-e9dca065e82e 00:07:47.315 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e09a869c-405e-11ef-b2a4-e9dca065e82e ']' 00:07:47.315 14:56:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:47.574 [2024-07-12 14:56:13.194202] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.574 [2024-07-12 14:56:13.194225] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.574 [2024-07-12 14:56:13.194249] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.574 [2024-07-12 14:56:13.194261] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.574 [2024-07-12 14:56:13.194266] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34e00dc34f00 name raid_bdev1, state offline 00:07:47.574 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:47.574 14:56:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.832 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:47.832 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:47.832 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.832 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:48.091 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:48.091 14:56:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:48.350 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:48.350 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:48.609 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:48.869 [2024-07-12 14:56:14.510237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:48.869 [2024-07-12 14:56:14.510805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:48.869 [2024-07-12 14:56:14.510830] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:48.869 [2024-07-12 14:56:14.510868] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:48.869 [2024-07-12 14:56:14.510881] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.869 [2024-07-12 14:56:14.510885] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34e00dc34c80 name raid_bdev1, state configuring 00:07:48.869 request: 00:07:48.869 { 00:07:48.869 "name": "raid_bdev1", 00:07:48.869 "raid_level": "raid0", 00:07:48.869 "base_bdevs": [ 00:07:48.869 "malloc1", 00:07:48.869 "malloc2" 00:07:48.869 ], 00:07:48.869 "strip_size_kb": 64, 00:07:48.869 "superblock": false, 00:07:48.869 "method": "bdev_raid_create", 00:07:48.869 "req_id": 1 00:07:48.869 } 00:07:48.869 Got JSON-RPC error response 00:07:48.869 response: 00:07:48.869 { 00:07:48.869 "code": -17, 00:07:48.869 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:48.869 } 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.869 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:49.128 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:49.128 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:49.128 14:56:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.433 [2024-07-12 14:56:15.018244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.433 [2024-07-12 14:56:15.018299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.433 [2024-07-12 14:56:15.018311] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34e00dc34780 00:07:49.433 [2024-07-12 14:56:15.018320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.433 [2024-07-12 14:56:15.018957] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.433 [2024-07-12 14:56:15.018983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.433 [2024-07-12 14:56:15.019008] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:49.433 [2024-07-12 14:56:15.019020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.433 pt1 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:49.433 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.434 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.692 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:49.692 "name": "raid_bdev1", 00:07:49.692 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:49.692 "strip_size_kb": 64, 00:07:49.692 "state": "configuring", 00:07:49.692 "raid_level": "raid0", 00:07:49.692 "superblock": true, 00:07:49.692 "num_base_bdevs": 2, 00:07:49.692 "num_base_bdevs_discovered": 1, 00:07:49.692 "num_base_bdevs_operational": 2, 00:07:49.692 "base_bdevs_list": [ 00:07:49.692 { 00:07:49.692 "name": "pt1", 00:07:49.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.692 "is_configured": true, 00:07:49.692 "data_offset": 2048, 00:07:49.692 "data_size": 63488 00:07:49.692 }, 00:07:49.692 { 00:07:49.692 "name": null, 00:07:49.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.692 "is_configured": false, 00:07:49.692 "data_offset": 2048, 00:07:49.692 "data_size": 63488 00:07:49.692 } 00:07:49.692 ] 00:07:49.692 }' 00:07:49.692 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:49.692 14:56:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.952 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:49.952 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:49.952 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:49.952 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.221 [2024-07-12 14:56:15.870536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.221 [2024-07-12 14:56:15.870606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.221 [2024-07-12 14:56:15.870634] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34e00dc34f00 00:07:50.221 [2024-07-12 14:56:15.870643] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.221 [2024-07-12 14:56:15.870757] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.221 [2024-07-12 14:56:15.870769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.221 [2024-07-12 14:56:15.870794] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
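The passthru bdevs are re-registered here with the same fixed per-slot UUIDs, so the raid module finds the superblock it wrote earlier on each of them and reassembles raid_bdev1 on its own. A minimal sketch of that RPC flow, using the socket path, bdev names and UUIDs exactly as they appear in this log (it only condenses the commands shown above):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Recreate the passthru bdevs on top of the malloc bdevs; the raid superblock
    # found during examine claims each one for raid_bdev1.
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # State moves from "configuring" (one base bdev discovered) to "online" once
    # both base bdevs are claimed.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'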
00:07:50.221 [2024-07-12 14:56:15.870803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.221 [2024-07-12 14:56:15.870828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34e00dc35180 00:07:50.221 [2024-07-12 14:56:15.870832] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.221 [2024-07-12 14:56:15.870852] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34e00dc97e20 00:07:50.221 [2024-07-12 14:56:15.870908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34e00dc35180 00:07:50.221 [2024-07-12 14:56:15.870913] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34e00dc35180 00:07:50.221 [2024-07-12 14:56:15.870935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.221 pt2 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.221 14:56:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.480 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:50.480 "name": "raid_bdev1", 00:07:50.480 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:50.480 "strip_size_kb": 64, 00:07:50.480 "state": "online", 00:07:50.480 "raid_level": "raid0", 00:07:50.480 "superblock": true, 00:07:50.480 "num_base_bdevs": 2, 00:07:50.480 "num_base_bdevs_discovered": 2, 00:07:50.480 "num_base_bdevs_operational": 2, 00:07:50.480 "base_bdevs_list": [ 00:07:50.480 { 00:07:50.480 "name": "pt1", 00:07:50.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.480 "is_configured": true, 00:07:50.480 "data_offset": 2048, 00:07:50.480 "data_size": 63488 00:07:50.480 }, 00:07:50.480 { 00:07:50.480 "name": "pt2", 00:07:50.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.480 "is_configured": true, 00:07:50.480 "data_offset": 2048, 00:07:50.480 "data_size": 63488 00:07:50.480 } 00:07:50.480 ] 00:07:50.480 }' 00:07:50.480 14:56:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:50.480 14:56:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:50.740 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:51.000 [2024-07-12 14:56:16.782882] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.001 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:51.001 "name": "raid_bdev1", 00:07:51.001 "aliases": [ 00:07:51.001 "e09a869c-405e-11ef-b2a4-e9dca065e82e" 00:07:51.001 ], 00:07:51.001 "product_name": "Raid Volume", 00:07:51.001 "block_size": 512, 00:07:51.001 "num_blocks": 126976, 00:07:51.001 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:51.001 "assigned_rate_limits": { 00:07:51.001 "rw_ios_per_sec": 0, 00:07:51.001 "rw_mbytes_per_sec": 0, 00:07:51.001 "r_mbytes_per_sec": 0, 00:07:51.001 "w_mbytes_per_sec": 0 00:07:51.001 }, 00:07:51.001 "claimed": false, 00:07:51.001 "zoned": false, 00:07:51.001 "supported_io_types": { 00:07:51.001 "read": true, 00:07:51.001 "write": true, 00:07:51.001 "unmap": true, 00:07:51.001 "flush": true, 00:07:51.001 "reset": true, 00:07:51.001 "nvme_admin": false, 00:07:51.001 "nvme_io": false, 00:07:51.001 "nvme_io_md": false, 00:07:51.001 "write_zeroes": true, 00:07:51.001 "zcopy": false, 00:07:51.001 "get_zone_info": false, 00:07:51.001 "zone_management": false, 00:07:51.001 "zone_append": false, 00:07:51.001 "compare": false, 00:07:51.001 "compare_and_write": false, 00:07:51.001 "abort": false, 00:07:51.001 "seek_hole": false, 00:07:51.001 "seek_data": false, 00:07:51.001 "copy": false, 00:07:51.001 "nvme_iov_md": false 00:07:51.001 }, 00:07:51.001 "memory_domains": [ 00:07:51.001 { 00:07:51.001 "dma_device_id": "system", 00:07:51.001 "dma_device_type": 1 00:07:51.001 }, 00:07:51.001 { 00:07:51.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.001 "dma_device_type": 2 00:07:51.001 }, 00:07:51.001 { 00:07:51.001 "dma_device_id": "system", 00:07:51.001 "dma_device_type": 1 00:07:51.001 }, 00:07:51.001 { 00:07:51.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.001 "dma_device_type": 2 00:07:51.001 } 00:07:51.001 ], 00:07:51.001 "driver_specific": { 00:07:51.001 "raid": { 00:07:51.001 "uuid": "e09a869c-405e-11ef-b2a4-e9dca065e82e", 00:07:51.001 "strip_size_kb": 64, 00:07:51.001 "state": "online", 00:07:51.001 "raid_level": "raid0", 00:07:51.001 "superblock": true, 00:07:51.001 "num_base_bdevs": 2, 00:07:51.001 "num_base_bdevs_discovered": 2, 00:07:51.001 "num_base_bdevs_operational": 2, 00:07:51.001 "base_bdevs_list": [ 00:07:51.001 { 00:07:51.001 "name": "pt1", 00:07:51.001 "uuid": "00000000-0000-0000-0000-000000000001", 
00:07:51.001 "is_configured": true, 00:07:51.001 "data_offset": 2048, 00:07:51.001 "data_size": 63488 00:07:51.001 }, 00:07:51.001 { 00:07:51.001 "name": "pt2", 00:07:51.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.001 "is_configured": true, 00:07:51.001 "data_offset": 2048, 00:07:51.001 "data_size": 63488 00:07:51.001 } 00:07:51.001 ] 00:07:51.001 } 00:07:51.001 } 00:07:51.001 }' 00:07:51.001 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.001 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:51.001 pt2' 00:07:51.001 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:51.001 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:51.260 14:56:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:51.519 "name": "pt1", 00:07:51.519 "aliases": [ 00:07:51.519 "00000000-0000-0000-0000-000000000001" 00:07:51.519 ], 00:07:51.519 "product_name": "passthru", 00:07:51.519 "block_size": 512, 00:07:51.519 "num_blocks": 65536, 00:07:51.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.519 "assigned_rate_limits": { 00:07:51.519 "rw_ios_per_sec": 0, 00:07:51.519 "rw_mbytes_per_sec": 0, 00:07:51.519 "r_mbytes_per_sec": 0, 00:07:51.519 "w_mbytes_per_sec": 0 00:07:51.519 }, 00:07:51.519 "claimed": true, 00:07:51.519 "claim_type": "exclusive_write", 00:07:51.519 "zoned": false, 00:07:51.519 "supported_io_types": { 00:07:51.519 "read": true, 00:07:51.519 "write": true, 00:07:51.519 "unmap": true, 00:07:51.519 "flush": true, 00:07:51.519 "reset": true, 00:07:51.519 "nvme_admin": false, 00:07:51.519 "nvme_io": false, 00:07:51.519 "nvme_io_md": false, 00:07:51.519 "write_zeroes": true, 00:07:51.519 "zcopy": true, 00:07:51.519 "get_zone_info": false, 00:07:51.519 "zone_management": false, 00:07:51.519 "zone_append": false, 00:07:51.519 "compare": false, 00:07:51.519 "compare_and_write": false, 00:07:51.519 "abort": true, 00:07:51.519 "seek_hole": false, 00:07:51.519 "seek_data": false, 00:07:51.519 "copy": true, 00:07:51.519 "nvme_iov_md": false 00:07:51.519 }, 00:07:51.519 "memory_domains": [ 00:07:51.519 { 00:07:51.519 "dma_device_id": "system", 00:07:51.519 "dma_device_type": 1 00:07:51.519 }, 00:07:51.519 { 00:07:51.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.519 "dma_device_type": 2 00:07:51.519 } 00:07:51.519 ], 00:07:51.519 "driver_specific": { 00:07:51.519 "passthru": { 00:07:51.519 "name": "pt1", 00:07:51.519 "base_bdev_name": "malloc1" 00:07:51.519 } 00:07:51.519 } 00:07:51.519 }' 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:51.519 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.520 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.520 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:51.520 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:51.520 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:51.520 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:51.779 "name": "pt2", 00:07:51.779 "aliases": [ 00:07:51.779 "00000000-0000-0000-0000-000000000002" 00:07:51.779 ], 00:07:51.779 "product_name": "passthru", 00:07:51.779 "block_size": 512, 00:07:51.779 "num_blocks": 65536, 00:07:51.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.779 "assigned_rate_limits": { 00:07:51.779 "rw_ios_per_sec": 0, 00:07:51.779 "rw_mbytes_per_sec": 0, 00:07:51.779 "r_mbytes_per_sec": 0, 00:07:51.779 "w_mbytes_per_sec": 0 00:07:51.779 }, 00:07:51.779 "claimed": true, 00:07:51.779 "claim_type": "exclusive_write", 00:07:51.779 "zoned": false, 00:07:51.779 "supported_io_types": { 00:07:51.779 "read": true, 00:07:51.779 "write": true, 00:07:51.779 "unmap": true, 00:07:51.779 "flush": true, 00:07:51.779 "reset": true, 00:07:51.779 "nvme_admin": false, 00:07:51.779 "nvme_io": false, 00:07:51.779 "nvme_io_md": false, 00:07:51.779 "write_zeroes": true, 00:07:51.779 "zcopy": true, 00:07:51.779 "get_zone_info": false, 00:07:51.779 "zone_management": false, 00:07:51.779 "zone_append": false, 00:07:51.779 "compare": false, 00:07:51.779 "compare_and_write": false, 00:07:51.779 "abort": true, 00:07:51.779 "seek_hole": false, 00:07:51.779 "seek_data": false, 00:07:51.779 "copy": true, 00:07:51.779 "nvme_iov_md": false 00:07:51.779 }, 00:07:51.779 "memory_domains": [ 00:07:51.779 { 00:07:51.779 "dma_device_id": "system", 00:07:51.779 "dma_device_type": 1 00:07:51.779 }, 00:07:51.779 { 00:07:51.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.779 "dma_device_type": 2 00:07:51.779 } 00:07:51.779 ], 00:07:51.779 "driver_specific": { 00:07:51.779 "passthru": { 00:07:51.779 "name": "pt2", 00:07:51.779 "base_bdev_name": "malloc2" 00:07:51.779 } 00:07:51.779 } 00:07:51.779 }' 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
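Each configured base bdev gets the same geometry check: verify_raid_bdev_properties pulls the bdev descriptor with bdev_get_bdevs and probes it with jq, expecting a 512-byte block size and no metadata, interleave or DIF. A compact sketch of that loop, under the same assumptions as the checks above (512-byte blocks, base bdev names taken from the raid bdev's base_bdevs_list):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Names of the configured base bdevs, taken from the raid bdev's own info.
    names=$(rpc bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

    for name in $names; do
        info=$(rpc bdev_get_bdevs -b "$name" | jq '.[0]')
        [[ $(jq -r .block_size    <<< "$info") == 512  ]] || exit 1   # data block size
        [[ $(jq -r .md_size       <<< "$info") == null ]] || exit 1   # no separate metadata
        [[ $(jq -r .md_interleave <<< "$info") == null ]] || exit 1
        [[ $(jq -r .dif_type      <<< "$info") == null ]] || exit 1
    done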
00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:51.779 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:52.037 [2024-07-12 14:56:17.771246] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e09a869c-405e-11ef-b2a4-e9dca065e82e '!=' e09a869c-405e-11ef-b2a4-e9dca065e82e ']' 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49188 00:07:52.037 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49188 ']' 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49188 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49188 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:52.038 killing process with pid 49188 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49188' 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49188 00:07:52.038 [2024-07-12 14:56:17.800055] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.038 [2024-07-12 14:56:17.800081] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.038 [2024-07-12 14:56:17.800093] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.038 [2024-07-12 14:56:17.800098] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34e00dc35180 name raid_bdev1, state offline 00:07:52.038 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49188 00:07:52.038 [2024-07-12 14:56:17.811753] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.296 14:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:52.296 ************************************ 00:07:52.296 END TEST raid_superblock_test 00:07:52.296 ************************************ 00:07:52.296 00:07:52.296 real 0m9.188s 00:07:52.296 user 0m16.032s 00:07:52.296 
sys 0m1.586s 00:07:52.296 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.296 14:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.296 14:56:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:52.296 14:56:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:52.296 14:56:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:52.296 14:56:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.296 14:56:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.296 ************************************ 00:07:52.296 START TEST raid_read_error_test 00:07:52.296 ************************************ 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:52.296 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uXKFmmEPb6 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49457 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49457 /var/tmp/spdk-raid.sock 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49457 ']' 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.297 14:56:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 [2024-07-12 14:56:18.047547] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:07:52.297 [2024-07-12 14:56:18.047769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:52.864 EAL: TSC is not safe to use in SMP mode 00:07:52.864 EAL: TSC is not invariant 00:07:52.864 [2024-07-12 14:56:18.580884] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.864 [2024-07-12 14:56:18.666807] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:52.864 [2024-07-12 14:56:18.668962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.864 [2024-07-12 14:56:18.669709] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.864 [2024-07-12 14:56:18.669731] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.478 14:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.478 14:56:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:53.478 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:53.478 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.737 BaseBdev1_malloc 00:07:53.737 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:53.737 true 00:07:53.737 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.994 [2024-07-12 14:56:19.749779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.995 [2024-07-12 14:56:19.749845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.995 [2024-07-12 14:56:19.749874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9601c434780 00:07:53.995 [2024-07-12 14:56:19.749883] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:53.995 [2024-07-12 14:56:19.750605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.995 [2024-07-12 14:56:19.750635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.995 BaseBdev1 00:07:53.995 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:53.995 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.252 BaseBdev2_malloc 00:07:54.252 14:56:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:54.511 true 00:07:54.511 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.768 [2024-07-12 14:56:20.485992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.768 [2024-07-12 14:56:20.486066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.768 [2024-07-12 14:56:20.486105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9601c434c80 00:07:54.768 [2024-07-12 14:56:20.486130] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.768 [2024-07-12 14:56:20.486807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.768 [2024-07-12 14:56:20.486836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.768 BaseBdev2 00:07:54.768 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:55.025 [2024-07-12 14:56:20.774085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.025 [2024-07-12 14:56:20.774680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.025 [2024-07-12 14:56:20.774775] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9601c434f00 00:07:55.025 [2024-07-12 14:56:20.774782] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.025 [2024-07-12 14:56:20.774815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9601c4a0e20 00:07:55.025 [2024-07-12 14:56:20.774889] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9601c434f00 00:07:55.025 [2024-07-12 14:56:20.774894] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9601c434f00 00:07:55.025 [2024-07-12 14:56:20.774921] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:55.025 
14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.025 14:56:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.282 14:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:55.282 "name": "raid_bdev1", 00:07:55.282 "uuid": "e65d1f5a-405e-11ef-b2a4-e9dca065e82e", 00:07:55.282 "strip_size_kb": 64, 00:07:55.282 "state": "online", 00:07:55.282 "raid_level": "raid0", 00:07:55.282 "superblock": true, 00:07:55.282 "num_base_bdevs": 2, 00:07:55.282 "num_base_bdevs_discovered": 2, 00:07:55.282 "num_base_bdevs_operational": 2, 00:07:55.282 "base_bdevs_list": [ 00:07:55.282 { 00:07:55.282 "name": "BaseBdev1", 00:07:55.282 "uuid": "2c5f4fb1-dc98-515d-a5e8-039bd6c07a17", 00:07:55.282 "is_configured": true, 00:07:55.282 "data_offset": 2048, 00:07:55.282 "data_size": 63488 00:07:55.282 }, 00:07:55.282 { 00:07:55.282 "name": "BaseBdev2", 00:07:55.282 "uuid": "b215cc8f-ceed-595f-b653-a55f002ff45c", 00:07:55.282 "is_configured": true, 00:07:55.282 "data_offset": 2048, 00:07:55.282 "data_size": 63488 00:07:55.282 } 00:07:55.282 ] 00:07:55.282 }' 00:07:55.282 14:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:55.282 14:56:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.847 14:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:55.847 14:56:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:55.847 [2024-07-12 14:56:21.506495] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9601c4a0ec0 00:07:56.842 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:57.102 14:56:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.102 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.360 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.360 "name": "raid_bdev1", 00:07:57.360 "uuid": "e65d1f5a-405e-11ef-b2a4-e9dca065e82e", 00:07:57.360 "strip_size_kb": 64, 00:07:57.360 "state": "online", 00:07:57.360 "raid_level": "raid0", 00:07:57.360 "superblock": true, 00:07:57.360 "num_base_bdevs": 2, 00:07:57.360 "num_base_bdevs_discovered": 2, 00:07:57.360 "num_base_bdevs_operational": 2, 00:07:57.360 "base_bdevs_list": [ 00:07:57.360 { 00:07:57.360 "name": "BaseBdev1", 00:07:57.360 "uuid": "2c5f4fb1-dc98-515d-a5e8-039bd6c07a17", 00:07:57.360 "is_configured": true, 00:07:57.360 "data_offset": 2048, 00:07:57.360 "data_size": 63488 00:07:57.360 }, 00:07:57.360 { 00:07:57.360 "name": "BaseBdev2", 00:07:57.360 "uuid": "b215cc8f-ceed-595f-b653-a55f002ff45c", 00:07:57.360 "is_configured": true, 00:07:57.360 "data_offset": 2048, 00:07:57.360 "data_size": 63488 00:07:57.360 } 00:07:57.360 ] 00:07:57.360 }' 00:07:57.360 14:56:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.360 14:56:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.617 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:57.875 [2024-07-12 14:56:23.528291] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.875 [2024-07-12 14:56:23.528321] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.875 [2024-07-12 14:56:23.528643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.875 [2024-07-12 14:56:23.528665] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.875 [2024-07-12 14:56:23.528673] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.876 [2024-07-12 14:56:23.528678] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9601c434f00 name raid_bdev1, state offline 00:07:57.876 0 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49457 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49457 ']' 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49457 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49457 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:57.876 killing process with pid 49457 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49457' 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49457 00:07:57.876 [2024-07-12 14:56:23.554197] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.876 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49457 00:07:57.876 [2024-07-12 14:56:23.565129] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uXKFmmEPb6 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:07:58.134 00:07:58.134 real 0m5.713s 00:07:58.134 user 0m8.659s 00:07:58.134 sys 0m1.023s 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.134 ************************************ 00:07:58.134 END TEST raid_read_error_test 00:07:58.134 ************************************ 00:07:58.134 14:56:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.134 14:56:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:58.134 14:56:23 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:58.134 14:56:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:58.134 14:56:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.134 14:56:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.134 ************************************ 00:07:58.134 START TEST raid_write_error_test 00:07:58.134 ************************************ 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.1msgS2e8Nc 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49581 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49581 /var/tmp/spdk-raid.sock 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49581 ']' 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.134 14:56:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.134 [2024-07-12 14:56:23.803939] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
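raid_write_error_test repeats the flow of the read test above with error_io_type=write: a malloc -> error -> passthru stack per base bdev, a raid0 with an on-disk superblock, bdevperf driving randrw I/O, and an injected failure on one base bdev. Roughly, condensed into one place (socket path, bdev names and bdevperf flags copied from this log; the redirect of bdevperf output into the mktemp log and the "write" argument to the inject call are inferred from the read test and the error_io_type variable, not shown verbatim here):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # bdevperf hosts the raid bdev and drives the workload; -z makes it wait for RPCs.
    bdevperf_log=/raidtest/tmp.1msgS2e8Nc
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &

    # malloc -> error -> passthru stack for each base bdev, then raid0 with a superblock (-s).
    for i in 1 2; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        rpc bdev_error_create "BaseBdev${i}_malloc"
        rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

    # Inject write failures on one base bdev, run the workload, then pull the
    # failure rate for raid_bdev1 out of the bdevperf log (expected to be non-zero).
    rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}'   # fail_per_s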
00:07:58.134 [2024-07-12 14:56:23.804085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:58.699 EAL: TSC is not safe to use in SMP mode 00:07:58.699 EAL: TSC is not invariant 00:07:58.699 [2024-07-12 14:56:24.329433] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.699 [2024-07-12 14:56:24.408877] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:58.699 [2024-07-12 14:56:24.411034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.699 [2024-07-12 14:56:24.411843] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.699 [2024-07-12 14:56:24.411868] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.266 14:56:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.266 14:56:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:59.266 14:56:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:59.266 14:56:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:59.266 BaseBdev1_malloc 00:07:59.266 14:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:59.524 true 00:07:59.525 14:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.783 [2024-07-12 14:56:25.545220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.783 [2024-07-12 14:56:25.545285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.783 [2024-07-12 14:56:25.545327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f52e9c34780 00:07:59.783 [2024-07-12 14:56:25.545335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.783 [2024-07-12 14:56:25.546037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.783 [2024-07-12 14:56:25.546067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.783 BaseBdev1 00:07:59.783 14:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:59.783 14:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.044 BaseBdev2_malloc 00:08:00.044 14:56:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:00.303 true 00:08:00.303 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:00.562 [2024-07-12 14:56:26.261421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:00.562 [2024-07-12 14:56:26.261496] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.562 [2024-07-12 14:56:26.261538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f52e9c34c80 00:08:00.562 [2024-07-12 14:56:26.261546] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.562 [2024-07-12 14:56:26.262241] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.562 [2024-07-12 14:56:26.262267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:00.562 BaseBdev2 00:08:00.562 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:00.820 [2024-07-12 14:56:26.497512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.820 [2024-07-12 14:56:26.498128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.820 [2024-07-12 14:56:26.498192] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f52e9c34f00 00:08:00.820 [2024-07-12 14:56:26.498199] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.820 [2024-07-12 14:56:26.498230] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f52e9ca0e20 00:08:00.820 [2024-07-12 14:56:26.498307] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f52e9c34f00 00:08:00.820 [2024-07-12 14:56:26.498311] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f52e9c34f00 00:08:00.820 [2024-07-12 14:56:26.498339] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.820 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.078 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:01.078 "name": "raid_bdev1", 00:08:01.078 "uuid": "e9c672c2-405e-11ef-b2a4-e9dca065e82e", 00:08:01.078 "strip_size_kb": 64, 00:08:01.078 "state": "online", 00:08:01.078 
"raid_level": "raid0", 00:08:01.078 "superblock": true, 00:08:01.078 "num_base_bdevs": 2, 00:08:01.078 "num_base_bdevs_discovered": 2, 00:08:01.078 "num_base_bdevs_operational": 2, 00:08:01.078 "base_bdevs_list": [ 00:08:01.078 { 00:08:01.078 "name": "BaseBdev1", 00:08:01.078 "uuid": "7f2655c8-cd5a-1855-9618-d50390c3224a", 00:08:01.078 "is_configured": true, 00:08:01.078 "data_offset": 2048, 00:08:01.078 "data_size": 63488 00:08:01.078 }, 00:08:01.078 { 00:08:01.078 "name": "BaseBdev2", 00:08:01.078 "uuid": "7d4535b3-7059-da55-817f-6ff44aa487fa", 00:08:01.078 "is_configured": true, 00:08:01.078 "data_offset": 2048, 00:08:01.078 "data_size": 63488 00:08:01.078 } 00:08:01.078 ] 00:08:01.078 }' 00:08:01.078 14:56:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:01.078 14:56:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.337 14:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:01.337 14:56:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:01.337 [2024-07-12 14:56:27.149870] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f52e9ca0ec0 00:08:02.271 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.838 "name": "raid_bdev1", 00:08:02.838 "uuid": "e9c672c2-405e-11ef-b2a4-e9dca065e82e", 00:08:02.838 "strip_size_kb": 64, 00:08:02.838 "state": "online", 00:08:02.838 
"raid_level": "raid0", 00:08:02.838 "superblock": true, 00:08:02.838 "num_base_bdevs": 2, 00:08:02.838 "num_base_bdevs_discovered": 2, 00:08:02.838 "num_base_bdevs_operational": 2, 00:08:02.838 "base_bdevs_list": [ 00:08:02.838 { 00:08:02.838 "name": "BaseBdev1", 00:08:02.838 "uuid": "7f2655c8-cd5a-1855-9618-d50390c3224a", 00:08:02.838 "is_configured": true, 00:08:02.838 "data_offset": 2048, 00:08:02.838 "data_size": 63488 00:08:02.838 }, 00:08:02.838 { 00:08:02.838 "name": "BaseBdev2", 00:08:02.838 "uuid": "7d4535b3-7059-da55-817f-6ff44aa487fa", 00:08:02.838 "is_configured": true, 00:08:02.838 "data_offset": 2048, 00:08:02.838 "data_size": 63488 00:08:02.838 } 00:08:02.838 ] 00:08:02.838 }' 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.838 14:56:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.405 14:56:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:03.664 [2024-07-12 14:56:29.231748] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.664 [2024-07-12 14:56:29.231778] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.664 [2024-07-12 14:56:29.232145] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.664 [2024-07-12 14:56:29.232162] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.664 [2024-07-12 14:56:29.232170] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.664 [2024-07-12 14:56:29.232174] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52e9c34f00 name raid_bdev1, state offline 00:08:03.664 0 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49581 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49581 ']' 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49581 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49581 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:03.664 killing process with pid 49581 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49581' 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49581 00:08:03.664 [2024-07-12 14:56:29.257004] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49581 00:08:03.664 [2024-07-12 14:56:29.267908] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.1msgS2e8Nc 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:08:03.664 00:08:03.664 real 0m5.659s 00:08:03.664 user 0m8.578s 00:08:03.664 sys 0m1.045s 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.664 ************************************ 00:08:03.664 END TEST raid_write_error_test 00:08:03.664 ************************************ 00:08:03.664 14:56:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 14:56:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:03.923 14:56:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:03.923 14:56:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:03.923 14:56:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:03.923 14:56:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.923 14:56:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 ************************************ 00:08:03.923 START TEST raid_state_function_test 00:08:03.923 ************************************ 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:03.923 14:56:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49707 00:08:03.923 Process raid pid: 49707 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49707' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49707 /var/tmp/spdk-raid.sock 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49707 ']' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:03.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.923 14:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 [2024-07-12 14:56:29.503824] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:03.923 [2024-07-12 14:56:29.504041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:04.489 EAL: TSC is not safe to use in SMP mode 00:08:04.489 EAL: TSC is not invariant 00:08:04.489 [2024-07-12 14:56:30.043815] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.489 [2024-07-12 14:56:30.128296] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
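Condensed from the state-function trace that follows, a minimal sketch of the transitions this test checks against the bdev_svc app (same socket and bdev names as in the trace; concat carries no redundancy, so losing a base bdev takes the array offline rather than degrading it):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# the raid can be declared before its base bdevs exist; it waits in state "configuring"
$RPC -s $SOCK bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# base bdevs are claimed as they appear; with both present the state becomes "online"
$RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1
$RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# deleting a base bdev of a non-redundant level drops the array to "offline"
$RPC -s $SOCK bdev_malloc_delete BaseBdev1
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'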
00:08:04.489 [2024-07-12 14:56:30.130395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.489 [2024-07-12 14:56:30.131152] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.489 [2024-07-12 14:56:30.131167] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:05.056 [2024-07-12 14:56:30.827097] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.056 [2024-07-12 14:56:30.827153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.056 [2024-07-12 14:56:30.827158] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.056 [2024-07-12 14:56:30.827167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.056 14:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.314 14:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.314 "name": "Existed_Raid", 00:08:05.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.314 "strip_size_kb": 64, 00:08:05.314 "state": "configuring", 00:08:05.314 "raid_level": "concat", 00:08:05.314 "superblock": false, 00:08:05.314 "num_base_bdevs": 2, 00:08:05.314 "num_base_bdevs_discovered": 0, 00:08:05.314 "num_base_bdevs_operational": 2, 00:08:05.314 "base_bdevs_list": [ 00:08:05.314 { 00:08:05.314 "name": "BaseBdev1", 00:08:05.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.314 "is_configured": false, 00:08:05.314 "data_offset": 0, 00:08:05.314 "data_size": 0 00:08:05.314 }, 00:08:05.314 { 00:08:05.314 "name": "BaseBdev2", 
00:08:05.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.314 "is_configured": false, 00:08:05.314 "data_offset": 0, 00:08:05.314 "data_size": 0 00:08:05.314 } 00:08:05.314 ] 00:08:05.314 }' 00:08:05.314 14:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.314 14:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.881 14:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:05.881 [2024-07-12 14:56:31.635271] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.881 [2024-07-12 14:56:31.635300] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x396a44834500 name Existed_Raid, state configuring 00:08:05.881 14:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:06.139 [2024-07-12 14:56:31.911336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.139 [2024-07-12 14:56:31.911382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.139 [2024-07-12 14:56:31.911388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.139 [2024-07-12 14:56:31.911397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.139 14:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.398 [2024-07-12 14:56:32.164390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.398 BaseBdev1 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:06.398 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:06.656 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.914 [ 00:08:06.914 { 00:08:06.914 "name": "BaseBdev1", 00:08:06.914 "aliases": [ 00:08:06.914 "ed26feda-405e-11ef-b2a4-e9dca065e82e" 00:08:06.914 ], 00:08:06.914 "product_name": "Malloc disk", 00:08:06.914 "block_size": 512, 00:08:06.914 "num_blocks": 65536, 00:08:06.914 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:06.914 "assigned_rate_limits": { 00:08:06.914 "rw_ios_per_sec": 0, 00:08:06.914 "rw_mbytes_per_sec": 0, 00:08:06.914 "r_mbytes_per_sec": 0, 00:08:06.914 "w_mbytes_per_sec": 0 00:08:06.914 }, 
00:08:06.914 "claimed": true, 00:08:06.914 "claim_type": "exclusive_write", 00:08:06.914 "zoned": false, 00:08:06.914 "supported_io_types": { 00:08:06.914 "read": true, 00:08:06.914 "write": true, 00:08:06.915 "unmap": true, 00:08:06.915 "flush": true, 00:08:06.915 "reset": true, 00:08:06.915 "nvme_admin": false, 00:08:06.915 "nvme_io": false, 00:08:06.915 "nvme_io_md": false, 00:08:06.915 "write_zeroes": true, 00:08:06.915 "zcopy": true, 00:08:06.915 "get_zone_info": false, 00:08:06.915 "zone_management": false, 00:08:06.915 "zone_append": false, 00:08:06.915 "compare": false, 00:08:06.915 "compare_and_write": false, 00:08:06.915 "abort": true, 00:08:06.915 "seek_hole": false, 00:08:06.915 "seek_data": false, 00:08:06.915 "copy": true, 00:08:06.915 "nvme_iov_md": false 00:08:06.915 }, 00:08:06.915 "memory_domains": [ 00:08:06.915 { 00:08:06.915 "dma_device_id": "system", 00:08:06.915 "dma_device_type": 1 00:08:06.915 }, 00:08:06.915 { 00:08:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.915 "dma_device_type": 2 00:08:06.915 } 00:08:06.915 ], 00:08:06.915 "driver_specific": {} 00:08:06.915 } 00:08:06.915 ] 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:07.173 "name": "Existed_Raid", 00:08:07.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.173 "strip_size_kb": 64, 00:08:07.173 "state": "configuring", 00:08:07.173 "raid_level": "concat", 00:08:07.173 "superblock": false, 00:08:07.173 "num_base_bdevs": 2, 00:08:07.173 "num_base_bdevs_discovered": 1, 00:08:07.173 "num_base_bdevs_operational": 2, 00:08:07.173 "base_bdevs_list": [ 00:08:07.173 { 00:08:07.173 "name": "BaseBdev1", 00:08:07.173 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:07.173 "is_configured": true, 00:08:07.173 "data_offset": 0, 00:08:07.173 "data_size": 65536 00:08:07.173 }, 00:08:07.173 { 00:08:07.173 "name": "BaseBdev2", 00:08:07.173 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:07.173 "is_configured": false, 00:08:07.173 "data_offset": 0, 00:08:07.173 "data_size": 0 00:08:07.173 } 00:08:07.173 ] 00:08:07.173 }' 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:07.173 14:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.739 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:07.739 [2024-07-12 14:56:33.539689] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.739 [2024-07-12 14:56:33.539724] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x396a44834500 name Existed_Raid, state configuring 00:08:07.998 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:08.256 [2024-07-12 14:56:33.883783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.256 [2024-07-12 14:56:33.884621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.256 [2024-07-12 14:56:33.884661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.256 14:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.514 14:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:08.514 "name": "Existed_Raid", 00:08:08.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.514 "strip_size_kb": 64, 00:08:08.514 "state": "configuring", 00:08:08.514 "raid_level": "concat", 00:08:08.514 "superblock": false, 00:08:08.514 "num_base_bdevs": 2, 00:08:08.514 "num_base_bdevs_discovered": 1, 00:08:08.514 
"num_base_bdevs_operational": 2, 00:08:08.514 "base_bdevs_list": [ 00:08:08.514 { 00:08:08.514 "name": "BaseBdev1", 00:08:08.514 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:08.514 "is_configured": true, 00:08:08.514 "data_offset": 0, 00:08:08.514 "data_size": 65536 00:08:08.514 }, 00:08:08.514 { 00:08:08.514 "name": "BaseBdev2", 00:08:08.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.514 "is_configured": false, 00:08:08.514 "data_offset": 0, 00:08:08.514 "data_size": 0 00:08:08.514 } 00:08:08.514 ] 00:08:08.514 }' 00:08:08.514 14:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:08.514 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.772 14:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.031 [2024-07-12 14:56:34.768091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.031 [2024-07-12 14:56:34.768120] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x396a44834a00 00:08:09.031 [2024-07-12 14:56:34.768125] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:09.031 [2024-07-12 14:56:34.768147] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x396a44897e20 00:08:09.031 [2024-07-12 14:56:34.768239] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x396a44834a00 00:08:09.031 [2024-07-12 14:56:34.768243] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x396a44834a00 00:08:09.031 [2024-07-12 14:56:34.768280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.031 BaseBdev2 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:09.031 14:56:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:09.289 14:56:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.855 [ 00:08:09.855 { 00:08:09.855 "name": "BaseBdev2", 00:08:09.855 "aliases": [ 00:08:09.855 "eeb46b68-405e-11ef-b2a4-e9dca065e82e" 00:08:09.855 ], 00:08:09.855 "product_name": "Malloc disk", 00:08:09.855 "block_size": 512, 00:08:09.855 "num_blocks": 65536, 00:08:09.855 "uuid": "eeb46b68-405e-11ef-b2a4-e9dca065e82e", 00:08:09.855 "assigned_rate_limits": { 00:08:09.855 "rw_ios_per_sec": 0, 00:08:09.855 "rw_mbytes_per_sec": 0, 00:08:09.855 "r_mbytes_per_sec": 0, 00:08:09.855 "w_mbytes_per_sec": 0 00:08:09.855 }, 00:08:09.855 "claimed": true, 00:08:09.855 "claim_type": "exclusive_write", 00:08:09.855 "zoned": 
false, 00:08:09.855 "supported_io_types": { 00:08:09.855 "read": true, 00:08:09.855 "write": true, 00:08:09.855 "unmap": true, 00:08:09.855 "flush": true, 00:08:09.855 "reset": true, 00:08:09.855 "nvme_admin": false, 00:08:09.855 "nvme_io": false, 00:08:09.855 "nvme_io_md": false, 00:08:09.855 "write_zeroes": true, 00:08:09.855 "zcopy": true, 00:08:09.855 "get_zone_info": false, 00:08:09.855 "zone_management": false, 00:08:09.855 "zone_append": false, 00:08:09.855 "compare": false, 00:08:09.855 "compare_and_write": false, 00:08:09.855 "abort": true, 00:08:09.855 "seek_hole": false, 00:08:09.855 "seek_data": false, 00:08:09.855 "copy": true, 00:08:09.855 "nvme_iov_md": false 00:08:09.855 }, 00:08:09.855 "memory_domains": [ 00:08:09.855 { 00:08:09.855 "dma_device_id": "system", 00:08:09.855 "dma_device_type": 1 00:08:09.855 }, 00:08:09.855 { 00:08:09.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.855 "dma_device_type": 2 00:08:09.855 } 00:08:09.855 ], 00:08:09.855 "driver_specific": {} 00:08:09.855 } 00:08:09.855 ] 00:08:09.855 14:56:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:09.855 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:09.855 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:09.855 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:09.855 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:09.856 "name": "Existed_Raid", 00:08:09.856 "uuid": "eeb4721c-405e-11ef-b2a4-e9dca065e82e", 00:08:09.856 "strip_size_kb": 64, 00:08:09.856 "state": "online", 00:08:09.856 "raid_level": "concat", 00:08:09.856 "superblock": false, 00:08:09.856 "num_base_bdevs": 2, 00:08:09.856 "num_base_bdevs_discovered": 2, 00:08:09.856 "num_base_bdevs_operational": 2, 00:08:09.856 "base_bdevs_list": [ 00:08:09.856 { 00:08:09.856 "name": "BaseBdev1", 00:08:09.856 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:09.856 "is_configured": true, 00:08:09.856 "data_offset": 0, 00:08:09.856 "data_size": 65536 00:08:09.856 }, 00:08:09.856 { 
00:08:09.856 "name": "BaseBdev2", 00:08:09.856 "uuid": "eeb46b68-405e-11ef-b2a4-e9dca065e82e", 00:08:09.856 "is_configured": true, 00:08:09.856 "data_offset": 0, 00:08:09.856 "data_size": 65536 00:08:09.856 } 00:08:09.856 ] 00:08:09.856 }' 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:09.856 14:56:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:10.422 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:10.422 [2024-07-12 14:56:36.224295] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.680 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:10.680 "name": "Existed_Raid", 00:08:10.680 "aliases": [ 00:08:10.680 "eeb4721c-405e-11ef-b2a4-e9dca065e82e" 00:08:10.681 ], 00:08:10.681 "product_name": "Raid Volume", 00:08:10.681 "block_size": 512, 00:08:10.681 "num_blocks": 131072, 00:08:10.681 "uuid": "eeb4721c-405e-11ef-b2a4-e9dca065e82e", 00:08:10.681 "assigned_rate_limits": { 00:08:10.681 "rw_ios_per_sec": 0, 00:08:10.681 "rw_mbytes_per_sec": 0, 00:08:10.681 "r_mbytes_per_sec": 0, 00:08:10.681 "w_mbytes_per_sec": 0 00:08:10.681 }, 00:08:10.681 "claimed": false, 00:08:10.681 "zoned": false, 00:08:10.681 "supported_io_types": { 00:08:10.681 "read": true, 00:08:10.681 "write": true, 00:08:10.681 "unmap": true, 00:08:10.681 "flush": true, 00:08:10.681 "reset": true, 00:08:10.681 "nvme_admin": false, 00:08:10.681 "nvme_io": false, 00:08:10.681 "nvme_io_md": false, 00:08:10.681 "write_zeroes": true, 00:08:10.681 "zcopy": false, 00:08:10.681 "get_zone_info": false, 00:08:10.681 "zone_management": false, 00:08:10.681 "zone_append": false, 00:08:10.681 "compare": false, 00:08:10.681 "compare_and_write": false, 00:08:10.681 "abort": false, 00:08:10.681 "seek_hole": false, 00:08:10.681 "seek_data": false, 00:08:10.681 "copy": false, 00:08:10.681 "nvme_iov_md": false 00:08:10.681 }, 00:08:10.681 "memory_domains": [ 00:08:10.681 { 00:08:10.681 "dma_device_id": "system", 00:08:10.681 "dma_device_type": 1 00:08:10.681 }, 00:08:10.681 { 00:08:10.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.681 "dma_device_type": 2 00:08:10.681 }, 00:08:10.681 { 00:08:10.681 "dma_device_id": "system", 00:08:10.681 "dma_device_type": 1 00:08:10.681 }, 00:08:10.681 { 00:08:10.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.681 "dma_device_type": 2 00:08:10.681 } 00:08:10.681 ], 00:08:10.681 "driver_specific": { 00:08:10.681 "raid": { 00:08:10.681 "uuid": "eeb4721c-405e-11ef-b2a4-e9dca065e82e", 00:08:10.681 "strip_size_kb": 64, 00:08:10.681 "state": 
"online", 00:08:10.681 "raid_level": "concat", 00:08:10.681 "superblock": false, 00:08:10.681 "num_base_bdevs": 2, 00:08:10.681 "num_base_bdevs_discovered": 2, 00:08:10.681 "num_base_bdevs_operational": 2, 00:08:10.681 "base_bdevs_list": [ 00:08:10.681 { 00:08:10.681 "name": "BaseBdev1", 00:08:10.681 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:10.681 "is_configured": true, 00:08:10.681 "data_offset": 0, 00:08:10.681 "data_size": 65536 00:08:10.681 }, 00:08:10.681 { 00:08:10.681 "name": "BaseBdev2", 00:08:10.681 "uuid": "eeb46b68-405e-11ef-b2a4-e9dca065e82e", 00:08:10.681 "is_configured": true, 00:08:10.681 "data_offset": 0, 00:08:10.681 "data_size": 65536 00:08:10.681 } 00:08:10.681 ] 00:08:10.681 } 00:08:10.681 } 00:08:10.681 }' 00:08:10.681 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.681 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:10.681 BaseBdev2' 00:08:10.681 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:10.681 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:10.681 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:10.939 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:10.939 "name": "BaseBdev1", 00:08:10.939 "aliases": [ 00:08:10.939 "ed26feda-405e-11ef-b2a4-e9dca065e82e" 00:08:10.939 ], 00:08:10.939 "product_name": "Malloc disk", 00:08:10.939 "block_size": 512, 00:08:10.939 "num_blocks": 65536, 00:08:10.939 "uuid": "ed26feda-405e-11ef-b2a4-e9dca065e82e", 00:08:10.939 "assigned_rate_limits": { 00:08:10.939 "rw_ios_per_sec": 0, 00:08:10.939 "rw_mbytes_per_sec": 0, 00:08:10.939 "r_mbytes_per_sec": 0, 00:08:10.939 "w_mbytes_per_sec": 0 00:08:10.939 }, 00:08:10.939 "claimed": true, 00:08:10.939 "claim_type": "exclusive_write", 00:08:10.939 "zoned": false, 00:08:10.939 "supported_io_types": { 00:08:10.939 "read": true, 00:08:10.939 "write": true, 00:08:10.940 "unmap": true, 00:08:10.940 "flush": true, 00:08:10.940 "reset": true, 00:08:10.940 "nvme_admin": false, 00:08:10.940 "nvme_io": false, 00:08:10.940 "nvme_io_md": false, 00:08:10.940 "write_zeroes": true, 00:08:10.940 "zcopy": true, 00:08:10.940 "get_zone_info": false, 00:08:10.940 "zone_management": false, 00:08:10.940 "zone_append": false, 00:08:10.940 "compare": false, 00:08:10.940 "compare_and_write": false, 00:08:10.940 "abort": true, 00:08:10.940 "seek_hole": false, 00:08:10.940 "seek_data": false, 00:08:10.940 "copy": true, 00:08:10.940 "nvme_iov_md": false 00:08:10.940 }, 00:08:10.940 "memory_domains": [ 00:08:10.940 { 00:08:10.940 "dma_device_id": "system", 00:08:10.940 "dma_device_type": 1 00:08:10.940 }, 00:08:10.940 { 00:08:10.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.940 "dma_device_type": 2 00:08:10.940 } 00:08:10.940 ], 00:08:10.940 "driver_specific": {} 00:08:10.940 }' 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:10.940 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:11.198 "name": "BaseBdev2", 00:08:11.198 "aliases": [ 00:08:11.198 "eeb46b68-405e-11ef-b2a4-e9dca065e82e" 00:08:11.198 ], 00:08:11.198 "product_name": "Malloc disk", 00:08:11.198 "block_size": 512, 00:08:11.198 "num_blocks": 65536, 00:08:11.198 "uuid": "eeb46b68-405e-11ef-b2a4-e9dca065e82e", 00:08:11.198 "assigned_rate_limits": { 00:08:11.198 "rw_ios_per_sec": 0, 00:08:11.198 "rw_mbytes_per_sec": 0, 00:08:11.198 "r_mbytes_per_sec": 0, 00:08:11.198 "w_mbytes_per_sec": 0 00:08:11.198 }, 00:08:11.198 "claimed": true, 00:08:11.198 "claim_type": "exclusive_write", 00:08:11.198 "zoned": false, 00:08:11.198 "supported_io_types": { 00:08:11.198 "read": true, 00:08:11.198 "write": true, 00:08:11.198 "unmap": true, 00:08:11.198 "flush": true, 00:08:11.198 "reset": true, 00:08:11.198 "nvme_admin": false, 00:08:11.198 "nvme_io": false, 00:08:11.198 "nvme_io_md": false, 00:08:11.198 "write_zeroes": true, 00:08:11.198 "zcopy": true, 00:08:11.198 "get_zone_info": false, 00:08:11.198 "zone_management": false, 00:08:11.198 "zone_append": false, 00:08:11.198 "compare": false, 00:08:11.198 "compare_and_write": false, 00:08:11.198 "abort": true, 00:08:11.198 "seek_hole": false, 00:08:11.198 "seek_data": false, 00:08:11.198 "copy": true, 00:08:11.198 "nvme_iov_md": false 00:08:11.198 }, 00:08:11.198 "memory_domains": [ 00:08:11.198 { 00:08:11.198 "dma_device_id": "system", 00:08:11.198 "dma_device_type": 1 00:08:11.198 }, 00:08:11.198 { 00:08:11.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.198 "dma_device_type": 2 00:08:11.198 } 00:08:11.198 ], 00:08:11.198 "driver_specific": {} 00:08:11.198 }' 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:11.198 14:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:11.456 [2024-07-12 14:56:37.184456] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.456 [2024-07-12 14:56:37.184484] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.456 [2024-07-12 14:56:37.184499] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:11.456 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.457 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.714 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:11.714 "name": "Existed_Raid", 00:08:11.714 "uuid": "eeb4721c-405e-11ef-b2a4-e9dca065e82e", 00:08:11.714 "strip_size_kb": 64, 00:08:11.714 "state": "offline", 00:08:11.714 "raid_level": "concat", 00:08:11.714 "superblock": false, 00:08:11.714 
"num_base_bdevs": 2, 00:08:11.714 "num_base_bdevs_discovered": 1, 00:08:11.714 "num_base_bdevs_operational": 1, 00:08:11.714 "base_bdevs_list": [ 00:08:11.714 { 00:08:11.714 "name": null, 00:08:11.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.714 "is_configured": false, 00:08:11.714 "data_offset": 0, 00:08:11.714 "data_size": 65536 00:08:11.714 }, 00:08:11.714 { 00:08:11.714 "name": "BaseBdev2", 00:08:11.714 "uuid": "eeb46b68-405e-11ef-b2a4-e9dca065e82e", 00:08:11.714 "is_configured": true, 00:08:11.714 "data_offset": 0, 00:08:11.714 "data_size": 65536 00:08:11.714 } 00:08:11.714 ] 00:08:11.714 }' 00:08:11.715 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:11.715 14:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.973 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:11.973 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:11.973 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.973 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:12.231 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:12.231 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.231 14:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:12.489 [2024-07-12 14:56:38.186519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.489 [2024-07-12 14:56:38.186553] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x396a44834a00 name Existed_Raid, state offline 00:08:12.489 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:12.489 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:12.489 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.489 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49707 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49707 ']' 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49707 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49707 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:12.747 killing process with pid 49707 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49707' 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49707 00:08:12.747 [2024-07-12 14:56:38.504589] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.747 [2024-07-12 14:56:38.504624] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.747 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49707 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:13.005 00:08:13.005 real 0m9.196s 00:08:13.005 user 0m16.093s 00:08:13.005 sys 0m1.526s 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.005 ************************************ 00:08:13.005 END TEST raid_state_function_test 00:08:13.005 ************************************ 00:08:13.005 14:56:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:13.005 14:56:38 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:13.005 14:56:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:13.005 14:56:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.005 14:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.005 ************************************ 00:08:13.005 START TEST raid_state_function_test_sb 00:08:13.005 ************************************ 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49978 00:08:13.005 Process raid pid: 49978 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49978' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49978 /var/tmp/spdk-raid.sock 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49978 ']' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.005 14:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.005 [2024-07-12 14:56:38.746468] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:13.005 [2024-07-12 14:56:38.746660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:13.590 EAL: TSC is not safe to use in SMP mode 00:08:13.590 EAL: TSC is not invariant 00:08:13.590 [2024-07-12 14:56:39.281620] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.590 [2024-07-12 14:56:39.368205] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:13.590 [2024-07-12 14:56:39.370308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.591 [2024-07-12 14:56:39.371103] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.591 [2024-07-12 14:56:39.371118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.165 14:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.165 14:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:14.165 14:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:14.423 [2024-07-12 14:56:40.034631] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.423 [2024-07-12 14:56:40.034681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.423 [2024-07-12 14:56:40.034688] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.423 [2024-07-12 14:56:40.034697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:14.423 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:14.424 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:14.424 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:14.424 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.424 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.682 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.682 "name": "Existed_Raid", 00:08:14.682 "uuid": "f1d80c90-405e-11ef-b2a4-e9dca065e82e", 00:08:14.682 "strip_size_kb": 64, 00:08:14.682 "state": "configuring", 00:08:14.682 "raid_level": "concat", 00:08:14.682 "superblock": true, 00:08:14.682 "num_base_bdevs": 2, 00:08:14.682 "num_base_bdevs_discovered": 0, 00:08:14.682 "num_base_bdevs_operational": 2, 00:08:14.682 "base_bdevs_list": [ 00:08:14.682 { 00:08:14.682 "name": "BaseBdev1", 00:08:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.682 "is_configured": false, 00:08:14.682 "data_offset": 0, 00:08:14.682 "data_size": 0 00:08:14.682 }, 
00:08:14.682 { 00:08:14.682 "name": "BaseBdev2", 00:08:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.682 "is_configured": false, 00:08:14.682 "data_offset": 0, 00:08:14.682 "data_size": 0 00:08:14.682 } 00:08:14.682 ] 00:08:14.682 }' 00:08:14.682 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.682 14:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.940 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:15.198 [2024-07-12 14:56:40.830758] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.198 [2024-07-12 14:56:40.830788] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5bc87c34500 name Existed_Raid, state configuring 00:08:15.198 14:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:15.455 [2024-07-12 14:56:41.062820] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.455 [2024-07-12 14:56:41.062871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.455 [2024-07-12 14:56:41.062877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.455 [2024-07-12 14:56:41.062886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.455 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.713 [2024-07-12 14:56:41.295978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.713 BaseBdev1 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:15.713 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:15.971 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.971 [ 00:08:15.971 { 00:08:15.971 "name": "BaseBdev1", 00:08:15.971 "aliases": [ 00:08:15.971 "f2985895-405e-11ef-b2a4-e9dca065e82e" 00:08:15.971 ], 00:08:15.971 "product_name": "Malloc disk", 00:08:15.971 "block_size": 512, 00:08:15.971 "num_blocks": 65536, 00:08:15.971 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:15.971 "assigned_rate_limits": { 00:08:15.971 "rw_ios_per_sec": 0, 00:08:15.971 "rw_mbytes_per_sec": 
0, 00:08:15.971 "r_mbytes_per_sec": 0, 00:08:15.971 "w_mbytes_per_sec": 0 00:08:15.971 }, 00:08:15.971 "claimed": true, 00:08:15.971 "claim_type": "exclusive_write", 00:08:15.971 "zoned": false, 00:08:15.971 "supported_io_types": { 00:08:15.971 "read": true, 00:08:15.971 "write": true, 00:08:15.971 "unmap": true, 00:08:15.971 "flush": true, 00:08:15.971 "reset": true, 00:08:15.971 "nvme_admin": false, 00:08:15.971 "nvme_io": false, 00:08:15.971 "nvme_io_md": false, 00:08:15.971 "write_zeroes": true, 00:08:15.971 "zcopy": true, 00:08:15.971 "get_zone_info": false, 00:08:15.971 "zone_management": false, 00:08:15.971 "zone_append": false, 00:08:15.971 "compare": false, 00:08:15.971 "compare_and_write": false, 00:08:15.971 "abort": true, 00:08:15.971 "seek_hole": false, 00:08:15.971 "seek_data": false, 00:08:15.971 "copy": true, 00:08:15.971 "nvme_iov_md": false 00:08:15.971 }, 00:08:15.971 "memory_domains": [ 00:08:15.971 { 00:08:15.971 "dma_device_id": "system", 00:08:15.971 "dma_device_type": 1 00:08:15.971 }, 00:08:15.971 { 00:08:15.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.971 "dma_device_type": 2 00:08:15.971 } 00:08:15.971 ], 00:08:15.971 "driver_specific": {} 00:08:15.971 } 00:08:15.971 ] 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.230 14:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.488 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:16.488 "name": "Existed_Raid", 00:08:16.488 "uuid": "f274f02e-405e-11ef-b2a4-e9dca065e82e", 00:08:16.488 "strip_size_kb": 64, 00:08:16.488 "state": "configuring", 00:08:16.488 "raid_level": "concat", 00:08:16.488 "superblock": true, 00:08:16.488 "num_base_bdevs": 2, 00:08:16.488 "num_base_bdevs_discovered": 1, 00:08:16.488 "num_base_bdevs_operational": 2, 00:08:16.488 "base_bdevs_list": [ 00:08:16.488 { 00:08:16.488 "name": "BaseBdev1", 00:08:16.488 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:16.488 "is_configured": true, 00:08:16.488 "data_offset": 2048, 00:08:16.488 "data_size": 
63488 00:08:16.488 }, 00:08:16.488 { 00:08:16.488 "name": "BaseBdev2", 00:08:16.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.488 "is_configured": false, 00:08:16.488 "data_offset": 0, 00:08:16.488 "data_size": 0 00:08:16.488 } 00:08:16.488 ] 00:08:16.488 }' 00:08:16.488 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:16.488 14:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.746 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:17.004 [2024-07-12 14:56:42.659081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.004 [2024-07-12 14:56:42.659118] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5bc87c34500 name Existed_Raid, state configuring 00:08:17.004 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:17.261 [2024-07-12 14:56:42.895134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.261 [2024-07-12 14:56:42.895991] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.261 [2024-07-12 14:56:42.896028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.261 14:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.518 14:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.518 "name": "Existed_Raid", 00:08:17.518 "uuid": "f38c86c8-405e-11ef-b2a4-e9dca065e82e", 00:08:17.518 "strip_size_kb": 64, 00:08:17.518 
"state": "configuring", 00:08:17.518 "raid_level": "concat", 00:08:17.518 "superblock": true, 00:08:17.518 "num_base_bdevs": 2, 00:08:17.518 "num_base_bdevs_discovered": 1, 00:08:17.519 "num_base_bdevs_operational": 2, 00:08:17.519 "base_bdevs_list": [ 00:08:17.519 { 00:08:17.519 "name": "BaseBdev1", 00:08:17.519 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:17.519 "is_configured": true, 00:08:17.519 "data_offset": 2048, 00:08:17.519 "data_size": 63488 00:08:17.519 }, 00:08:17.519 { 00:08:17.519 "name": "BaseBdev2", 00:08:17.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.519 "is_configured": false, 00:08:17.519 "data_offset": 0, 00:08:17.519 "data_size": 0 00:08:17.519 } 00:08:17.519 ] 00:08:17.519 }' 00:08:17.519 14:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.519 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.776 14:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.034 [2024-07-12 14:56:43.747406] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.034 [2024-07-12 14:56:43.747477] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5bc87c34a00 00:08:18.034 [2024-07-12 14:56:43.747484] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.034 [2024-07-12 14:56:43.747506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5bc87c97e20 00:08:18.034 [2024-07-12 14:56:43.747571] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5bc87c34a00 00:08:18.034 [2024-07-12 14:56:43.747576] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5bc87c34a00 00:08:18.034 [2024-07-12 14:56:43.747597] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.034 BaseBdev2 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:18.034 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:18.291 14:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.550 [ 00:08:18.550 { 00:08:18.550 "name": "BaseBdev2", 00:08:18.550 "aliases": [ 00:08:18.550 "f40e8dcf-405e-11ef-b2a4-e9dca065e82e" 00:08:18.550 ], 00:08:18.550 "product_name": "Malloc disk", 00:08:18.550 "block_size": 512, 00:08:18.550 "num_blocks": 65536, 00:08:18.550 "uuid": "f40e8dcf-405e-11ef-b2a4-e9dca065e82e", 00:08:18.550 "assigned_rate_limits": { 00:08:18.550 "rw_ios_per_sec": 0, 
00:08:18.550 "rw_mbytes_per_sec": 0, 00:08:18.550 "r_mbytes_per_sec": 0, 00:08:18.550 "w_mbytes_per_sec": 0 00:08:18.550 }, 00:08:18.550 "claimed": true, 00:08:18.550 "claim_type": "exclusive_write", 00:08:18.550 "zoned": false, 00:08:18.550 "supported_io_types": { 00:08:18.550 "read": true, 00:08:18.550 "write": true, 00:08:18.550 "unmap": true, 00:08:18.550 "flush": true, 00:08:18.550 "reset": true, 00:08:18.550 "nvme_admin": false, 00:08:18.550 "nvme_io": false, 00:08:18.550 "nvme_io_md": false, 00:08:18.550 "write_zeroes": true, 00:08:18.550 "zcopy": true, 00:08:18.550 "get_zone_info": false, 00:08:18.550 "zone_management": false, 00:08:18.550 "zone_append": false, 00:08:18.550 "compare": false, 00:08:18.550 "compare_and_write": false, 00:08:18.550 "abort": true, 00:08:18.550 "seek_hole": false, 00:08:18.550 "seek_data": false, 00:08:18.550 "copy": true, 00:08:18.550 "nvme_iov_md": false 00:08:18.550 }, 00:08:18.550 "memory_domains": [ 00:08:18.550 { 00:08:18.550 "dma_device_id": "system", 00:08:18.550 "dma_device_type": 1 00:08:18.550 }, 00:08:18.550 { 00:08:18.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.550 "dma_device_type": 2 00:08:18.550 } 00:08:18.550 ], 00:08:18.550 "driver_specific": {} 00:08:18.550 } 00:08:18.550 ] 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.550 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.809 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:18.809 "name": "Existed_Raid", 00:08:18.809 "uuid": "f38c86c8-405e-11ef-b2a4-e9dca065e82e", 00:08:18.809 "strip_size_kb": 64, 00:08:18.809 "state": "online", 00:08:18.809 "raid_level": "concat", 00:08:18.809 "superblock": true, 00:08:18.809 "num_base_bdevs": 2, 00:08:18.809 "num_base_bdevs_discovered": 2, 00:08:18.809 "num_base_bdevs_operational": 2, 
00:08:18.809 "base_bdevs_list": [ 00:08:18.809 { 00:08:18.809 "name": "BaseBdev1", 00:08:18.809 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:18.809 "is_configured": true, 00:08:18.809 "data_offset": 2048, 00:08:18.809 "data_size": 63488 00:08:18.809 }, 00:08:18.809 { 00:08:18.809 "name": "BaseBdev2", 00:08:18.809 "uuid": "f40e8dcf-405e-11ef-b2a4-e9dca065e82e", 00:08:18.809 "is_configured": true, 00:08:18.809 "data_offset": 2048, 00:08:18.809 "data_size": 63488 00:08:18.809 } 00:08:18.809 ] 00:08:18.809 }' 00:08:18.809 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:18.809 14:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:19.067 14:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:19.326 [2024-07-12 14:56:45.119537] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.326 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:19.326 "name": "Existed_Raid", 00:08:19.326 "aliases": [ 00:08:19.326 "f38c86c8-405e-11ef-b2a4-e9dca065e82e" 00:08:19.326 ], 00:08:19.326 "product_name": "Raid Volume", 00:08:19.326 "block_size": 512, 00:08:19.326 "num_blocks": 126976, 00:08:19.326 "uuid": "f38c86c8-405e-11ef-b2a4-e9dca065e82e", 00:08:19.326 "assigned_rate_limits": { 00:08:19.326 "rw_ios_per_sec": 0, 00:08:19.326 "rw_mbytes_per_sec": 0, 00:08:19.326 "r_mbytes_per_sec": 0, 00:08:19.326 "w_mbytes_per_sec": 0 00:08:19.326 }, 00:08:19.326 "claimed": false, 00:08:19.326 "zoned": false, 00:08:19.326 "supported_io_types": { 00:08:19.326 "read": true, 00:08:19.326 "write": true, 00:08:19.326 "unmap": true, 00:08:19.326 "flush": true, 00:08:19.326 "reset": true, 00:08:19.326 "nvme_admin": false, 00:08:19.326 "nvme_io": false, 00:08:19.326 "nvme_io_md": false, 00:08:19.326 "write_zeroes": true, 00:08:19.326 "zcopy": false, 00:08:19.326 "get_zone_info": false, 00:08:19.326 "zone_management": false, 00:08:19.326 "zone_append": false, 00:08:19.326 "compare": false, 00:08:19.326 "compare_and_write": false, 00:08:19.326 "abort": false, 00:08:19.326 "seek_hole": false, 00:08:19.326 "seek_data": false, 00:08:19.326 "copy": false, 00:08:19.326 "nvme_iov_md": false 00:08:19.326 }, 00:08:19.326 "memory_domains": [ 00:08:19.326 { 00:08:19.326 "dma_device_id": "system", 00:08:19.326 "dma_device_type": 1 00:08:19.326 }, 00:08:19.326 { 00:08:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.326 "dma_device_type": 2 00:08:19.326 }, 00:08:19.326 { 00:08:19.326 "dma_device_id": "system", 00:08:19.326 "dma_device_type": 1 00:08:19.326 
}, 00:08:19.326 { 00:08:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.326 "dma_device_type": 2 00:08:19.326 } 00:08:19.326 ], 00:08:19.326 "driver_specific": { 00:08:19.326 "raid": { 00:08:19.326 "uuid": "f38c86c8-405e-11ef-b2a4-e9dca065e82e", 00:08:19.326 "strip_size_kb": 64, 00:08:19.326 "state": "online", 00:08:19.326 "raid_level": "concat", 00:08:19.326 "superblock": true, 00:08:19.326 "num_base_bdevs": 2, 00:08:19.326 "num_base_bdevs_discovered": 2, 00:08:19.326 "num_base_bdevs_operational": 2, 00:08:19.326 "base_bdevs_list": [ 00:08:19.326 { 00:08:19.326 "name": "BaseBdev1", 00:08:19.326 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:19.326 "is_configured": true, 00:08:19.326 "data_offset": 2048, 00:08:19.326 "data_size": 63488 00:08:19.326 }, 00:08:19.326 { 00:08:19.326 "name": "BaseBdev2", 00:08:19.326 "uuid": "f40e8dcf-405e-11ef-b2a4-e9dca065e82e", 00:08:19.326 "is_configured": true, 00:08:19.326 "data_offset": 2048, 00:08:19.326 "data_size": 63488 00:08:19.326 } 00:08:19.326 ] 00:08:19.326 } 00:08:19.326 } 00:08:19.326 }' 00:08:19.326 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.585 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:19.585 BaseBdev2' 00:08:19.585 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.585 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.585 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.843 "name": "BaseBdev1", 00:08:19.843 "aliases": [ 00:08:19.843 "f2985895-405e-11ef-b2a4-e9dca065e82e" 00:08:19.843 ], 00:08:19.843 "product_name": "Malloc disk", 00:08:19.843 "block_size": 512, 00:08:19.843 "num_blocks": 65536, 00:08:19.843 "uuid": "f2985895-405e-11ef-b2a4-e9dca065e82e", 00:08:19.843 "assigned_rate_limits": { 00:08:19.843 "rw_ios_per_sec": 0, 00:08:19.843 "rw_mbytes_per_sec": 0, 00:08:19.843 "r_mbytes_per_sec": 0, 00:08:19.843 "w_mbytes_per_sec": 0 00:08:19.843 }, 00:08:19.843 "claimed": true, 00:08:19.843 "claim_type": "exclusive_write", 00:08:19.843 "zoned": false, 00:08:19.843 "supported_io_types": { 00:08:19.843 "read": true, 00:08:19.843 "write": true, 00:08:19.843 "unmap": true, 00:08:19.843 "flush": true, 00:08:19.843 "reset": true, 00:08:19.843 "nvme_admin": false, 00:08:19.843 "nvme_io": false, 00:08:19.843 "nvme_io_md": false, 00:08:19.843 "write_zeroes": true, 00:08:19.843 "zcopy": true, 00:08:19.843 "get_zone_info": false, 00:08:19.843 "zone_management": false, 00:08:19.843 "zone_append": false, 00:08:19.843 "compare": false, 00:08:19.843 "compare_and_write": false, 00:08:19.843 "abort": true, 00:08:19.843 "seek_hole": false, 00:08:19.843 "seek_data": false, 00:08:19.843 "copy": true, 00:08:19.843 "nvme_iov_md": false 00:08:19.843 }, 00:08:19.843 "memory_domains": [ 00:08:19.843 { 00:08:19.843 "dma_device_id": "system", 00:08:19.843 "dma_device_type": 1 00:08:19.843 }, 00:08:19.843 { 00:08:19.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.843 "dma_device_type": 2 00:08:19.843 } 00:08:19.843 ], 00:08:19.843 "driver_specific": {} 00:08:19.843 }' 00:08:19.843 14:56:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:19.843 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.844 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:19.844 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:20.102 "name": "BaseBdev2", 00:08:20.102 "aliases": [ 00:08:20.102 "f40e8dcf-405e-11ef-b2a4-e9dca065e82e" 00:08:20.102 ], 00:08:20.102 "product_name": "Malloc disk", 00:08:20.102 "block_size": 512, 00:08:20.102 "num_blocks": 65536, 00:08:20.102 "uuid": "f40e8dcf-405e-11ef-b2a4-e9dca065e82e", 00:08:20.102 "assigned_rate_limits": { 00:08:20.102 "rw_ios_per_sec": 0, 00:08:20.102 "rw_mbytes_per_sec": 0, 00:08:20.102 "r_mbytes_per_sec": 0, 00:08:20.102 "w_mbytes_per_sec": 0 00:08:20.102 }, 00:08:20.102 "claimed": true, 00:08:20.102 "claim_type": "exclusive_write", 00:08:20.102 "zoned": false, 00:08:20.102 "supported_io_types": { 00:08:20.102 "read": true, 00:08:20.102 "write": true, 00:08:20.102 "unmap": true, 00:08:20.102 "flush": true, 00:08:20.102 "reset": true, 00:08:20.102 "nvme_admin": false, 00:08:20.102 "nvme_io": false, 00:08:20.102 "nvme_io_md": false, 00:08:20.102 "write_zeroes": true, 00:08:20.102 "zcopy": true, 00:08:20.102 "get_zone_info": false, 00:08:20.102 "zone_management": false, 00:08:20.102 "zone_append": false, 00:08:20.102 "compare": false, 00:08:20.102 "compare_and_write": false, 00:08:20.102 "abort": true, 00:08:20.102 "seek_hole": false, 00:08:20.102 "seek_data": false, 00:08:20.102 "copy": true, 00:08:20.102 "nvme_iov_md": false 00:08:20.102 }, 00:08:20.102 "memory_domains": [ 00:08:20.102 { 00:08:20.102 "dma_device_id": "system", 00:08:20.102 "dma_device_type": 1 00:08:20.102 }, 00:08:20.102 { 00:08:20.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.102 "dma_device_type": 2 00:08:20.102 } 00:08:20.102 ], 00:08:20.102 "driver_specific": {} 00:08:20.102 }' 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:20.102 14:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:20.362 [2024-07-12 14:56:45.995659] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.362 [2024-07-12 14:56:45.995690] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.362 [2024-07-12 14:56:45.995705] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:08:20.362 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.620 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.620 "name": "Existed_Raid", 00:08:20.620 "uuid": "f38c86c8-405e-11ef-b2a4-e9dca065e82e", 00:08:20.620 "strip_size_kb": 64, 00:08:20.620 "state": "offline", 00:08:20.620 "raid_level": "concat", 00:08:20.620 "superblock": true, 00:08:20.620 "num_base_bdevs": 2, 00:08:20.620 "num_base_bdevs_discovered": 1, 00:08:20.620 "num_base_bdevs_operational": 1, 00:08:20.620 "base_bdevs_list": [ 00:08:20.620 { 00:08:20.620 "name": null, 00:08:20.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.620 "is_configured": false, 00:08:20.620 "data_offset": 2048, 00:08:20.620 "data_size": 63488 00:08:20.620 }, 00:08:20.620 { 00:08:20.620 "name": "BaseBdev2", 00:08:20.620 "uuid": "f40e8dcf-405e-11ef-b2a4-e9dca065e82e", 00:08:20.620 "is_configured": true, 00:08:20.620 "data_offset": 2048, 00:08:20.620 "data_size": 63488 00:08:20.620 } 00:08:20.620 ] 00:08:20.620 }' 00:08:20.620 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.620 14:56:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.880 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:20.880 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:20.880 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.880 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:21.139 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:21.139 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.139 14:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:21.398 [2024-07-12 14:56:47.081707] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.398 [2024-07-12 14:56:47.081746] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5bc87c34a00 name Existed_Raid, state offline 00:08:21.398 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:21.398 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:21.398 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.398 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49978 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 49978 ']' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49978 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49978 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:21.657 killing process with pid 49978 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49978' 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49978 00:08:21.657 [2024-07-12 14:56:47.415248] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.657 [2024-07-12 14:56:47.415282] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.657 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49978 00:08:21.915 14:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:21.915 00:08:21.915 real 0m8.858s 00:08:21.915 user 0m15.490s 00:08:21.915 sys 0m1.444s 00:08:21.915 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.915 14:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.915 ************************************ 00:08:21.915 END TEST raid_state_function_test_sb 00:08:21.915 ************************************ 00:08:21.915 14:56:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:21.915 14:56:47 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:21.915 14:56:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:21.915 14:56:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.915 14:56:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.915 ************************************ 00:08:21.915 START TEST raid_superblock_test 00:08:21.915 ************************************ 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50252 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50252 /var/tmp/spdk-raid.sock 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50252 ']' 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:21.915 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.916 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:21.916 14:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:21.916 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.916 14:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.916 [2024-07-12 14:56:47.643311] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:21.916 [2024-07-12 14:56:47.643578] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:22.483 EAL: TSC is not safe to use in SMP mode 00:08:22.483 EAL: TSC is not invariant 00:08:22.483 [2024-07-12 14:56:48.199432] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.483 [2024-07-12 14:56:48.285058] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:22.483 [2024-07-12 14:56:48.287175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.483 [2024-07-12 14:56:48.287980] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.483 [2024-07-12 14:56:48.287998] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.048 14:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:23.306 malloc1 00:08:23.306 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.564 [2024-07-12 14:56:49.340080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.564 [2024-07-12 14:56:49.340148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.564 [2024-07-12 14:56:49.340161] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24510cc34780 00:08:23.564 [2024-07-12 14:56:49.340170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.564 [2024-07-12 14:56:49.341069] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.564 [2024-07-12 14:56:49.341091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.564 pt1 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.564 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.564 14:56:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:23.821 malloc2 00:08:23.821 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.079 [2024-07-12 14:56:49.808145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.079 [2024-07-12 14:56:49.808206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.079 [2024-07-12 14:56:49.808219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24510cc34c80 00:08:24.079 [2024-07-12 14:56:49.808227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.079 [2024-07-12 14:56:49.808909] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.079 [2024-07-12 14:56:49.808935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.079 pt2 00:08:24.079 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:24.079 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:24.079 14:56:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:24.337 [2024-07-12 14:56:50.044188] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.337 [2024-07-12 14:56:50.044760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.337 [2024-07-12 14:56:50.044820] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24510cc34f00 00:08:24.337 [2024-07-12 14:56:50.044827] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:24.337 [2024-07-12 14:56:50.044865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24510cc97e20 00:08:24.337 [2024-07-12 14:56:50.044937] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24510cc34f00 00:08:24.337 [2024-07-12 14:56:50.044942] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24510cc34f00 00:08:24.337 [2024-07-12 14:56:50.044969] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.337 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.596 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:24.596 "name": "raid_bdev1", 00:08:24.596 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:24.596 "strip_size_kb": 64, 00:08:24.596 "state": "online", 00:08:24.596 "raid_level": "concat", 00:08:24.596 "superblock": true, 00:08:24.596 "num_base_bdevs": 2, 00:08:24.596 "num_base_bdevs_discovered": 2, 00:08:24.596 "num_base_bdevs_operational": 2, 00:08:24.596 "base_bdevs_list": [ 00:08:24.596 { 00:08:24.596 "name": "pt1", 00:08:24.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.596 "is_configured": true, 00:08:24.596 "data_offset": 2048, 00:08:24.596 "data_size": 63488 00:08:24.596 }, 00:08:24.596 { 00:08:24.596 "name": "pt2", 00:08:24.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.596 "is_configured": true, 00:08:24.596 "data_offset": 2048, 00:08:24.596 "data_size": 63488 00:08:24.596 } 00:08:24.596 ] 00:08:24.596 }' 00:08:24.596 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:24.596 14:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:24.853 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:25.111 [2024-07-12 14:56:50.876336] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.111 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:25.111 "name": "raid_bdev1", 00:08:25.111 "aliases": [ 00:08:25.111 "f7cf62bc-405e-11ef-b2a4-e9dca065e82e" 00:08:25.111 ], 00:08:25.111 "product_name": "Raid Volume", 00:08:25.111 "block_size": 512, 00:08:25.111 "num_blocks": 126976, 00:08:25.111 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:25.111 "assigned_rate_limits": { 00:08:25.111 "rw_ios_per_sec": 0, 00:08:25.111 "rw_mbytes_per_sec": 0, 00:08:25.111 "r_mbytes_per_sec": 0, 00:08:25.111 "w_mbytes_per_sec": 0 00:08:25.111 }, 00:08:25.111 "claimed": false, 00:08:25.111 "zoned": false, 00:08:25.111 "supported_io_types": { 00:08:25.111 "read": true, 00:08:25.111 "write": true, 00:08:25.111 "unmap": true, 00:08:25.111 "flush": true, 00:08:25.111 "reset": true, 00:08:25.111 "nvme_admin": false, 00:08:25.111 "nvme_io": 
false, 00:08:25.111 "nvme_io_md": false, 00:08:25.111 "write_zeroes": true, 00:08:25.111 "zcopy": false, 00:08:25.111 "get_zone_info": false, 00:08:25.111 "zone_management": false, 00:08:25.111 "zone_append": false, 00:08:25.111 "compare": false, 00:08:25.111 "compare_and_write": false, 00:08:25.111 "abort": false, 00:08:25.111 "seek_hole": false, 00:08:25.111 "seek_data": false, 00:08:25.111 "copy": false, 00:08:25.111 "nvme_iov_md": false 00:08:25.111 }, 00:08:25.111 "memory_domains": [ 00:08:25.111 { 00:08:25.111 "dma_device_id": "system", 00:08:25.111 "dma_device_type": 1 00:08:25.111 }, 00:08:25.111 { 00:08:25.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.111 "dma_device_type": 2 00:08:25.111 }, 00:08:25.111 { 00:08:25.111 "dma_device_id": "system", 00:08:25.111 "dma_device_type": 1 00:08:25.111 }, 00:08:25.111 { 00:08:25.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.111 "dma_device_type": 2 00:08:25.111 } 00:08:25.111 ], 00:08:25.111 "driver_specific": { 00:08:25.111 "raid": { 00:08:25.111 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:25.112 "strip_size_kb": 64, 00:08:25.112 "state": "online", 00:08:25.112 "raid_level": "concat", 00:08:25.112 "superblock": true, 00:08:25.112 "num_base_bdevs": 2, 00:08:25.112 "num_base_bdevs_discovered": 2, 00:08:25.112 "num_base_bdevs_operational": 2, 00:08:25.112 "base_bdevs_list": [ 00:08:25.112 { 00:08:25.112 "name": "pt1", 00:08:25.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.112 "is_configured": true, 00:08:25.112 "data_offset": 2048, 00:08:25.112 "data_size": 63488 00:08:25.112 }, 00:08:25.112 { 00:08:25.112 "name": "pt2", 00:08:25.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.112 "is_configured": true, 00:08:25.112 "data_offset": 2048, 00:08:25.112 "data_size": 63488 00:08:25.112 } 00:08:25.112 ] 00:08:25.112 } 00:08:25.112 } 00:08:25.112 }' 00:08:25.112 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.112 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:25.112 pt2' 00:08:25.112 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:25.112 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:25.112 14:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:25.369 "name": "pt1", 00:08:25.369 "aliases": [ 00:08:25.369 "00000000-0000-0000-0000-000000000001" 00:08:25.369 ], 00:08:25.369 "product_name": "passthru", 00:08:25.369 "block_size": 512, 00:08:25.369 "num_blocks": 65536, 00:08:25.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.369 "assigned_rate_limits": { 00:08:25.369 "rw_ios_per_sec": 0, 00:08:25.369 "rw_mbytes_per_sec": 0, 00:08:25.369 "r_mbytes_per_sec": 0, 00:08:25.369 "w_mbytes_per_sec": 0 00:08:25.369 }, 00:08:25.369 "claimed": true, 00:08:25.369 "claim_type": "exclusive_write", 00:08:25.369 "zoned": false, 00:08:25.369 "supported_io_types": { 00:08:25.369 "read": true, 00:08:25.369 "write": true, 00:08:25.369 "unmap": true, 00:08:25.369 "flush": true, 00:08:25.369 "reset": true, 00:08:25.369 "nvme_admin": false, 00:08:25.369 "nvme_io": false, 00:08:25.369 "nvme_io_md": false, 00:08:25.369 "write_zeroes": true, 
00:08:25.369 "zcopy": true, 00:08:25.369 "get_zone_info": false, 00:08:25.369 "zone_management": false, 00:08:25.369 "zone_append": false, 00:08:25.369 "compare": false, 00:08:25.369 "compare_and_write": false, 00:08:25.369 "abort": true, 00:08:25.369 "seek_hole": false, 00:08:25.369 "seek_data": false, 00:08:25.369 "copy": true, 00:08:25.369 "nvme_iov_md": false 00:08:25.369 }, 00:08:25.369 "memory_domains": [ 00:08:25.369 { 00:08:25.369 "dma_device_id": "system", 00:08:25.369 "dma_device_type": 1 00:08:25.369 }, 00:08:25.369 { 00:08:25.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.369 "dma_device_type": 2 00:08:25.369 } 00:08:25.369 ], 00:08:25.369 "driver_specific": { 00:08:25.369 "passthru": { 00:08:25.369 "name": "pt1", 00:08:25.369 "base_bdev_name": "malloc1" 00:08:25.369 } 00:08:25.369 } 00:08:25.369 }' 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:25.369 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:25.628 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:25.628 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:25.629 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:25.629 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:25.886 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:25.886 "name": "pt2", 00:08:25.886 "aliases": [ 00:08:25.886 "00000000-0000-0000-0000-000000000002" 00:08:25.887 ], 00:08:25.887 "product_name": "passthru", 00:08:25.887 "block_size": 512, 00:08:25.887 "num_blocks": 65536, 00:08:25.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.887 "assigned_rate_limits": { 00:08:25.887 "rw_ios_per_sec": 0, 00:08:25.887 "rw_mbytes_per_sec": 0, 00:08:25.887 "r_mbytes_per_sec": 0, 00:08:25.887 "w_mbytes_per_sec": 0 00:08:25.887 }, 00:08:25.887 "claimed": true, 00:08:25.887 "claim_type": "exclusive_write", 00:08:25.887 "zoned": false, 00:08:25.887 "supported_io_types": { 00:08:25.887 "read": true, 00:08:25.887 "write": true, 00:08:25.887 "unmap": true, 00:08:25.887 "flush": true, 00:08:25.887 "reset": true, 00:08:25.887 "nvme_admin": false, 00:08:25.887 "nvme_io": false, 00:08:25.887 "nvme_io_md": false, 00:08:25.887 "write_zeroes": true, 00:08:25.887 "zcopy": true, 00:08:25.887 "get_zone_info": false, 00:08:25.887 "zone_management": false, 00:08:25.887 "zone_append": false, 00:08:25.887 
"compare": false, 00:08:25.887 "compare_and_write": false, 00:08:25.887 "abort": true, 00:08:25.887 "seek_hole": false, 00:08:25.887 "seek_data": false, 00:08:25.887 "copy": true, 00:08:25.887 "nvme_iov_md": false 00:08:25.887 }, 00:08:25.887 "memory_domains": [ 00:08:25.887 { 00:08:25.887 "dma_device_id": "system", 00:08:25.887 "dma_device_type": 1 00:08:25.887 }, 00:08:25.887 { 00:08:25.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.887 "dma_device_type": 2 00:08:25.887 } 00:08:25.887 ], 00:08:25.887 "driver_specific": { 00:08:25.887 "passthru": { 00:08:25.887 "name": "pt2", 00:08:25.887 "base_bdev_name": "malloc2" 00:08:25.887 } 00:08:25.887 } 00:08:25.887 }' 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:25.887 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:26.145 [2024-07-12 14:56:51.772455] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.145 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f7cf62bc-405e-11ef-b2a4-e9dca065e82e 00:08:26.145 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f7cf62bc-405e-11ef-b2a4-e9dca065e82e ']' 00:08:26.145 14:56:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:26.404 [2024-07-12 14:56:52.052455] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.404 [2024-07-12 14:56:52.052479] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.404 [2024-07-12 14:56:52.052501] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.404 [2024-07-12 14:56:52.052513] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.404 [2024-07-12 14:56:52.052517] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24510cc34f00 name raid_bdev1, state offline 00:08:26.404 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:26.404 14:56:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.661 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:26.661 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:26.662 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:26.662 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:26.928 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:26.928 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:27.205 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:27.205 14:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:27.465 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:27.723 [2024-07-12 14:56:53.328645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.723 [2024-07-12 14:56:53.329248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:27.723 [2024-07-12 14:56:53.329275] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:27.723 [2024-07-12 14:56:53.329314] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:27.723 [2024-07-12 14:56:53.329325] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.723 [2024-07-12 14:56:53.329330] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24510cc34c80 name raid_bdev1, state configuring 00:08:27.723 request: 00:08:27.723 { 00:08:27.723 "name": "raid_bdev1", 00:08:27.723 "raid_level": "concat", 00:08:27.723 "base_bdevs": [ 00:08:27.723 "malloc1", 00:08:27.723 "malloc2" 00:08:27.723 ], 00:08:27.723 "strip_size_kb": 64, 00:08:27.723 "superblock": false, 00:08:27.723 "method": "bdev_raid_create", 00:08:27.723 "req_id": 1 00:08:27.723 } 00:08:27.723 Got JSON-RPC error response 00:08:27.723 response: 00:08:27.723 { 00:08:27.723 "code": -17, 00:08:27.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:27.723 } 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.723 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:27.980 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:27.980 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:27.980 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:28.238 [2024-07-12 14:56:53.824696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:28.238 [2024-07-12 14:56:53.824745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.238 [2024-07-12 14:56:53.824756] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24510cc34780 00:08:28.238 [2024-07-12 14:56:53.824764] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.238 [2024-07-12 14:56:53.825409] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.238 [2024-07-12 14:56:53.825434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:28.238 [2024-07-12 14:56:53.825458] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:28.238 [2024-07-12 14:56:53.825470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:28.238 pt1 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:28.238 14:56:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.238 14:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.496 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:28.496 "name": "raid_bdev1", 00:08:28.496 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:28.496 "strip_size_kb": 64, 00:08:28.496 "state": "configuring", 00:08:28.496 "raid_level": "concat", 00:08:28.496 "superblock": true, 00:08:28.496 "num_base_bdevs": 2, 00:08:28.496 "num_base_bdevs_discovered": 1, 00:08:28.496 "num_base_bdevs_operational": 2, 00:08:28.496 "base_bdevs_list": [ 00:08:28.496 { 00:08:28.496 "name": "pt1", 00:08:28.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.496 "is_configured": true, 00:08:28.496 "data_offset": 2048, 00:08:28.496 "data_size": 63488 00:08:28.496 }, 00:08:28.496 { 00:08:28.496 "name": null, 00:08:28.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.496 "is_configured": false, 00:08:28.497 "data_offset": 2048, 00:08:28.497 "data_size": 63488 00:08:28.497 } 00:08:28.497 ] 00:08:28.497 }' 00:08:28.497 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:28.497 14:56:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.755 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:28.755 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:28.755 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:28.755 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.013 [2024-07-12 14:56:54.612803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.013 [2024-07-12 14:56:54.612857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.013 [2024-07-12 14:56:54.612869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24510cc34f00 00:08:29.013 [2024-07-12 14:56:54.612877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.013 [2024-07-12 14:56:54.612991] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.013 [2024-07-12 14:56:54.613003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.013 [2024-07-12 14:56:54.613026] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt2 00:08:29.013 [2024-07-12 14:56:54.613035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.013 [2024-07-12 14:56:54.613060] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24510cc35180 00:08:29.013 [2024-07-12 14:56:54.613065] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:29.013 [2024-07-12 14:56:54.613094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24510cc97e20 00:08:29.013 [2024-07-12 14:56:54.613150] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24510cc35180 00:08:29.013 [2024-07-12 14:56:54.613155] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24510cc35180 00:08:29.013 [2024-07-12 14:56:54.613178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.013 pt2 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.013 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.272 14:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.272 "name": "raid_bdev1", 00:08:29.272 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:29.272 "strip_size_kb": 64, 00:08:29.272 "state": "online", 00:08:29.272 "raid_level": "concat", 00:08:29.272 "superblock": true, 00:08:29.272 "num_base_bdevs": 2, 00:08:29.272 "num_base_bdevs_discovered": 2, 00:08:29.272 "num_base_bdevs_operational": 2, 00:08:29.272 "base_bdevs_list": [ 00:08:29.272 { 00:08:29.272 "name": "pt1", 00:08:29.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.272 "is_configured": true, 00:08:29.272 "data_offset": 2048, 00:08:29.272 "data_size": 63488 00:08:29.272 }, 00:08:29.272 { 00:08:29.272 "name": "pt2", 00:08:29.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.272 "is_configured": true, 00:08:29.272 "data_offset": 2048, 00:08:29.272 "data_size": 63488 00:08:29.272 } 00:08:29.272 ] 00:08:29.272 }' 00:08:29.272 14:56:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.272 14:56:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:29.529 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:29.787 [2024-07-12 14:56:55.456953] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:29.787 "name": "raid_bdev1", 00:08:29.787 "aliases": [ 00:08:29.787 "f7cf62bc-405e-11ef-b2a4-e9dca065e82e" 00:08:29.787 ], 00:08:29.787 "product_name": "Raid Volume", 00:08:29.787 "block_size": 512, 00:08:29.787 "num_blocks": 126976, 00:08:29.787 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:29.787 "assigned_rate_limits": { 00:08:29.787 "rw_ios_per_sec": 0, 00:08:29.787 "rw_mbytes_per_sec": 0, 00:08:29.787 "r_mbytes_per_sec": 0, 00:08:29.787 "w_mbytes_per_sec": 0 00:08:29.787 }, 00:08:29.787 "claimed": false, 00:08:29.787 "zoned": false, 00:08:29.787 "supported_io_types": { 00:08:29.787 "read": true, 00:08:29.787 "write": true, 00:08:29.787 "unmap": true, 00:08:29.787 "flush": true, 00:08:29.787 "reset": true, 00:08:29.787 "nvme_admin": false, 00:08:29.787 "nvme_io": false, 00:08:29.787 "nvme_io_md": false, 00:08:29.787 "write_zeroes": true, 00:08:29.787 "zcopy": false, 00:08:29.787 "get_zone_info": false, 00:08:29.787 "zone_management": false, 00:08:29.787 "zone_append": false, 00:08:29.787 "compare": false, 00:08:29.787 "compare_and_write": false, 00:08:29.787 "abort": false, 00:08:29.787 "seek_hole": false, 00:08:29.787 "seek_data": false, 00:08:29.787 "copy": false, 00:08:29.787 "nvme_iov_md": false 00:08:29.787 }, 00:08:29.787 "memory_domains": [ 00:08:29.787 { 00:08:29.787 "dma_device_id": "system", 00:08:29.787 "dma_device_type": 1 00:08:29.787 }, 00:08:29.787 { 00:08:29.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.787 "dma_device_type": 2 00:08:29.787 }, 00:08:29.787 { 00:08:29.787 "dma_device_id": "system", 00:08:29.787 "dma_device_type": 1 00:08:29.787 }, 00:08:29.787 { 00:08:29.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.787 "dma_device_type": 2 00:08:29.787 } 00:08:29.787 ], 00:08:29.787 "driver_specific": { 00:08:29.787 "raid": { 00:08:29.787 "uuid": "f7cf62bc-405e-11ef-b2a4-e9dca065e82e", 00:08:29.787 "strip_size_kb": 64, 00:08:29.787 "state": "online", 00:08:29.787 "raid_level": "concat", 00:08:29.787 "superblock": true, 00:08:29.787 "num_base_bdevs": 2, 00:08:29.787 "num_base_bdevs_discovered": 2, 00:08:29.787 "num_base_bdevs_operational": 2, 00:08:29.787 "base_bdevs_list": [ 00:08:29.787 { 00:08:29.787 "name": "pt1", 00:08:29.787 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:29.787 "is_configured": true, 00:08:29.787 "data_offset": 2048, 00:08:29.787 "data_size": 63488 00:08:29.787 }, 00:08:29.787 { 00:08:29.787 "name": "pt2", 00:08:29.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.787 "is_configured": true, 00:08:29.787 "data_offset": 2048, 00:08:29.787 "data_size": 63488 00:08:29.787 } 00:08:29.787 ] 00:08:29.787 } 00:08:29.787 } 00:08:29.787 }' 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:29.787 pt2' 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:29.787 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:30.046 "name": "pt1", 00:08:30.046 "aliases": [ 00:08:30.046 "00000000-0000-0000-0000-000000000001" 00:08:30.046 ], 00:08:30.046 "product_name": "passthru", 00:08:30.046 "block_size": 512, 00:08:30.046 "num_blocks": 65536, 00:08:30.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.046 "assigned_rate_limits": { 00:08:30.046 "rw_ios_per_sec": 0, 00:08:30.046 "rw_mbytes_per_sec": 0, 00:08:30.046 "r_mbytes_per_sec": 0, 00:08:30.046 "w_mbytes_per_sec": 0 00:08:30.046 }, 00:08:30.046 "claimed": true, 00:08:30.046 "claim_type": "exclusive_write", 00:08:30.046 "zoned": false, 00:08:30.046 "supported_io_types": { 00:08:30.046 "read": true, 00:08:30.046 "write": true, 00:08:30.046 "unmap": true, 00:08:30.046 "flush": true, 00:08:30.046 "reset": true, 00:08:30.046 "nvme_admin": false, 00:08:30.046 "nvme_io": false, 00:08:30.046 "nvme_io_md": false, 00:08:30.046 "write_zeroes": true, 00:08:30.046 "zcopy": true, 00:08:30.046 "get_zone_info": false, 00:08:30.046 "zone_management": false, 00:08:30.046 "zone_append": false, 00:08:30.046 "compare": false, 00:08:30.046 "compare_and_write": false, 00:08:30.046 "abort": true, 00:08:30.046 "seek_hole": false, 00:08:30.046 "seek_data": false, 00:08:30.046 "copy": true, 00:08:30.046 "nvme_iov_md": false 00:08:30.046 }, 00:08:30.046 "memory_domains": [ 00:08:30.046 { 00:08:30.046 "dma_device_id": "system", 00:08:30.046 "dma_device_type": 1 00:08:30.046 }, 00:08:30.046 { 00:08:30.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.046 "dma_device_type": 2 00:08:30.046 } 00:08:30.046 ], 00:08:30.046 "driver_specific": { 00:08:30.046 "passthru": { 00:08:30.046 "name": "pt1", 00:08:30.046 "base_bdev_name": "malloc1" 00:08:30.046 } 00:08:30.046 } 00:08:30.046 }' 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:30.046 14:56:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:30.046 14:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:30.304 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:30.304 "name": "pt2", 00:08:30.304 "aliases": [ 00:08:30.304 "00000000-0000-0000-0000-000000000002" 00:08:30.304 ], 00:08:30.304 "product_name": "passthru", 00:08:30.304 "block_size": 512, 00:08:30.304 "num_blocks": 65536, 00:08:30.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.304 "assigned_rate_limits": { 00:08:30.304 "rw_ios_per_sec": 0, 00:08:30.304 "rw_mbytes_per_sec": 0, 00:08:30.304 "r_mbytes_per_sec": 0, 00:08:30.304 "w_mbytes_per_sec": 0 00:08:30.304 }, 00:08:30.304 "claimed": true, 00:08:30.305 "claim_type": "exclusive_write", 00:08:30.305 "zoned": false, 00:08:30.305 "supported_io_types": { 00:08:30.305 "read": true, 00:08:30.305 "write": true, 00:08:30.305 "unmap": true, 00:08:30.305 "flush": true, 00:08:30.305 "reset": true, 00:08:30.305 "nvme_admin": false, 00:08:30.305 "nvme_io": false, 00:08:30.305 "nvme_io_md": false, 00:08:30.305 "write_zeroes": true, 00:08:30.305 "zcopy": true, 00:08:30.305 "get_zone_info": false, 00:08:30.305 "zone_management": false, 00:08:30.305 "zone_append": false, 00:08:30.305 "compare": false, 00:08:30.305 "compare_and_write": false, 00:08:30.305 "abort": true, 00:08:30.305 "seek_hole": false, 00:08:30.305 "seek_data": false, 00:08:30.305 "copy": true, 00:08:30.305 "nvme_iov_md": false 00:08:30.305 }, 00:08:30.305 "memory_domains": [ 00:08:30.305 { 00:08:30.305 "dma_device_id": "system", 00:08:30.305 "dma_device_type": 1 00:08:30.305 }, 00:08:30.305 { 00:08:30.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.305 "dma_device_type": 2 00:08:30.305 } 00:08:30.305 ], 00:08:30.305 "driver_specific": { 00:08:30.305 "passthru": { 00:08:30.305 "name": "pt2", 00:08:30.305 "base_bdev_name": "malloc2" 00:08:30.305 } 00:08:30.305 } 00:08:30.305 }' 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:30.305 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:30.564 [2024-07-12 14:56:56.337095] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f7cf62bc-405e-11ef-b2a4-e9dca065e82e '!=' f7cf62bc-405e-11ef-b2a4-e9dca065e82e ']' 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50252 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50252 ']' 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50252 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50252 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:30.564 killing process with pid 50252 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50252' 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50252 00:08:30.564 [2024-07-12 14:56:56.365554] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.564 [2024-07-12 14:56:56.365579] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.564 [2024-07-12 14:56:56.365590] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.564 [2024-07-12 14:56:56.365594] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24510cc35180 name raid_bdev1, state offline 00:08:30.564 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50252 00:08:30.564 [2024-07-12 14:56:56.377140] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.823 14:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:30.823 00:08:30.823 real 0m8.921s 00:08:30.823 user 0m15.510s 00:08:30.823 sys 0m1.557s 00:08:30.823 14:56:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.823 14:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 ************************************ 00:08:30.823 END TEST raid_superblock_test 00:08:30.823 ************************************ 00:08:30.823 14:56:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:30.823 14:56:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:30.823 14:56:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:30.823 14:56:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.823 14:56:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 ************************************ 00:08:30.823 START TEST raid_read_error_test 00:08:30.823 ************************************ 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hEZjtKQ9jp 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50517 00:08:30.823 14:56:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50517 /var/tmp/spdk-raid.sock 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50517 ']' 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.823 14:56:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.823 [2024-07-12 14:56:56.620845] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:30.823 [2024-07-12 14:56:56.621031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:31.391 EAL: TSC is not safe to use in SMP mode 00:08:31.391 EAL: TSC is not invariant 00:08:31.391 [2024-07-12 14:56:57.148719] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.650 [2024-07-12 14:56:57.230259] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
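The trace above shows raid_read_error_test launching a dedicated bdevperf application in wait-for-RPC mode (-z) on /var/tmp/spdk-raid.sock and then calling waitforlisten, so no bdev RPCs are issued before the socket is ready. A minimal sketch of that launch step, assembled from the xtrace output (backgrounding the process and redirecting its output into the mktemp log are reconstructions, and the paths are specific to this CI host):

    # Log file for bdevperf output; the failure rate is parsed out of it at the end of the test.
    bdevperf_log=$(mktemp -p /raidtest)
    # -r selects the RPC socket, -T restricts the run to raid_bdev1, -t 60 runs for 60 seconds,
    # -q 1 -o 128k sets queue depth and I/O size, and -z holds I/O until the perform_tests
    # RPC arrives; -L bdev_raid enables the DEBUG messages seen throughout this log.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    # waitforlisten (from autotest_common.sh) polls until the UNIX-domain RPC socket accepts connections.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock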
00:08:31.650 [2024-07-12 14:56:57.232399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.650 [2024-07-12 14:56:57.233206] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.650 [2024-07-12 14:56:57.233223] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.909 14:56:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.909 14:56:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:31.909 14:56:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:31.909 14:56:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.168 BaseBdev1_malloc 00:08:32.168 14:56:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:32.736 true 00:08:32.736 14:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.736 [2024-07-12 14:56:58.549135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.736 [2024-07-12 14:56:58.549200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.736 [2024-07-12 14:56:58.549226] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x88b4da34780 00:08:32.736 [2024-07-12 14:56:58.549235] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.737 [2024-07-12 14:56:58.549899] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.737 [2024-07-12 14:56:58.549924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.737 BaseBdev1 00:08:32.995 14:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:32.995 14:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.995 BaseBdev2_malloc 00:08:32.995 14:56:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:33.254 true 00:08:33.254 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:33.512 [2024-07-12 14:56:59.305222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:33.512 [2024-07-12 14:56:59.305273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.512 [2024-07-12 14:56:59.305300] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x88b4da34c80 00:08:33.512 [2024-07-12 14:56:59.305309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.512 [2024-07-12 14:56:59.305983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.512 [2024-07-12 14:56:59.306008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:08:33.512 BaseBdev2 00:08:33.512 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:33.771 [2024-07-12 14:56:59.529262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.771 [2024-07-12 14:56:59.529877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.771 [2024-07-12 14:56:59.529943] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x88b4da34f00 00:08:33.771 [2024-07-12 14:56:59.529950] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:33.771 [2024-07-12 14:56:59.529982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x88b4daa0e20 00:08:33.771 [2024-07-12 14:56:59.530056] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x88b4da34f00 00:08:33.771 [2024-07-12 14:56:59.530061] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x88b4da34f00 00:08:33.771 [2024-07-12 14:56:59.530089] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:33.771 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.030 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.030 "name": "raid_bdev1", 00:08:34.030 "uuid": "fd76b180-405e-11ef-b2a4-e9dca065e82e", 00:08:34.030 "strip_size_kb": 64, 00:08:34.030 "state": "online", 00:08:34.030 "raid_level": "concat", 00:08:34.030 "superblock": true, 00:08:34.030 "num_base_bdevs": 2, 00:08:34.030 "num_base_bdevs_discovered": 2, 00:08:34.030 "num_base_bdevs_operational": 2, 00:08:34.030 "base_bdevs_list": [ 00:08:34.030 { 00:08:34.030 "name": "BaseBdev1", 00:08:34.030 "uuid": "cb5bc28b-d39a-6452-b171-664e8c062a1d", 00:08:34.030 "is_configured": true, 00:08:34.030 "data_offset": 2048, 00:08:34.030 "data_size": 63488 00:08:34.030 }, 00:08:34.030 { 00:08:34.030 "name": "BaseBdev2", 00:08:34.030 "uuid": "3d636bee-d6b5-ba51-b061-9a286e24ff32", 00:08:34.030 
"is_configured": true, 00:08:34.030 "data_offset": 2048, 00:08:34.030 "data_size": 63488 00:08:34.030 } 00:08:34.030 ] 00:08:34.030 }' 00:08:34.030 14:56:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.030 14:56:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.606 14:57:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:34.606 14:57:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:34.606 [2024-07-12 14:57:00.229536] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x88b4daa0ec0 00:08:35.539 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.798 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.056 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:36.056 "name": "raid_bdev1", 00:08:36.056 "uuid": "fd76b180-405e-11ef-b2a4-e9dca065e82e", 00:08:36.056 "strip_size_kb": 64, 00:08:36.056 "state": "online", 00:08:36.056 "raid_level": "concat", 00:08:36.056 "superblock": true, 00:08:36.056 "num_base_bdevs": 2, 00:08:36.056 "num_base_bdevs_discovered": 2, 00:08:36.056 "num_base_bdevs_operational": 2, 00:08:36.056 "base_bdevs_list": [ 00:08:36.056 { 00:08:36.056 "name": "BaseBdev1", 00:08:36.056 "uuid": "cb5bc28b-d39a-6452-b171-664e8c062a1d", 00:08:36.056 "is_configured": true, 00:08:36.056 "data_offset": 2048, 00:08:36.056 "data_size": 63488 00:08:36.056 }, 00:08:36.056 { 00:08:36.056 "name": "BaseBdev2", 00:08:36.056 "uuid": "3d636bee-d6b5-ba51-b061-9a286e24ff32", 00:08:36.056 "is_configured": true, 
00:08:36.056 "data_offset": 2048, 00:08:36.056 "data_size": 63488 00:08:36.056 } 00:08:36.056 ] 00:08:36.056 }' 00:08:36.056 14:57:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:36.056 14:57:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.314 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:36.572 [2024-07-12 14:57:02.255022] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.572 [2024-07-12 14:57:02.255051] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.572 [2024-07-12 14:57:02.255404] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.572 [2024-07-12 14:57:02.255414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.572 [2024-07-12 14:57:02.255422] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.572 [2024-07-12 14:57:02.255426] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88b4da34f00 name raid_bdev1, state offline 00:08:36.572 0 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50517 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50517 ']' 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50517 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50517 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:36.572 killing process with pid 50517 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50517' 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50517 00:08:36.572 [2024-07-12 14:57:02.281201] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.572 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50517 00:08:36.572 [2024-07-12 14:57:02.292578] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hEZjtKQ9jp 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 
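The sequence above is how raid_read_error_test scores itself: it strips the per-job summary from the bdevperf log, keeps the raid_bdev1 row, and reads the failures-per-second column; because concat has no redundancy (has_redundancy returns 1), the injected read errors are expected to surface as a non-zero rate. A minimal sketch of that check, using the column position and the temp log path printed in this run (both specific to this log):

  # fail_per_s extraction as seen above; the log file is the one mktemp'd
  # for bdevperf earlier in the test (path below is this run's).
  bdevperf_log=/raidtest/tmp.hEZjtKQ9jp
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  # concat cannot mask the injected read failures, so the rate must not be 0.00
  [[ "$fail_per_s" != "0.00" ]]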
00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:08:36.830 00:08:36.830 real 0m5.871s 00:08:36.830 user 0m9.008s 00:08:36.830 sys 0m0.995s 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.830 14:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.830 ************************************ 00:08:36.830 END TEST raid_read_error_test 00:08:36.830 ************************************ 00:08:36.830 14:57:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:36.830 14:57:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:36.830 14:57:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:36.830 14:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.830 14:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.830 ************************************ 00:08:36.830 START TEST raid_write_error_test 00:08:36.830 ************************************ 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # 
mktemp -p /raidtest 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.E9nRt25E4E 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50645 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50645 /var/tmp/spdk-raid.sock 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50645 ']' 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.830 14:57:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.830 [2024-07-12 14:57:02.531947] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:36.830 [2024-07-12 14:57:02.532154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:37.397 EAL: TSC is not safe to use in SMP mode 00:08:37.397 EAL: TSC is not invariant 00:08:37.397 [2024-07-12 14:57:03.083107] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.397 [2024-07-12 14:57:03.170314] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
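For the write-error variant, the harness again starts a dedicated bdevperf on its own RPC socket, logging into a fresh mktemp file, and only talks to it once the socket is listening. A rough sketch of that launch using just the flags visible in the command line above (the redirection of bdevperf output into the mktemp'd log is an assumption about how the script captures it):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-raid.sock
  bdevperf_log=$(mktemp -p /raidtest)
  # 60s randrw at a 50/50 mix, 128k I/Os, queue depth 1; -z waits for RPC
  # configuration and -L bdev_raid enables the raid debug logging seen here.
  "$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw \
      -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  # waitforlisten then blocks until the UNIX socket answers before any rpc.py calls.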
00:08:37.397 [2024-07-12 14:57:03.172461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.397 [2024-07-12 14:57:03.173225] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.398 [2024-07-12 14:57:03.173239] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.986 14:57:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.986 14:57:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:37.986 14:57:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:37.986 14:57:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:38.244 BaseBdev1_malloc 00:08:38.244 14:57:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:38.503 true 00:08:38.503 14:57:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:38.760 [2024-07-12 14:57:04.385309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:38.760 [2024-07-12 14:57:04.385363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.760 [2024-07-12 14:57:04.385390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe347e834780 00:08:38.760 [2024-07-12 14:57:04.385399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.760 [2024-07-12 14:57:04.386087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.761 [2024-07-12 14:57:04.386115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:38.761 BaseBdev1 00:08:38.761 14:57:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:38.761 14:57:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.019 BaseBdev2_malloc 00:08:39.019 14:57:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:39.277 true 00:08:39.277 14:57:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.535 [2024-07-12 14:57:05.181389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.535 [2024-07-12 14:57:05.181468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.535 [2024-07-12 14:57:05.181493] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe347e834c80 00:08:39.535 [2024-07-12 14:57:05.181502] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.535 [2024-07-12 14:57:05.182178] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.535 [2024-07-12 14:57:05.182203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:08:39.535 BaseBdev2 00:08:39.535 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:39.794 [2024-07-12 14:57:05.433431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.794 [2024-07-12 14:57:05.434024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.794 [2024-07-12 14:57:05.434091] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe347e834f00 00:08:39.794 [2024-07-12 14:57:05.434097] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:39.794 [2024-07-12 14:57:05.434131] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe347e8a0e20 00:08:39.794 [2024-07-12 14:57:05.434208] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe347e834f00 00:08:39.794 [2024-07-12 14:57:05.434213] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe347e834f00 00:08:39.794 [2024-07-12 14:57:05.434240] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.794 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.052 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.052 "name": "raid_bdev1", 00:08:40.052 "uuid": "00fb98fa-405f-11ef-b2a4-e9dca065e82e", 00:08:40.052 "strip_size_kb": 64, 00:08:40.052 "state": "online", 00:08:40.052 "raid_level": "concat", 00:08:40.052 "superblock": true, 00:08:40.052 "num_base_bdevs": 2, 00:08:40.052 "num_base_bdevs_discovered": 2, 00:08:40.052 "num_base_bdevs_operational": 2, 00:08:40.052 "base_bdevs_list": [ 00:08:40.052 { 00:08:40.052 "name": "BaseBdev1", 00:08:40.052 "uuid": "e0b840ff-91e8-895d-8be8-4819b34ca867", 00:08:40.052 "is_configured": true, 00:08:40.052 "data_offset": 2048, 00:08:40.052 "data_size": 63488 00:08:40.052 }, 00:08:40.052 { 00:08:40.052 "name": "BaseBdev2", 00:08:40.052 "uuid": 
"bc08d12e-b49b-3754-bada-30898c2a16bf", 00:08:40.052 "is_configured": true, 00:08:40.052 "data_offset": 2048, 00:08:40.052 "data_size": 63488 00:08:40.052 } 00:08:40.052 ] 00:08:40.052 }' 00:08:40.052 14:57:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.052 14:57:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.311 14:57:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:40.311 14:57:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:40.570 [2024-07-12 14:57:06.137678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe347e8a0ec0 00:08:41.504 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.763 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.021 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:42.021 "name": "raid_bdev1", 00:08:42.021 "uuid": "00fb98fa-405f-11ef-b2a4-e9dca065e82e", 00:08:42.021 "strip_size_kb": 64, 00:08:42.021 "state": "online", 00:08:42.021 "raid_level": "concat", 00:08:42.021 "superblock": true, 00:08:42.021 "num_base_bdevs": 2, 00:08:42.021 "num_base_bdevs_discovered": 2, 00:08:42.021 "num_base_bdevs_operational": 2, 00:08:42.021 "base_bdevs_list": [ 00:08:42.021 { 00:08:42.021 "name": "BaseBdev1", 00:08:42.021 "uuid": "e0b840ff-91e8-895d-8be8-4819b34ca867", 00:08:42.021 "is_configured": true, 00:08:42.021 "data_offset": 2048, 00:08:42.021 "data_size": 63488 00:08:42.021 }, 00:08:42.021 { 00:08:42.021 "name": "BaseBdev2", 00:08:42.021 "uuid": 
"bc08d12e-b49b-3754-bada-30898c2a16bf", 00:08:42.021 "is_configured": true, 00:08:42.021 "data_offset": 2048, 00:08:42.021 "data_size": 63488 00:08:42.021 } 00:08:42.021 ] 00:08:42.021 }' 00:08:42.021 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:42.021 14:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.279 14:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:42.537 [2024-07-12 14:57:08.162887] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.537 [2024-07-12 14:57:08.162920] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.537 [2024-07-12 14:57:08.163278] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.537 [2024-07-12 14:57:08.163287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.537 [2024-07-12 14:57:08.163294] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.537 [2024-07-12 14:57:08.163298] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe347e834f00 name raid_bdev1, state offline 00:08:42.537 0 00:08:42.537 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50645 00:08:42.537 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50645 ']' 00:08:42.537 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50645 00:08:42.537 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:42.537 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50645 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:42.538 killing process with pid 50645 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50645' 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50645 00:08:42.538 [2024-07-12 14:57:08.193855] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.538 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50645 00:08:42.538 [2024-07-12 14:57:08.205151] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.E9nRt25E4E 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:08:42.796 00:08:42.796 real 0m5.871s 00:08:42.796 user 0m9.005s 00:08:42.796 sys 0m1.048s 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.796 ************************************ 00:08:42.796 END TEST raid_write_error_test 00:08:42.796 14:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.796 ************************************ 00:08:42.796 14:57:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:42.796 14:57:08 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:42.796 14:57:08 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:42.796 14:57:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:42.796 14:57:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.796 14:57:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.796 ************************************ 00:08:42.796 START TEST raid_state_function_test 00:08:42.796 ************************************ 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:42.796 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 
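Both error tests above and the state-function test starting here lean on the same verify_raid_bdev_state helper: it fetches the array's current view over RPC, filters it with jq, and compares state, level, strip size and base-bdev counts against the expected values. A minimal sketch of just the query side, using the exact RPC call and jq filter that recur throughout this log (the comparisons shown are simplified):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  name=raid_bdev1   # Existed_Raid in the state-function test below
  raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r ".[] | select(.name == \"$name\")")
  # e.g. assert the array is online with both members discovered
  [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") -eq 2 ]]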
00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50767 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50767' 00:08:42.797 Process raid pid: 50767 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50767 /var/tmp/spdk-raid.sock 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50767 ']' 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.797 14:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.797 [2024-07-12 14:57:08.447253] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:42.797 [2024-07-12 14:57:08.447494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:43.363 EAL: TSC is not safe to use in SMP mode 00:08:43.363 EAL: TSC is not invariant 00:08:43.364 [2024-07-12 14:57:08.984477] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.364 [2024-07-12 14:57:09.073143] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
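The state-function test below walks the configuring-to-online transition: Existed_Raid is created while its members are still missing, so it sits in "configuring", and each bdev_malloc_create that follows lets the raid claim another base bdev until it can assemble and go online. A condensed sketch of that RPC sequence (the actual script also deletes and re-creates Existed_Raid between steps, as the log shows):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # declare the array first: no members exist yet, state stays "configuring"
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # first member appears and is claimed: still configuring, 1 of 2 discovered
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  # second member appears: both base bdevs claimed, Existed_Raid goes online
  $rpc bdev_malloc_create 32 512 -b BaseBdev2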
00:08:43.364 [2024-07-12 14:57:09.075216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.364 [2024-07-12 14:57:09.075972] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.364 [2024-07-12 14:57:09.075985] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.975 14:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.975 14:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:43.975 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:44.233 [2024-07-12 14:57:09.816278] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.233 [2024-07-12 14:57:09.816334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.233 [2024-07-12 14:57:09.816339] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.233 [2024-07-12 14:57:09.816349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.233 14:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.492 14:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.492 "name": "Existed_Raid", 00:08:44.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.492 "strip_size_kb": 0, 00:08:44.492 "state": "configuring", 00:08:44.492 "raid_level": "raid1", 00:08:44.492 "superblock": false, 00:08:44.492 "num_base_bdevs": 2, 00:08:44.492 "num_base_bdevs_discovered": 0, 00:08:44.492 "num_base_bdevs_operational": 2, 00:08:44.492 "base_bdevs_list": [ 00:08:44.492 { 00:08:44.492 "name": "BaseBdev1", 00:08:44.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.492 "is_configured": false, 00:08:44.492 "data_offset": 0, 00:08:44.492 "data_size": 0 00:08:44.492 }, 00:08:44.492 { 00:08:44.492 "name": "BaseBdev2", 00:08:44.492 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.492 "is_configured": false, 00:08:44.492 "data_offset": 0, 00:08:44.492 "data_size": 0 00:08:44.492 } 00:08:44.492 ] 00:08:44.492 }' 00:08:44.492 14:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.492 14:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.751 14:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:45.011 [2024-07-12 14:57:10.708340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.011 [2024-07-12 14:57:10.708366] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1691d2e34500 name Existed_Raid, state configuring 00:08:45.011 14:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:45.269 [2024-07-12 14:57:10.996374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.269 [2024-07-12 14:57:10.996423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.269 [2024-07-12 14:57:10.996429] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.269 [2024-07-12 14:57:10.996437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.269 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.528 [2024-07-12 14:57:11.297418] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.528 BaseBdev1 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:45.528 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:45.786 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.045 [ 00:08:46.045 { 00:08:46.045 "name": "BaseBdev1", 00:08:46.045 "aliases": [ 00:08:46.045 "047a38a5-405f-11ef-b2a4-e9dca065e82e" 00:08:46.045 ], 00:08:46.045 "product_name": "Malloc disk", 00:08:46.045 "block_size": 512, 00:08:46.045 "num_blocks": 65536, 00:08:46.045 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:46.045 "assigned_rate_limits": { 00:08:46.045 "rw_ios_per_sec": 0, 00:08:46.045 "rw_mbytes_per_sec": 0, 00:08:46.045 "r_mbytes_per_sec": 0, 00:08:46.045 "w_mbytes_per_sec": 0 00:08:46.045 }, 00:08:46.045 
"claimed": true, 00:08:46.045 "claim_type": "exclusive_write", 00:08:46.045 "zoned": false, 00:08:46.045 "supported_io_types": { 00:08:46.045 "read": true, 00:08:46.045 "write": true, 00:08:46.046 "unmap": true, 00:08:46.046 "flush": true, 00:08:46.046 "reset": true, 00:08:46.046 "nvme_admin": false, 00:08:46.046 "nvme_io": false, 00:08:46.046 "nvme_io_md": false, 00:08:46.046 "write_zeroes": true, 00:08:46.046 "zcopy": true, 00:08:46.046 "get_zone_info": false, 00:08:46.046 "zone_management": false, 00:08:46.046 "zone_append": false, 00:08:46.046 "compare": false, 00:08:46.046 "compare_and_write": false, 00:08:46.046 "abort": true, 00:08:46.046 "seek_hole": false, 00:08:46.046 "seek_data": false, 00:08:46.046 "copy": true, 00:08:46.046 "nvme_iov_md": false 00:08:46.046 }, 00:08:46.046 "memory_domains": [ 00:08:46.046 { 00:08:46.046 "dma_device_id": "system", 00:08:46.046 "dma_device_type": 1 00:08:46.046 }, 00:08:46.046 { 00:08:46.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.046 "dma_device_type": 2 00:08:46.046 } 00:08:46.046 ], 00:08:46.046 "driver_specific": {} 00:08:46.046 } 00:08:46.046 ] 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.046 14:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.305 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.305 "name": "Existed_Raid", 00:08:46.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.305 "strip_size_kb": 0, 00:08:46.305 "state": "configuring", 00:08:46.305 "raid_level": "raid1", 00:08:46.305 "superblock": false, 00:08:46.305 "num_base_bdevs": 2, 00:08:46.305 "num_base_bdevs_discovered": 1, 00:08:46.305 "num_base_bdevs_operational": 2, 00:08:46.305 "base_bdevs_list": [ 00:08:46.305 { 00:08:46.305 "name": "BaseBdev1", 00:08:46.305 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:46.305 "is_configured": true, 00:08:46.305 "data_offset": 0, 00:08:46.305 "data_size": 65536 00:08:46.305 }, 00:08:46.305 { 00:08:46.305 "name": "BaseBdev2", 00:08:46.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.305 
"is_configured": false, 00:08:46.305 "data_offset": 0, 00:08:46.305 "data_size": 0 00:08:46.305 } 00:08:46.305 ] 00:08:46.305 }' 00:08:46.305 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.305 14:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.572 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:46.831 [2024-07-12 14:57:12.628523] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.831 [2024-07-12 14:57:12.628556] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1691d2e34500 name Existed_Raid, state configuring 00:08:46.831 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:47.397 [2024-07-12 14:57:12.968564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.397 [2024-07-12 14:57:12.969350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.397 [2024-07-12 14:57:12.969392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:47.397 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:47.398 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:47.398 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.398 14:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.654 14:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:47.655 "name": "Existed_Raid", 00:08:47.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.655 "strip_size_kb": 0, 00:08:47.655 "state": "configuring", 00:08:47.655 "raid_level": "raid1", 00:08:47.655 "superblock": false, 00:08:47.655 "num_base_bdevs": 2, 00:08:47.655 "num_base_bdevs_discovered": 1, 00:08:47.655 "num_base_bdevs_operational": 
2, 00:08:47.655 "base_bdevs_list": [ 00:08:47.655 { 00:08:47.655 "name": "BaseBdev1", 00:08:47.655 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:47.655 "is_configured": true, 00:08:47.655 "data_offset": 0, 00:08:47.655 "data_size": 65536 00:08:47.655 }, 00:08:47.655 { 00:08:47.655 "name": "BaseBdev2", 00:08:47.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.655 "is_configured": false, 00:08:47.655 "data_offset": 0, 00:08:47.655 "data_size": 0 00:08:47.655 } 00:08:47.655 ] 00:08:47.655 }' 00:08:47.655 14:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:47.655 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.913 14:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.172 [2024-07-12 14:57:13.808773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.172 [2024-07-12 14:57:13.808802] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1691d2e34a00 00:08:48.172 [2024-07-12 14:57:13.808806] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:48.172 [2024-07-12 14:57:13.808836] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1691d2e97e20 00:08:48.172 [2024-07-12 14:57:13.808929] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1691d2e34a00 00:08:48.172 [2024-07-12 14:57:13.808934] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1691d2e34a00 00:08:48.172 [2024-07-12 14:57:13.808971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.172 BaseBdev2 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:48.172 14:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:48.430 14:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.689 [ 00:08:48.689 { 00:08:48.689 "name": "BaseBdev2", 00:08:48.689 "aliases": [ 00:08:48.689 "05f98e40-405f-11ef-b2a4-e9dca065e82e" 00:08:48.689 ], 00:08:48.689 "product_name": "Malloc disk", 00:08:48.689 "block_size": 512, 00:08:48.689 "num_blocks": 65536, 00:08:48.689 "uuid": "05f98e40-405f-11ef-b2a4-e9dca065e82e", 00:08:48.689 "assigned_rate_limits": { 00:08:48.689 "rw_ios_per_sec": 0, 00:08:48.689 "rw_mbytes_per_sec": 0, 00:08:48.689 "r_mbytes_per_sec": 0, 00:08:48.689 "w_mbytes_per_sec": 0 00:08:48.689 }, 00:08:48.689 "claimed": true, 00:08:48.689 "claim_type": "exclusive_write", 00:08:48.689 "zoned": false, 00:08:48.689 
"supported_io_types": { 00:08:48.689 "read": true, 00:08:48.689 "write": true, 00:08:48.689 "unmap": true, 00:08:48.689 "flush": true, 00:08:48.689 "reset": true, 00:08:48.689 "nvme_admin": false, 00:08:48.689 "nvme_io": false, 00:08:48.689 "nvme_io_md": false, 00:08:48.689 "write_zeroes": true, 00:08:48.690 "zcopy": true, 00:08:48.690 "get_zone_info": false, 00:08:48.690 "zone_management": false, 00:08:48.690 "zone_append": false, 00:08:48.690 "compare": false, 00:08:48.690 "compare_and_write": false, 00:08:48.690 "abort": true, 00:08:48.690 "seek_hole": false, 00:08:48.690 "seek_data": false, 00:08:48.690 "copy": true, 00:08:48.690 "nvme_iov_md": false 00:08:48.690 }, 00:08:48.690 "memory_domains": [ 00:08:48.690 { 00:08:48.690 "dma_device_id": "system", 00:08:48.690 "dma_device_type": 1 00:08:48.690 }, 00:08:48.690 { 00:08:48.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.690 "dma_device_type": 2 00:08:48.690 } 00:08:48.690 ], 00:08:48.690 "driver_specific": {} 00:08:48.690 } 00:08:48.690 ] 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.690 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.948 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:48.948 "name": "Existed_Raid", 00:08:48.948 "uuid": "05f994b9-405f-11ef-b2a4-e9dca065e82e", 00:08:48.948 "strip_size_kb": 0, 00:08:48.948 "state": "online", 00:08:48.948 "raid_level": "raid1", 00:08:48.948 "superblock": false, 00:08:48.948 "num_base_bdevs": 2, 00:08:48.948 "num_base_bdevs_discovered": 2, 00:08:48.948 "num_base_bdevs_operational": 2, 00:08:48.948 "base_bdevs_list": [ 00:08:48.948 { 00:08:48.948 "name": "BaseBdev1", 00:08:48.948 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:48.948 "is_configured": true, 00:08:48.948 "data_offset": 0, 00:08:48.948 "data_size": 65536 00:08:48.948 }, 00:08:48.948 { 00:08:48.948 "name": 
"BaseBdev2", 00:08:48.948 "uuid": "05f98e40-405f-11ef-b2a4-e9dca065e82e", 00:08:48.948 "is_configured": true, 00:08:48.948 "data_offset": 0, 00:08:48.948 "data_size": 65536 00:08:48.948 } 00:08:48.948 ] 00:08:48.948 }' 00:08:48.948 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:48.948 14:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:49.206 14:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:49.465 [2024-07-12 14:57:15.164799] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:49.465 "name": "Existed_Raid", 00:08:49.465 "aliases": [ 00:08:49.465 "05f994b9-405f-11ef-b2a4-e9dca065e82e" 00:08:49.465 ], 00:08:49.465 "product_name": "Raid Volume", 00:08:49.465 "block_size": 512, 00:08:49.465 "num_blocks": 65536, 00:08:49.465 "uuid": "05f994b9-405f-11ef-b2a4-e9dca065e82e", 00:08:49.465 "assigned_rate_limits": { 00:08:49.465 "rw_ios_per_sec": 0, 00:08:49.465 "rw_mbytes_per_sec": 0, 00:08:49.465 "r_mbytes_per_sec": 0, 00:08:49.465 "w_mbytes_per_sec": 0 00:08:49.465 }, 00:08:49.465 "claimed": false, 00:08:49.465 "zoned": false, 00:08:49.465 "supported_io_types": { 00:08:49.465 "read": true, 00:08:49.465 "write": true, 00:08:49.465 "unmap": false, 00:08:49.465 "flush": false, 00:08:49.465 "reset": true, 00:08:49.465 "nvme_admin": false, 00:08:49.465 "nvme_io": false, 00:08:49.465 "nvme_io_md": false, 00:08:49.465 "write_zeroes": true, 00:08:49.465 "zcopy": false, 00:08:49.465 "get_zone_info": false, 00:08:49.465 "zone_management": false, 00:08:49.465 "zone_append": false, 00:08:49.465 "compare": false, 00:08:49.465 "compare_and_write": false, 00:08:49.465 "abort": false, 00:08:49.465 "seek_hole": false, 00:08:49.465 "seek_data": false, 00:08:49.465 "copy": false, 00:08:49.465 "nvme_iov_md": false 00:08:49.465 }, 00:08:49.465 "memory_domains": [ 00:08:49.465 { 00:08:49.465 "dma_device_id": "system", 00:08:49.465 "dma_device_type": 1 00:08:49.465 }, 00:08:49.465 { 00:08:49.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.465 "dma_device_type": 2 00:08:49.465 }, 00:08:49.465 { 00:08:49.465 "dma_device_id": "system", 00:08:49.465 "dma_device_type": 1 00:08:49.465 }, 00:08:49.465 { 00:08:49.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.465 "dma_device_type": 2 00:08:49.465 } 00:08:49.465 ], 00:08:49.465 "driver_specific": { 00:08:49.465 "raid": { 00:08:49.465 "uuid": "05f994b9-405f-11ef-b2a4-e9dca065e82e", 00:08:49.465 "strip_size_kb": 0, 00:08:49.465 "state": "online", 00:08:49.465 
"raid_level": "raid1", 00:08:49.465 "superblock": false, 00:08:49.465 "num_base_bdevs": 2, 00:08:49.465 "num_base_bdevs_discovered": 2, 00:08:49.465 "num_base_bdevs_operational": 2, 00:08:49.465 "base_bdevs_list": [ 00:08:49.465 { 00:08:49.465 "name": "BaseBdev1", 00:08:49.465 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:49.465 "is_configured": true, 00:08:49.465 "data_offset": 0, 00:08:49.465 "data_size": 65536 00:08:49.465 }, 00:08:49.465 { 00:08:49.465 "name": "BaseBdev2", 00:08:49.465 "uuid": "05f98e40-405f-11ef-b2a4-e9dca065e82e", 00:08:49.465 "is_configured": true, 00:08:49.465 "data_offset": 0, 00:08:49.465 "data_size": 65536 00:08:49.465 } 00:08:49.465 ] 00:08:49.465 } 00:08:49.465 } 00:08:49.465 }' 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:49.465 BaseBdev2' 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:49.465 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:49.723 "name": "BaseBdev1", 00:08:49.723 "aliases": [ 00:08:49.723 "047a38a5-405f-11ef-b2a4-e9dca065e82e" 00:08:49.723 ], 00:08:49.723 "product_name": "Malloc disk", 00:08:49.723 "block_size": 512, 00:08:49.723 "num_blocks": 65536, 00:08:49.723 "uuid": "047a38a5-405f-11ef-b2a4-e9dca065e82e", 00:08:49.723 "assigned_rate_limits": { 00:08:49.723 "rw_ios_per_sec": 0, 00:08:49.723 "rw_mbytes_per_sec": 0, 00:08:49.723 "r_mbytes_per_sec": 0, 00:08:49.723 "w_mbytes_per_sec": 0 00:08:49.723 }, 00:08:49.723 "claimed": true, 00:08:49.723 "claim_type": "exclusive_write", 00:08:49.723 "zoned": false, 00:08:49.723 "supported_io_types": { 00:08:49.723 "read": true, 00:08:49.723 "write": true, 00:08:49.723 "unmap": true, 00:08:49.723 "flush": true, 00:08:49.723 "reset": true, 00:08:49.723 "nvme_admin": false, 00:08:49.723 "nvme_io": false, 00:08:49.723 "nvme_io_md": false, 00:08:49.723 "write_zeroes": true, 00:08:49.723 "zcopy": true, 00:08:49.723 "get_zone_info": false, 00:08:49.723 "zone_management": false, 00:08:49.723 "zone_append": false, 00:08:49.723 "compare": false, 00:08:49.723 "compare_and_write": false, 00:08:49.723 "abort": true, 00:08:49.723 "seek_hole": false, 00:08:49.723 "seek_data": false, 00:08:49.723 "copy": true, 00:08:49.723 "nvme_iov_md": false 00:08:49.723 }, 00:08:49.723 "memory_domains": [ 00:08:49.723 { 00:08:49.723 "dma_device_id": "system", 00:08:49.723 "dma_device_type": 1 00:08:49.723 }, 00:08:49.723 { 00:08:49.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.723 "dma_device_type": 2 00:08:49.723 } 00:08:49.723 ], 00:08:49.723 "driver_specific": {} 00:08:49.723 }' 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:49.723 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:49.981 "name": "BaseBdev2", 00:08:49.981 "aliases": [ 00:08:49.981 "05f98e40-405f-11ef-b2a4-e9dca065e82e" 00:08:49.981 ], 00:08:49.981 "product_name": "Malloc disk", 00:08:49.981 "block_size": 512, 00:08:49.981 "num_blocks": 65536, 00:08:49.981 "uuid": "05f98e40-405f-11ef-b2a4-e9dca065e82e", 00:08:49.981 "assigned_rate_limits": { 00:08:49.981 "rw_ios_per_sec": 0, 00:08:49.981 "rw_mbytes_per_sec": 0, 00:08:49.981 "r_mbytes_per_sec": 0, 00:08:49.981 "w_mbytes_per_sec": 0 00:08:49.981 }, 00:08:49.981 "claimed": true, 00:08:49.981 "claim_type": "exclusive_write", 00:08:49.981 "zoned": false, 00:08:49.981 "supported_io_types": { 00:08:49.981 "read": true, 00:08:49.981 "write": true, 00:08:49.981 "unmap": true, 00:08:49.981 "flush": true, 00:08:49.981 "reset": true, 00:08:49.981 "nvme_admin": false, 00:08:49.981 "nvme_io": false, 00:08:49.981 "nvme_io_md": false, 00:08:49.981 "write_zeroes": true, 00:08:49.981 "zcopy": true, 00:08:49.981 "get_zone_info": false, 00:08:49.981 "zone_management": false, 00:08:49.981 "zone_append": false, 00:08:49.981 "compare": false, 00:08:49.981 "compare_and_write": false, 00:08:49.981 "abort": true, 00:08:49.981 "seek_hole": false, 00:08:49.981 "seek_data": false, 00:08:49.981 "copy": true, 00:08:49.981 "nvme_iov_md": false 00:08:49.981 }, 00:08:49.981 "memory_domains": [ 00:08:49.981 { 00:08:49.981 "dma_device_id": "system", 00:08:49.981 "dma_device_type": 1 00:08:49.981 }, 00:08:49.981 { 00:08:49.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.981 "dma_device_type": 2 00:08:49.981 } 00:08:49.981 ], 00:08:49.981 "driver_specific": {} 00:08:49.981 }' 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:49.981 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:50.240 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:50.240 14:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:50.240 [2024-07-12 14:57:16.060859] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:50.499 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.500 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.759 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:50.759 "name": "Existed_Raid", 00:08:50.759 "uuid": "05f994b9-405f-11ef-b2a4-e9dca065e82e", 00:08:50.759 "strip_size_kb": 0, 00:08:50.759 "state": "online", 00:08:50.759 "raid_level": "raid1", 00:08:50.759 "superblock": false, 00:08:50.759 "num_base_bdevs": 2, 00:08:50.759 "num_base_bdevs_discovered": 1, 00:08:50.759 "num_base_bdevs_operational": 1, 00:08:50.759 "base_bdevs_list": [ 00:08:50.759 { 00:08:50.759 "name": null, 00:08:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.759 "is_configured": false, 
00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 65536 00:08:50.759 }, 00:08:50.759 { 00:08:50.759 "name": "BaseBdev2", 00:08:50.759 "uuid": "05f98e40-405f-11ef-b2a4-e9dca065e82e", 00:08:50.759 "is_configured": true, 00:08:50.759 "data_offset": 0, 00:08:50.759 "data_size": 65536 00:08:50.759 } 00:08:50.759 ] 00:08:50.759 }' 00:08:50.759 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:50.759 14:57:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.016 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:51.016 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:51.016 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.016 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:51.274 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:51.274 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.274 14:57:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:51.531 [2024-07-12 14:57:17.254740] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.531 [2024-07-12 14:57:17.254782] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.531 [2024-07-12 14:57:17.260575] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.531 [2024-07-12 14:57:17.260597] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.531 [2024-07-12 14:57:17.260602] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1691d2e34a00 name Existed_Raid, state offline 00:08:51.531 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:51.531 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:51.531 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.531 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50767 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50767 ']' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50767 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 50767 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:51.790 killing process with pid 50767 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50767' 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50767 00:08:51.790 [2024-07-12 14:57:17.567336] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.790 [2024-07-12 14:57:17.567369] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.790 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50767 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:52.049 00:08:52.049 real 0m9.304s 00:08:52.049 user 0m16.323s 00:08:52.049 sys 0m1.533s 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.049 ************************************ 00:08:52.049 END TEST raid_state_function_test 00:08:52.049 ************************************ 00:08:52.049 14:57:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:52.049 14:57:17 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:52.049 14:57:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:52.049 14:57:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.049 14:57:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.049 ************************************ 00:08:52.049 START TEST raid_state_function_test_sb 00:08:52.049 ************************************ 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51042 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51042' 00:08:52.049 Process raid pid: 51042 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51042 /var/tmp/spdk-raid.sock 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51042 ']' 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.049 14:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.049 [2024-07-12 14:57:17.791868] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:08:52.049 [2024-07-12 14:57:17.792014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:52.655 EAL: TSC is not safe to use in SMP mode 00:08:52.655 EAL: TSC is not invariant 00:08:52.655 [2024-07-12 14:57:18.312717] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.655 [2024-07-12 14:57:18.396213] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
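A minimal happy-path sketch of the RPC sequence raid_state_function_test_sb drives next, assuming bdev_svc (pid 51042) is already listening on /var/tmp/spdk-raid.sock; every command, flag, and bdev name below is taken from the trace itself, only the ordering is condensed to the final online state (the test additionally exercises the "configuring" state by creating the array before its base bdevs exist):

    SOCK=/var/tmp/spdk-raid.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Two 32 MiB malloc base bdevs with 512-byte blocks, as the test creates later in the trace.
    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev1
    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev2
    # Assemble a RAID1 volume with on-disk superblocks (-s), matching bdev_raid_create in the trace.
    "$RPC" -s "$SOCK" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # The check verify_raid_bdev_state performs with jq; with both members configured this prints "online".
    "$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'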
00:08:52.655 [2024-07-12 14:57:18.398328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.655 [2024-07-12 14:57:18.399101] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.655 [2024-07-12 14:57:18.399116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.219 14:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.219 14:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:53.219 14:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:53.477 [2024-07-12 14:57:19.070732] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.477 [2024-07-12 14:57:19.070783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.477 [2024-07-12 14:57:19.070789] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.477 [2024-07-12 14:57:19.070797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.477 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.736 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.736 "name": "Existed_Raid", 00:08:53.736 "uuid": "091c7c47-405f-11ef-b2a4-e9dca065e82e", 00:08:53.736 "strip_size_kb": 0, 00:08:53.736 "state": "configuring", 00:08:53.736 "raid_level": "raid1", 00:08:53.736 "superblock": true, 00:08:53.736 "num_base_bdevs": 2, 00:08:53.736 "num_base_bdevs_discovered": 0, 00:08:53.736 "num_base_bdevs_operational": 2, 00:08:53.736 "base_bdevs_list": [ 00:08:53.736 { 00:08:53.736 "name": "BaseBdev1", 00:08:53.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.736 "is_configured": false, 00:08:53.736 "data_offset": 0, 00:08:53.736 "data_size": 0 00:08:53.736 }, 00:08:53.736 
{ 00:08:53.736 "name": "BaseBdev2", 00:08:53.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.736 "is_configured": false, 00:08:53.736 "data_offset": 0, 00:08:53.736 "data_size": 0 00:08:53.736 } 00:08:53.736 ] 00:08:53.736 }' 00:08:53.736 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.736 14:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.993 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:54.251 [2024-07-12 14:57:19.854775] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.251 [2024-07-12 14:57:19.854801] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125e89034500 name Existed_Raid, state configuring 00:08:54.251 14:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:54.510 [2024-07-12 14:57:20.090803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.510 [2024-07-12 14:57:20.090850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.510 [2024-07-12 14:57:20.090855] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.510 [2024-07-12 14:57:20.090863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.510 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.510 [2024-07-12 14:57:20.327846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.510 BaseBdev1 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:54.768 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:55.028 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.288 [ 00:08:55.288 { 00:08:55.288 "name": "BaseBdev1", 00:08:55.288 "aliases": [ 00:08:55.288 "09dc26ef-405f-11ef-b2a4-e9dca065e82e" 00:08:55.288 ], 00:08:55.288 "product_name": "Malloc disk", 00:08:55.288 "block_size": 512, 00:08:55.288 "num_blocks": 65536, 00:08:55.288 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:55.288 "assigned_rate_limits": { 00:08:55.288 "rw_ios_per_sec": 0, 00:08:55.288 "rw_mbytes_per_sec": 0, 00:08:55.288 
"r_mbytes_per_sec": 0, 00:08:55.288 "w_mbytes_per_sec": 0 00:08:55.288 }, 00:08:55.288 "claimed": true, 00:08:55.288 "claim_type": "exclusive_write", 00:08:55.288 "zoned": false, 00:08:55.288 "supported_io_types": { 00:08:55.288 "read": true, 00:08:55.288 "write": true, 00:08:55.288 "unmap": true, 00:08:55.288 "flush": true, 00:08:55.288 "reset": true, 00:08:55.288 "nvme_admin": false, 00:08:55.288 "nvme_io": false, 00:08:55.288 "nvme_io_md": false, 00:08:55.288 "write_zeroes": true, 00:08:55.288 "zcopy": true, 00:08:55.288 "get_zone_info": false, 00:08:55.288 "zone_management": false, 00:08:55.288 "zone_append": false, 00:08:55.288 "compare": false, 00:08:55.288 "compare_and_write": false, 00:08:55.288 "abort": true, 00:08:55.288 "seek_hole": false, 00:08:55.288 "seek_data": false, 00:08:55.288 "copy": true, 00:08:55.288 "nvme_iov_md": false 00:08:55.288 }, 00:08:55.288 "memory_domains": [ 00:08:55.288 { 00:08:55.288 "dma_device_id": "system", 00:08:55.288 "dma_device_type": 1 00:08:55.288 }, 00:08:55.288 { 00:08:55.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.288 "dma_device_type": 2 00:08:55.288 } 00:08:55.288 ], 00:08:55.288 "driver_specific": {} 00:08:55.288 } 00:08:55.288 ] 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.288 14:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.547 14:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:55.547 "name": "Existed_Raid", 00:08:55.547 "uuid": "09b822d9-405f-11ef-b2a4-e9dca065e82e", 00:08:55.547 "strip_size_kb": 0, 00:08:55.547 "state": "configuring", 00:08:55.547 "raid_level": "raid1", 00:08:55.547 "superblock": true, 00:08:55.547 "num_base_bdevs": 2, 00:08:55.547 "num_base_bdevs_discovered": 1, 00:08:55.547 "num_base_bdevs_operational": 2, 00:08:55.547 "base_bdevs_list": [ 00:08:55.547 { 00:08:55.547 "name": "BaseBdev1", 00:08:55.547 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:55.547 "is_configured": true, 00:08:55.547 "data_offset": 2048, 00:08:55.547 "data_size": 63488 00:08:55.547 }, 
00:08:55.547 { 00:08:55.547 "name": "BaseBdev2", 00:08:55.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.547 "is_configured": false, 00:08:55.547 "data_offset": 0, 00:08:55.547 "data_size": 0 00:08:55.547 } 00:08:55.547 ] 00:08:55.547 }' 00:08:55.547 14:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:55.547 14:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.807 14:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:56.065 [2024-07-12 14:57:21.786946] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.065 [2024-07-12 14:57:21.786982] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125e89034500 name Existed_Raid, state configuring 00:08:56.065 14:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:56.323 [2024-07-12 14:57:22.070988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.323 [2024-07-12 14:57:22.071812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.323 [2024-07-12 14:57:22.071850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.323 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.581 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:56.581 "name": "Existed_Raid", 00:08:56.581 "uuid": "0ae6499d-405f-11ef-b2a4-e9dca065e82e", 00:08:56.581 "strip_size_kb": 0, 00:08:56.581 "state": "configuring", 
00:08:56.581 "raid_level": "raid1", 00:08:56.581 "superblock": true, 00:08:56.581 "num_base_bdevs": 2, 00:08:56.581 "num_base_bdevs_discovered": 1, 00:08:56.581 "num_base_bdevs_operational": 2, 00:08:56.581 "base_bdevs_list": [ 00:08:56.581 { 00:08:56.581 "name": "BaseBdev1", 00:08:56.581 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:56.581 "is_configured": true, 00:08:56.581 "data_offset": 2048, 00:08:56.581 "data_size": 63488 00:08:56.581 }, 00:08:56.581 { 00:08:56.581 "name": "BaseBdev2", 00:08:56.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.581 "is_configured": false, 00:08:56.581 "data_offset": 0, 00:08:56.581 "data_size": 0 00:08:56.581 } 00:08:56.581 ] 00:08:56.581 }' 00:08:56.582 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:56.582 14:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.148 14:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.406 [2024-07-12 14:57:23.007197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.406 [2024-07-12 14:57:23.007266] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x125e89034a00 00:08:57.406 [2024-07-12 14:57:23.007272] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.406 [2024-07-12 14:57:23.007294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x125e89097e20 00:08:57.406 [2024-07-12 14:57:23.007361] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x125e89034a00 00:08:57.406 [2024-07-12 14:57:23.007366] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x125e89034a00 00:08:57.406 [2024-07-12 14:57:23.007386] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.406 BaseBdev2 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:57.406 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:57.664 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.924 [ 00:08:57.924 { 00:08:57.924 "name": "BaseBdev2", 00:08:57.924 "aliases": [ 00:08:57.924 "0b751f67-405f-11ef-b2a4-e9dca065e82e" 00:08:57.924 ], 00:08:57.924 "product_name": "Malloc disk", 00:08:57.924 "block_size": 512, 00:08:57.924 "num_blocks": 65536, 00:08:57.924 "uuid": "0b751f67-405f-11ef-b2a4-e9dca065e82e", 00:08:57.924 "assigned_rate_limits": { 00:08:57.924 "rw_ios_per_sec": 0, 00:08:57.924 
"rw_mbytes_per_sec": 0, 00:08:57.924 "r_mbytes_per_sec": 0, 00:08:57.924 "w_mbytes_per_sec": 0 00:08:57.924 }, 00:08:57.924 "claimed": true, 00:08:57.924 "claim_type": "exclusive_write", 00:08:57.924 "zoned": false, 00:08:57.924 "supported_io_types": { 00:08:57.924 "read": true, 00:08:57.924 "write": true, 00:08:57.924 "unmap": true, 00:08:57.924 "flush": true, 00:08:57.924 "reset": true, 00:08:57.924 "nvme_admin": false, 00:08:57.924 "nvme_io": false, 00:08:57.924 "nvme_io_md": false, 00:08:57.924 "write_zeroes": true, 00:08:57.924 "zcopy": true, 00:08:57.924 "get_zone_info": false, 00:08:57.924 "zone_management": false, 00:08:57.924 "zone_append": false, 00:08:57.924 "compare": false, 00:08:57.924 "compare_and_write": false, 00:08:57.924 "abort": true, 00:08:57.924 "seek_hole": false, 00:08:57.924 "seek_data": false, 00:08:57.924 "copy": true, 00:08:57.924 "nvme_iov_md": false 00:08:57.924 }, 00:08:57.924 "memory_domains": [ 00:08:57.924 { 00:08:57.924 "dma_device_id": "system", 00:08:57.924 "dma_device_type": 1 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.924 "dma_device_type": 2 00:08:57.924 } 00:08:57.924 ], 00:08:57.924 "driver_specific": {} 00:08:57.924 } 00:08:57.924 ] 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:57.924 "name": "Existed_Raid", 00:08:57.924 "uuid": "0ae6499d-405f-11ef-b2a4-e9dca065e82e", 00:08:57.924 "strip_size_kb": 0, 00:08:57.924 "state": "online", 00:08:57.924 "raid_level": "raid1", 00:08:57.924 "superblock": true, 00:08:57.924 "num_base_bdevs": 2, 00:08:57.924 "num_base_bdevs_discovered": 2, 00:08:57.924 "num_base_bdevs_operational": 2, 00:08:57.924 
"base_bdevs_list": [ 00:08:57.924 { 00:08:57.924 "name": "BaseBdev1", 00:08:57.924 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:57.924 "is_configured": true, 00:08:57.924 "data_offset": 2048, 00:08:57.924 "data_size": 63488 00:08:57.924 }, 00:08:57.924 { 00:08:57.924 "name": "BaseBdev2", 00:08:57.924 "uuid": "0b751f67-405f-11ef-b2a4-e9dca065e82e", 00:08:57.924 "is_configured": true, 00:08:57.924 "data_offset": 2048, 00:08:57.924 "data_size": 63488 00:08:57.924 } 00:08:57.924 ] 00:08:57.924 }' 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:57.924 14:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:58.491 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:58.750 [2024-07-12 14:57:24.379534] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:58.750 "name": "Existed_Raid", 00:08:58.750 "aliases": [ 00:08:58.750 "0ae6499d-405f-11ef-b2a4-e9dca065e82e" 00:08:58.750 ], 00:08:58.750 "product_name": "Raid Volume", 00:08:58.750 "block_size": 512, 00:08:58.750 "num_blocks": 63488, 00:08:58.750 "uuid": "0ae6499d-405f-11ef-b2a4-e9dca065e82e", 00:08:58.750 "assigned_rate_limits": { 00:08:58.750 "rw_ios_per_sec": 0, 00:08:58.750 "rw_mbytes_per_sec": 0, 00:08:58.750 "r_mbytes_per_sec": 0, 00:08:58.750 "w_mbytes_per_sec": 0 00:08:58.750 }, 00:08:58.750 "claimed": false, 00:08:58.750 "zoned": false, 00:08:58.750 "supported_io_types": { 00:08:58.750 "read": true, 00:08:58.750 "write": true, 00:08:58.750 "unmap": false, 00:08:58.750 "flush": false, 00:08:58.750 "reset": true, 00:08:58.750 "nvme_admin": false, 00:08:58.750 "nvme_io": false, 00:08:58.750 "nvme_io_md": false, 00:08:58.750 "write_zeroes": true, 00:08:58.750 "zcopy": false, 00:08:58.750 "get_zone_info": false, 00:08:58.750 "zone_management": false, 00:08:58.750 "zone_append": false, 00:08:58.750 "compare": false, 00:08:58.750 "compare_and_write": false, 00:08:58.750 "abort": false, 00:08:58.750 "seek_hole": false, 00:08:58.750 "seek_data": false, 00:08:58.750 "copy": false, 00:08:58.750 "nvme_iov_md": false 00:08:58.750 }, 00:08:58.750 "memory_domains": [ 00:08:58.750 { 00:08:58.750 "dma_device_id": "system", 00:08:58.750 "dma_device_type": 1 00:08:58.750 }, 00:08:58.750 { 00:08:58.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.750 "dma_device_type": 2 00:08:58.750 }, 00:08:58.750 { 00:08:58.750 "dma_device_id": "system", 00:08:58.750 "dma_device_type": 1 00:08:58.750 }, 
00:08:58.750 { 00:08:58.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.750 "dma_device_type": 2 00:08:58.750 } 00:08:58.750 ], 00:08:58.750 "driver_specific": { 00:08:58.750 "raid": { 00:08:58.750 "uuid": "0ae6499d-405f-11ef-b2a4-e9dca065e82e", 00:08:58.750 "strip_size_kb": 0, 00:08:58.750 "state": "online", 00:08:58.750 "raid_level": "raid1", 00:08:58.750 "superblock": true, 00:08:58.750 "num_base_bdevs": 2, 00:08:58.750 "num_base_bdevs_discovered": 2, 00:08:58.750 "num_base_bdevs_operational": 2, 00:08:58.750 "base_bdevs_list": [ 00:08:58.750 { 00:08:58.750 "name": "BaseBdev1", 00:08:58.750 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:58.750 "is_configured": true, 00:08:58.750 "data_offset": 2048, 00:08:58.750 "data_size": 63488 00:08:58.750 }, 00:08:58.750 { 00:08:58.750 "name": "BaseBdev2", 00:08:58.750 "uuid": "0b751f67-405f-11ef-b2a4-e9dca065e82e", 00:08:58.750 "is_configured": true, 00:08:58.750 "data_offset": 2048, 00:08:58.750 "data_size": 63488 00:08:58.750 } 00:08:58.750 ] 00:08:58.750 } 00:08:58.750 } 00:08:58.750 }' 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:58.750 BaseBdev2' 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:58.750 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:59.050 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:59.050 "name": "BaseBdev1", 00:08:59.050 "aliases": [ 00:08:59.050 "09dc26ef-405f-11ef-b2a4-e9dca065e82e" 00:08:59.050 ], 00:08:59.050 "product_name": "Malloc disk", 00:08:59.050 "block_size": 512, 00:08:59.050 "num_blocks": 65536, 00:08:59.050 "uuid": "09dc26ef-405f-11ef-b2a4-e9dca065e82e", 00:08:59.050 "assigned_rate_limits": { 00:08:59.050 "rw_ios_per_sec": 0, 00:08:59.050 "rw_mbytes_per_sec": 0, 00:08:59.050 "r_mbytes_per_sec": 0, 00:08:59.050 "w_mbytes_per_sec": 0 00:08:59.050 }, 00:08:59.050 "claimed": true, 00:08:59.050 "claim_type": "exclusive_write", 00:08:59.050 "zoned": false, 00:08:59.050 "supported_io_types": { 00:08:59.050 "read": true, 00:08:59.050 "write": true, 00:08:59.050 "unmap": true, 00:08:59.050 "flush": true, 00:08:59.050 "reset": true, 00:08:59.050 "nvme_admin": false, 00:08:59.050 "nvme_io": false, 00:08:59.050 "nvme_io_md": false, 00:08:59.050 "write_zeroes": true, 00:08:59.051 "zcopy": true, 00:08:59.051 "get_zone_info": false, 00:08:59.051 "zone_management": false, 00:08:59.051 "zone_append": false, 00:08:59.051 "compare": false, 00:08:59.051 "compare_and_write": false, 00:08:59.051 "abort": true, 00:08:59.051 "seek_hole": false, 00:08:59.051 "seek_data": false, 00:08:59.051 "copy": true, 00:08:59.051 "nvme_iov_md": false 00:08:59.051 }, 00:08:59.051 "memory_domains": [ 00:08:59.051 { 00:08:59.051 "dma_device_id": "system", 00:08:59.051 "dma_device_type": 1 00:08:59.051 }, 00:08:59.051 { 00:08:59.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.051 "dma_device_type": 2 00:08:59.051 } 00:08:59.051 ], 00:08:59.051 "driver_specific": {} 00:08:59.051 }' 00:08:59.051 14:57:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:59.051 14:57:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:59.309 "name": "BaseBdev2", 00:08:59.309 "aliases": [ 00:08:59.309 "0b751f67-405f-11ef-b2a4-e9dca065e82e" 00:08:59.309 ], 00:08:59.309 "product_name": "Malloc disk", 00:08:59.309 "block_size": 512, 00:08:59.309 "num_blocks": 65536, 00:08:59.309 "uuid": "0b751f67-405f-11ef-b2a4-e9dca065e82e", 00:08:59.309 "assigned_rate_limits": { 00:08:59.309 "rw_ios_per_sec": 0, 00:08:59.309 "rw_mbytes_per_sec": 0, 00:08:59.309 "r_mbytes_per_sec": 0, 00:08:59.309 "w_mbytes_per_sec": 0 00:08:59.309 }, 00:08:59.309 "claimed": true, 00:08:59.309 "claim_type": "exclusive_write", 00:08:59.309 "zoned": false, 00:08:59.309 "supported_io_types": { 00:08:59.309 "read": true, 00:08:59.309 "write": true, 00:08:59.309 "unmap": true, 00:08:59.309 "flush": true, 00:08:59.309 "reset": true, 00:08:59.309 "nvme_admin": false, 00:08:59.309 "nvme_io": false, 00:08:59.309 "nvme_io_md": false, 00:08:59.309 "write_zeroes": true, 00:08:59.309 "zcopy": true, 00:08:59.309 "get_zone_info": false, 00:08:59.309 "zone_management": false, 00:08:59.309 "zone_append": false, 00:08:59.309 "compare": false, 00:08:59.309 "compare_and_write": false, 00:08:59.309 "abort": true, 00:08:59.309 "seek_hole": false, 00:08:59.309 "seek_data": false, 00:08:59.309 "copy": true, 00:08:59.309 "nvme_iov_md": false 00:08:59.309 }, 00:08:59.309 "memory_domains": [ 00:08:59.309 { 00:08:59.309 "dma_device_id": "system", 00:08:59.309 "dma_device_type": 1 00:08:59.309 }, 00:08:59.309 { 00:08:59.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.309 "dma_device_type": 2 00:08:59.309 } 00:08:59.309 ], 00:08:59.309 "driver_specific": {} 00:08:59.309 }' 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:59.309 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:59.569 [2024-07-12 14:57:25.383806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.828 
"name": "Existed_Raid", 00:08:59.828 "uuid": "0ae6499d-405f-11ef-b2a4-e9dca065e82e", 00:08:59.828 "strip_size_kb": 0, 00:08:59.828 "state": "online", 00:08:59.828 "raid_level": "raid1", 00:08:59.828 "superblock": true, 00:08:59.828 "num_base_bdevs": 2, 00:08:59.828 "num_base_bdevs_discovered": 1, 00:08:59.828 "num_base_bdevs_operational": 1, 00:08:59.828 "base_bdevs_list": [ 00:08:59.828 { 00:08:59.828 "name": null, 00:08:59.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.828 "is_configured": false, 00:08:59.828 "data_offset": 2048, 00:08:59.828 "data_size": 63488 00:08:59.828 }, 00:08:59.828 { 00:08:59.828 "name": "BaseBdev2", 00:08:59.828 "uuid": "0b751f67-405f-11ef-b2a4-e9dca065e82e", 00:08:59.828 "is_configured": true, 00:08:59.828 "data_offset": 2048, 00:08:59.828 "data_size": 63488 00:08:59.828 } 00:08:59.828 ] 00:08:59.828 }' 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.828 14:57:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.394 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:00.394 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:00.394 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:00.394 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.652 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:00.652 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.652 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:00.911 [2024-07-12 14:57:26.486046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.911 [2024-07-12 14:57:26.486088] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.911 [2024-07-12 14:57:26.492104] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.911 [2024-07-12 14:57:26.492124] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.911 [2024-07-12 14:57:26.492128] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125e89034a00 name Existed_Raid, state offline 00:09:00.911 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:00.911 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:00.911 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.911 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.170 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:01.170 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:01.171 14:57:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51042 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51042 ']' 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51042 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51042 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:01.171 killing process with pid 51042 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51042' 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51042 00:09:01.171 [2024-07-12 14:57:26.756302] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.171 [2024-07-12 14:57:26.756336] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51042 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:01.171 00:09:01.171 real 0m9.149s 00:09:01.171 user 0m15.859s 00:09:01.171 sys 0m1.677s 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.171 ************************************ 00:09:01.171 END TEST raid_state_function_test_sb 00:09:01.171 ************************************ 00:09:01.171 14:57:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.171 14:57:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:01.171 14:57:26 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:01.171 14:57:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:01.171 14:57:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.171 14:57:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.171 ************************************ 00:09:01.171 START TEST raid_superblock_test 00:09:01.171 ************************************ 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:01.171 14:57:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51316 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51316 /var/tmp/spdk-raid.sock 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51316 ']' 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:01.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.171 14:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.171 [2024-07-12 14:57:26.985612] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:09:01.171 [2024-07-12 14:57:26.985874] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:01.738 EAL: TSC is not safe to use in SMP mode 00:09:01.738 EAL: TSC is not invariant 00:09:01.738 [2024-07-12 14:57:27.522202] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.996 [2024-07-12 14:57:27.606994] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:01.996 [2024-07-12 14:57:27.609310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.996 [2024-07-12 14:57:27.610354] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.996 [2024-07-12 14:57:27.610376] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.257 14:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.257 14:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:02.257 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.258 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:02.516 malloc1 00:09:02.516 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:02.774 [2024-07-12 14:57:28.595959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:02.774 [2024-07-12 14:57:28.596030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.774 [2024-07-12 14:57:28.596043] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34780 00:09:02.774 [2024-07-12 14:57:28.596051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.775 [2024-07-12 14:57:28.597000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.775 [2024-07-12 14:57:28.597026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.033 pt1 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.033 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.033 14:57:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:03.291 malloc2 00:09:03.291 14:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.550 [2024-07-12 14:57:29.172055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.550 [2024-07-12 14:57:29.172112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.550 [2024-07-12 14:57:29.172140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34c80 00:09:03.550 [2024-07-12 14:57:29.172148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.550 [2024-07-12 14:57:29.172839] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.550 [2024-07-12 14:57:29.172867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.550 pt2 00:09:03.550 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:03.550 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:03.550 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:03.809 [2024-07-12 14:57:29.472124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:03.809 [2024-07-12 14:57:29.472738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.809 [2024-07-12 14:57:29.472807] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a29aae34f00 00:09:03.809 [2024-07-12 14:57:29.472813] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:03.809 [2024-07-12 14:57:29.472853] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a29aae97e20 00:09:03.809 [2024-07-12 14:57:29.472919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a29aae34f00 00:09:03.809 [2024-07-12 14:57:29.472924] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a29aae34f00 00:09:03.809 [2024-07-12 14:57:29.472951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.809 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.068 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:04.068 "name": "raid_bdev1", 00:09:04.068 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:04.068 "strip_size_kb": 0, 00:09:04.068 "state": "online", 00:09:04.068 "raid_level": "raid1", 00:09:04.068 "superblock": true, 00:09:04.068 "num_base_bdevs": 2, 00:09:04.068 "num_base_bdevs_discovered": 2, 00:09:04.068 "num_base_bdevs_operational": 2, 00:09:04.068 "base_bdevs_list": [ 00:09:04.068 { 00:09:04.068 "name": "pt1", 00:09:04.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.068 "is_configured": true, 00:09:04.068 "data_offset": 2048, 00:09:04.068 "data_size": 63488 00:09:04.068 }, 00:09:04.068 { 00:09:04.068 "name": "pt2", 00:09:04.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.068 "is_configured": true, 00:09:04.068 "data_offset": 2048, 00:09:04.068 "data_size": 63488 00:09:04.068 } 00:09:04.068 ] 00:09:04.068 }' 00:09:04.068 14:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:04.068 14:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:04.326 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:04.584 [2024-07-12 14:57:30.380315] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:04.584 "name": "raid_bdev1", 00:09:04.584 "aliases": [ 00:09:04.584 "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e" 00:09:04.584 ], 00:09:04.584 "product_name": "Raid Volume", 00:09:04.584 "block_size": 512, 00:09:04.584 "num_blocks": 63488, 00:09:04.584 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:04.584 "assigned_rate_limits": { 00:09:04.584 "rw_ios_per_sec": 0, 00:09:04.584 "rw_mbytes_per_sec": 0, 00:09:04.584 "r_mbytes_per_sec": 0, 00:09:04.584 "w_mbytes_per_sec": 0 00:09:04.584 }, 00:09:04.584 "claimed": false, 00:09:04.584 "zoned": false, 00:09:04.584 "supported_io_types": { 00:09:04.584 "read": true, 00:09:04.584 "write": true, 00:09:04.584 "unmap": false, 00:09:04.584 "flush": false, 00:09:04.584 "reset": true, 00:09:04.584 "nvme_admin": false, 00:09:04.584 "nvme_io": 
false, 00:09:04.584 "nvme_io_md": false, 00:09:04.584 "write_zeroes": true, 00:09:04.584 "zcopy": false, 00:09:04.584 "get_zone_info": false, 00:09:04.584 "zone_management": false, 00:09:04.584 "zone_append": false, 00:09:04.584 "compare": false, 00:09:04.584 "compare_and_write": false, 00:09:04.584 "abort": false, 00:09:04.584 "seek_hole": false, 00:09:04.584 "seek_data": false, 00:09:04.584 "copy": false, 00:09:04.584 "nvme_iov_md": false 00:09:04.584 }, 00:09:04.584 "memory_domains": [ 00:09:04.584 { 00:09:04.584 "dma_device_id": "system", 00:09:04.584 "dma_device_type": 1 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.584 "dma_device_type": 2 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "dma_device_id": "system", 00:09:04.584 "dma_device_type": 1 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.584 "dma_device_type": 2 00:09:04.584 } 00:09:04.584 ], 00:09:04.584 "driver_specific": { 00:09:04.584 "raid": { 00:09:04.584 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:04.584 "strip_size_kb": 0, 00:09:04.584 "state": "online", 00:09:04.584 "raid_level": "raid1", 00:09:04.584 "superblock": true, 00:09:04.584 "num_base_bdevs": 2, 00:09:04.584 "num_base_bdevs_discovered": 2, 00:09:04.584 "num_base_bdevs_operational": 2, 00:09:04.584 "base_bdevs_list": [ 00:09:04.584 { 00:09:04.584 "name": "pt1", 00:09:04.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.584 "is_configured": true, 00:09:04.584 "data_offset": 2048, 00:09:04.584 "data_size": 63488 00:09:04.584 }, 00:09:04.584 { 00:09:04.584 "name": "pt2", 00:09:04.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.584 "is_configured": true, 00:09:04.584 "data_offset": 2048, 00:09:04.584 "data_size": 63488 00:09:04.584 } 00:09:04.584 ] 00:09:04.584 } 00:09:04.584 } 00:09:04.584 }' 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:04.584 pt2' 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:04.584 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:04.844 "name": "pt1", 00:09:04.844 "aliases": [ 00:09:04.844 "00000000-0000-0000-0000-000000000001" 00:09:04.844 ], 00:09:04.844 "product_name": "passthru", 00:09:04.844 "block_size": 512, 00:09:04.844 "num_blocks": 65536, 00:09:04.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.844 "assigned_rate_limits": { 00:09:04.844 "rw_ios_per_sec": 0, 00:09:04.844 "rw_mbytes_per_sec": 0, 00:09:04.844 "r_mbytes_per_sec": 0, 00:09:04.844 "w_mbytes_per_sec": 0 00:09:04.844 }, 00:09:04.844 "claimed": true, 00:09:04.844 "claim_type": "exclusive_write", 00:09:04.844 "zoned": false, 00:09:04.844 "supported_io_types": { 00:09:04.844 "read": true, 00:09:04.844 "write": true, 00:09:04.844 "unmap": true, 00:09:04.844 "flush": true, 00:09:04.844 "reset": true, 00:09:04.844 "nvme_admin": false, 00:09:04.844 "nvme_io": false, 00:09:04.844 "nvme_io_md": false, 00:09:04.844 "write_zeroes": true, 
00:09:04.844 "zcopy": true, 00:09:04.844 "get_zone_info": false, 00:09:04.844 "zone_management": false, 00:09:04.844 "zone_append": false, 00:09:04.844 "compare": false, 00:09:04.844 "compare_and_write": false, 00:09:04.844 "abort": true, 00:09:04.844 "seek_hole": false, 00:09:04.844 "seek_data": false, 00:09:04.844 "copy": true, 00:09:04.844 "nvme_iov_md": false 00:09:04.844 }, 00:09:04.844 "memory_domains": [ 00:09:04.844 { 00:09:04.844 "dma_device_id": "system", 00:09:04.844 "dma_device_type": 1 00:09:04.844 }, 00:09:04.844 { 00:09:04.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.844 "dma_device_type": 2 00:09:04.844 } 00:09:04.844 ], 00:09:04.844 "driver_specific": { 00:09:04.844 "passthru": { 00:09:04.844 "name": "pt1", 00:09:04.844 "base_bdev_name": "malloc1" 00:09:04.844 } 00:09:04.844 } 00:09:04.844 }' 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:04.844 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:05.103 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:05.362 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:05.362 "name": "pt2", 00:09:05.362 "aliases": [ 00:09:05.362 "00000000-0000-0000-0000-000000000002" 00:09:05.362 ], 00:09:05.362 "product_name": "passthru", 00:09:05.362 "block_size": 512, 00:09:05.362 "num_blocks": 65536, 00:09:05.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.362 "assigned_rate_limits": { 00:09:05.362 "rw_ios_per_sec": 0, 00:09:05.362 "rw_mbytes_per_sec": 0, 00:09:05.362 "r_mbytes_per_sec": 0, 00:09:05.362 "w_mbytes_per_sec": 0 00:09:05.362 }, 00:09:05.362 "claimed": true, 00:09:05.362 "claim_type": "exclusive_write", 00:09:05.362 "zoned": false, 00:09:05.362 "supported_io_types": { 00:09:05.362 "read": true, 00:09:05.362 "write": true, 00:09:05.362 "unmap": true, 00:09:05.362 "flush": true, 00:09:05.362 "reset": true, 00:09:05.362 "nvme_admin": false, 00:09:05.362 "nvme_io": false, 00:09:05.362 "nvme_io_md": false, 00:09:05.362 "write_zeroes": true, 00:09:05.362 "zcopy": true, 00:09:05.362 "get_zone_info": false, 00:09:05.362 "zone_management": false, 00:09:05.362 "zone_append": false, 00:09:05.362 
"compare": false, 00:09:05.362 "compare_and_write": false, 00:09:05.362 "abort": true, 00:09:05.362 "seek_hole": false, 00:09:05.362 "seek_data": false, 00:09:05.362 "copy": true, 00:09:05.362 "nvme_iov_md": false 00:09:05.362 }, 00:09:05.363 "memory_domains": [ 00:09:05.363 { 00:09:05.363 "dma_device_id": "system", 00:09:05.363 "dma_device_type": 1 00:09:05.363 }, 00:09:05.363 { 00:09:05.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.363 "dma_device_type": 2 00:09:05.363 } 00:09:05.363 ], 00:09:05.363 "driver_specific": { 00:09:05.363 "passthru": { 00:09:05.363 "name": "pt2", 00:09:05.363 "base_bdev_name": "malloc2" 00:09:05.363 } 00:09:05.363 } 00:09:05.363 }' 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:05.363 14:57:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:05.363 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:05.363 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:05.363 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:05.622 [2024-07-12 14:57:31.256468] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.622 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0f4f9c7c-405f-11ef-b2a4-e9dca065e82e 00:09:05.622 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0f4f9c7c-405f-11ef-b2a4-e9dca065e82e ']' 00:09:05.622 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:05.880 [2024-07-12 14:57:31.548480] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.880 [2024-07-12 14:57:31.548508] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.880 [2024-07-12 14:57:31.548533] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.880 [2024-07-12 14:57:31.548547] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.880 [2024-07-12 14:57:31.548551] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae34f00 name raid_bdev1, state offline 00:09:05.880 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:05.880 14:57:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.138 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:06.138 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:06.138 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:06.138 14:57:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:06.401 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:06.401 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.967 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.968 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.968 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:06.968 14:57:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:09:07.224 [2024-07-12 14:57:33.040745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:07.224 [2024-07-12 14:57:33.041311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:07.225 [2024-07-12 14:57:33.041329] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:09:07.225 [2024-07-12 14:57:33.041366] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:07.225 [2024-07-12 14:57:33.041377] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.225 [2024-07-12 14:57:33.041382] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae34c80 name raid_bdev1, state configuring 00:09:07.225 request: 00:09:07.225 { 00:09:07.225 "name": "raid_bdev1", 00:09:07.225 "raid_level": "raid1", 00:09:07.225 "base_bdevs": [ 00:09:07.225 "malloc1", 00:09:07.225 "malloc2" 00:09:07.225 ], 00:09:07.225 "superblock": false, 00:09:07.225 "method": "bdev_raid_create", 00:09:07.225 "req_id": 1 00:09:07.225 } 00:09:07.225 Got JSON-RPC error response 00:09:07.225 response: 00:09:07.225 { 00:09:07.225 "code": -17, 00:09:07.225 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:07.225 } 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.483 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:07.741 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:07.741 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:07.741 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.741 [2024-07-12 14:57:33.561103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.741 [2024-07-12 14:57:33.561160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.741 [2024-07-12 14:57:33.561179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34780 00:09:07.741 [2024-07-12 14:57:33.561187] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.741 [2024-07-12 14:57:33.561819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.741 [2024-07-12 14:57:33.561843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.741 [2024-07-12 14:57:33.561867] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.741 [2024-07-12 14:57:33.561878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.741 pt1 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.999 "name": "raid_bdev1", 00:09:07.999 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:07.999 "strip_size_kb": 0, 00:09:07.999 "state": "configuring", 00:09:07.999 "raid_level": "raid1", 00:09:07.999 "superblock": true, 00:09:07.999 "num_base_bdevs": 2, 00:09:07.999 "num_base_bdevs_discovered": 1, 00:09:07.999 "num_base_bdevs_operational": 2, 00:09:07.999 "base_bdevs_list": [ 00:09:07.999 { 00:09:07.999 "name": "pt1", 00:09:07.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.999 "is_configured": true, 00:09:07.999 "data_offset": 2048, 00:09:07.999 "data_size": 63488 00:09:07.999 }, 00:09:07.999 { 00:09:07.999 "name": null, 00:09:07.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.999 "is_configured": false, 00:09:07.999 "data_offset": 2048, 00:09:07.999 "data_size": 63488 00:09:07.999 } 00:09:07.999 ] 00:09:07.999 }' 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.999 14:57:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.565 [2024-07-12 14:57:34.353434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.565 [2024-07-12 14:57:34.353488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.565 [2024-07-12 14:57:34.353510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34f00 00:09:08.565 [2024-07-12 14:57:34.353518] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.565 [2024-07-12 14:57:34.353626] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.565 [2024-07-12 14:57:34.353637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.565 [2024-07-12 14:57:34.353660] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:08.565 [2024-07-12 14:57:34.353669] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.565 [2024-07-12 14:57:34.353696] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a29aae35180 00:09:08.565 [2024-07-12 14:57:34.353701] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.565 [2024-07-12 14:57:34.353721] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a29aae97e20 00:09:08.565 [2024-07-12 14:57:34.353775] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a29aae35180 00:09:08.565 [2024-07-12 14:57:34.353780] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a29aae35180 00:09:08.565 [2024-07-12 14:57:34.353801] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.565 pt2 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.565 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.156 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:09.156 "name": "raid_bdev1", 00:09:09.156 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:09.156 "strip_size_kb": 0, 00:09:09.156 "state": "online", 00:09:09.156 "raid_level": "raid1", 00:09:09.156 "superblock": true, 00:09:09.156 "num_base_bdevs": 2, 00:09:09.156 "num_base_bdevs_discovered": 2, 00:09:09.156 "num_base_bdevs_operational": 2, 00:09:09.156 "base_bdevs_list": [ 00:09:09.156 { 00:09:09.156 "name": "pt1", 00:09:09.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.156 "is_configured": true, 00:09:09.156 "data_offset": 2048, 00:09:09.156 "data_size": 63488 00:09:09.156 }, 00:09:09.156 { 00:09:09.156 "name": "pt2", 00:09:09.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.157 "is_configured": true, 00:09:09.157 "data_offset": 2048, 00:09:09.157 "data_size": 63488 00:09:09.157 } 00:09:09.157 ] 00:09:09.157 }' 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.157 
14:57:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:09.157 14:57:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:09.415 [2024-07-12 14:57:35.221603] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.673 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:09.673 "name": "raid_bdev1", 00:09:09.673 "aliases": [ 00:09:09.673 "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e" 00:09:09.673 ], 00:09:09.673 "product_name": "Raid Volume", 00:09:09.673 "block_size": 512, 00:09:09.673 "num_blocks": 63488, 00:09:09.673 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:09.673 "assigned_rate_limits": { 00:09:09.673 "rw_ios_per_sec": 0, 00:09:09.673 "rw_mbytes_per_sec": 0, 00:09:09.673 "r_mbytes_per_sec": 0, 00:09:09.673 "w_mbytes_per_sec": 0 00:09:09.673 }, 00:09:09.673 "claimed": false, 00:09:09.673 "zoned": false, 00:09:09.673 "supported_io_types": { 00:09:09.673 "read": true, 00:09:09.673 "write": true, 00:09:09.673 "unmap": false, 00:09:09.673 "flush": false, 00:09:09.673 "reset": true, 00:09:09.673 "nvme_admin": false, 00:09:09.673 "nvme_io": false, 00:09:09.673 "nvme_io_md": false, 00:09:09.673 "write_zeroes": true, 00:09:09.673 "zcopy": false, 00:09:09.673 "get_zone_info": false, 00:09:09.673 "zone_management": false, 00:09:09.673 "zone_append": false, 00:09:09.673 "compare": false, 00:09:09.673 "compare_and_write": false, 00:09:09.673 "abort": false, 00:09:09.673 "seek_hole": false, 00:09:09.673 "seek_data": false, 00:09:09.673 "copy": false, 00:09:09.673 "nvme_iov_md": false 00:09:09.673 }, 00:09:09.673 "memory_domains": [ 00:09:09.673 { 00:09:09.673 "dma_device_id": "system", 00:09:09.673 "dma_device_type": 1 00:09:09.673 }, 00:09:09.673 { 00:09:09.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.673 "dma_device_type": 2 00:09:09.673 }, 00:09:09.673 { 00:09:09.673 "dma_device_id": "system", 00:09:09.673 "dma_device_type": 1 00:09:09.673 }, 00:09:09.673 { 00:09:09.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.673 "dma_device_type": 2 00:09:09.673 } 00:09:09.673 ], 00:09:09.673 "driver_specific": { 00:09:09.673 "raid": { 00:09:09.673 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:09.673 "strip_size_kb": 0, 00:09:09.673 "state": "online", 00:09:09.673 "raid_level": "raid1", 00:09:09.673 "superblock": true, 00:09:09.673 "num_base_bdevs": 2, 00:09:09.673 "num_base_bdevs_discovered": 2, 00:09:09.673 "num_base_bdevs_operational": 2, 00:09:09.673 "base_bdevs_list": [ 00:09:09.673 { 00:09:09.673 "name": "pt1", 00:09:09.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.673 "is_configured": true, 00:09:09.673 
"data_offset": 2048, 00:09:09.673 "data_size": 63488 00:09:09.673 }, 00:09:09.673 { 00:09:09.673 "name": "pt2", 00:09:09.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.674 "is_configured": true, 00:09:09.674 "data_offset": 2048, 00:09:09.674 "data_size": 63488 00:09:09.674 } 00:09:09.674 ] 00:09:09.674 } 00:09:09.674 } 00:09:09.674 }' 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:09.674 pt2' 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:09.674 "name": "pt1", 00:09:09.674 "aliases": [ 00:09:09.674 "00000000-0000-0000-0000-000000000001" 00:09:09.674 ], 00:09:09.674 "product_name": "passthru", 00:09:09.674 "block_size": 512, 00:09:09.674 "num_blocks": 65536, 00:09:09.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.674 "assigned_rate_limits": { 00:09:09.674 "rw_ios_per_sec": 0, 00:09:09.674 "rw_mbytes_per_sec": 0, 00:09:09.674 "r_mbytes_per_sec": 0, 00:09:09.674 "w_mbytes_per_sec": 0 00:09:09.674 }, 00:09:09.674 "claimed": true, 00:09:09.674 "claim_type": "exclusive_write", 00:09:09.674 "zoned": false, 00:09:09.674 "supported_io_types": { 00:09:09.674 "read": true, 00:09:09.674 "write": true, 00:09:09.674 "unmap": true, 00:09:09.674 "flush": true, 00:09:09.674 "reset": true, 00:09:09.674 "nvme_admin": false, 00:09:09.674 "nvme_io": false, 00:09:09.674 "nvme_io_md": false, 00:09:09.674 "write_zeroes": true, 00:09:09.674 "zcopy": true, 00:09:09.674 "get_zone_info": false, 00:09:09.674 "zone_management": false, 00:09:09.674 "zone_append": false, 00:09:09.674 "compare": false, 00:09:09.674 "compare_and_write": false, 00:09:09.674 "abort": true, 00:09:09.674 "seek_hole": false, 00:09:09.674 "seek_data": false, 00:09:09.674 "copy": true, 00:09:09.674 "nvme_iov_md": false 00:09:09.674 }, 00:09:09.674 "memory_domains": [ 00:09:09.674 { 00:09:09.674 "dma_device_id": "system", 00:09:09.674 "dma_device_type": 1 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.674 "dma_device_type": 2 00:09:09.674 } 00:09:09.674 ], 00:09:09.674 "driver_specific": { 00:09:09.674 "passthru": { 00:09:09.674 "name": "pt1", 00:09:09.674 "base_bdev_name": "malloc1" 00:09:09.674 } 00:09:09.674 } 00:09:09.674 }' 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.674 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:09.932 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:10.190 "name": "pt2", 00:09:10.190 "aliases": [ 00:09:10.190 "00000000-0000-0000-0000-000000000002" 00:09:10.190 ], 00:09:10.190 "product_name": "passthru", 00:09:10.190 "block_size": 512, 00:09:10.190 "num_blocks": 65536, 00:09:10.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.190 "assigned_rate_limits": { 00:09:10.190 "rw_ios_per_sec": 0, 00:09:10.190 "rw_mbytes_per_sec": 0, 00:09:10.190 "r_mbytes_per_sec": 0, 00:09:10.190 "w_mbytes_per_sec": 0 00:09:10.190 }, 00:09:10.190 "claimed": true, 00:09:10.190 "claim_type": "exclusive_write", 00:09:10.190 "zoned": false, 00:09:10.190 "supported_io_types": { 00:09:10.190 "read": true, 00:09:10.190 "write": true, 00:09:10.190 "unmap": true, 00:09:10.190 "flush": true, 00:09:10.190 "reset": true, 00:09:10.190 "nvme_admin": false, 00:09:10.190 "nvme_io": false, 00:09:10.190 "nvme_io_md": false, 00:09:10.190 "write_zeroes": true, 00:09:10.190 "zcopy": true, 00:09:10.190 "get_zone_info": false, 00:09:10.190 "zone_management": false, 00:09:10.190 "zone_append": false, 00:09:10.190 "compare": false, 00:09:10.190 "compare_and_write": false, 00:09:10.190 "abort": true, 00:09:10.190 "seek_hole": false, 00:09:10.190 "seek_data": false, 00:09:10.190 "copy": true, 00:09:10.190 "nvme_iov_md": false 00:09:10.190 }, 00:09:10.190 "memory_domains": [ 00:09:10.190 { 00:09:10.190 "dma_device_id": "system", 00:09:10.190 "dma_device_type": 1 00:09:10.190 }, 00:09:10.190 { 00:09:10.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.190 "dma_device_type": 2 00:09:10.190 } 00:09:10.190 ], 00:09:10.190 "driver_specific": { 00:09:10.190 "passthru": { 00:09:10.190 "name": "pt2", 00:09:10.190 "base_bdev_name": "malloc2" 00:09:10.190 } 00:09:10.190 } 00:09:10.190 }' 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:10.190 14:57:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:10.464 [2024-07-12 14:57:36.105730] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.464 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0f4f9c7c-405f-11ef-b2a4-e9dca065e82e '!=' 0f4f9c7c-405f-11ef-b2a4-e9dca065e82e ']' 00:09:10.464 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:09:10.464 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:10.464 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:10.464 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:10.722 [2024-07-12 14:57:36.341741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.722 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.980 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.980 "name": "raid_bdev1", 00:09:10.980 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:10.980 "strip_size_kb": 0, 00:09:10.980 "state": "online", 00:09:10.980 "raid_level": "raid1", 00:09:10.980 "superblock": true, 00:09:10.980 "num_base_bdevs": 2, 00:09:10.980 "num_base_bdevs_discovered": 1, 00:09:10.980 "num_base_bdevs_operational": 1, 00:09:10.980 "base_bdevs_list": [ 00:09:10.980 { 00:09:10.980 "name": null, 00:09:10.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.980 "is_configured": false, 00:09:10.980 "data_offset": 
2048, 00:09:10.980 "data_size": 63488 00:09:10.980 }, 00:09:10.980 { 00:09:10.980 "name": "pt2", 00:09:10.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.980 "is_configured": true, 00:09:10.980 "data_offset": 2048, 00:09:10.980 "data_size": 63488 00:09:10.980 } 00:09:10.980 ] 00:09:10.980 }' 00:09:10.980 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.980 14:57:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.238 14:57:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:11.495 [2024-07-12 14:57:37.313875] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.495 [2024-07-12 14:57:37.313901] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.495 [2024-07-12 14:57:37.313925] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.495 [2024-07-12 14:57:37.313936] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.495 [2024-07-12 14:57:37.313941] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae35180 name raid_bdev1, state offline 00:09:11.753 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.753 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:09:12.012 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:09:12.012 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:09:12.012 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:09:12.012 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:09:12.012 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:09:12.271 14:57:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.530 [2024-07-12 14:57:38.174003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.530 [2024-07-12 14:57:38.174059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.530 [2024-07-12 14:57:38.174071] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34f00 00:09:12.530 [2024-07-12 14:57:38.174079] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.530 [2024-07-12 14:57:38.174705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.530 
[2024-07-12 14:57:38.174731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.530 [2024-07-12 14:57:38.174756] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.530 [2024-07-12 14:57:38.174767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.530 [2024-07-12 14:57:38.174793] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a29aae35180 00:09:12.530 [2024-07-12 14:57:38.174797] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.530 [2024-07-12 14:57:38.174817] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a29aae97e20 00:09:12.530 [2024-07-12 14:57:38.174864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a29aae35180 00:09:12.530 [2024-07-12 14:57:38.174878] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a29aae35180 00:09:12.530 [2024-07-12 14:57:38.174901] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.530 pt2 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.530 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.788 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:12.788 "name": "raid_bdev1", 00:09:12.788 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:12.788 "strip_size_kb": 0, 00:09:12.788 "state": "online", 00:09:12.788 "raid_level": "raid1", 00:09:12.788 "superblock": true, 00:09:12.788 "num_base_bdevs": 2, 00:09:12.788 "num_base_bdevs_discovered": 1, 00:09:12.788 "num_base_bdevs_operational": 1, 00:09:12.788 "base_bdevs_list": [ 00:09:12.788 { 00:09:12.788 "name": null, 00:09:12.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.788 "is_configured": false, 00:09:12.788 "data_offset": 2048, 00:09:12.788 "data_size": 63488 00:09:12.788 }, 00:09:12.788 { 00:09:12.788 "name": "pt2", 00:09:12.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.788 "is_configured": true, 00:09:12.788 "data_offset": 2048, 00:09:12.788 "data_size": 63488 00:09:12.788 } 00:09:12.788 ] 00:09:12.788 }' 00:09:12.788 14:57:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:12.788 14:57:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.047 14:57:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:13.305 [2024-07-12 14:57:39.042108] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.305 [2024-07-12 14:57:39.042132] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.305 [2024-07-12 14:57:39.042154] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.305 [2024-07-12 14:57:39.042165] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.305 [2024-07-12 14:57:39.042170] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae35180 name raid_bdev1, state offline 00:09:13.305 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:09:13.305 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.562 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:09:13.562 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:09:13.562 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:09:13.562 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.819 [2024-07-12 14:57:39.502178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.820 [2024-07-12 14:57:39.502229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.820 [2024-07-12 14:57:39.502242] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a29aae34c80 00:09:13.820 [2024-07-12 14:57:39.502250] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.820 [2024-07-12 14:57:39.502879] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.820 [2024-07-12 14:57:39.502899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.820 [2024-07-12 14:57:39.502923] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:13.820 [2024-07-12 14:57:39.502935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.820 [2024-07-12 14:57:39.502977] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:13.820 [2024-07-12 14:57:39.502982] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.820 [2024-07-12 14:57:39.502987] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae34780 name raid_bdev1, state configuring 00:09:13.820 [2024-07-12 14:57:39.503003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.820 [2024-07-12 14:57:39.503019] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a29aae34780 00:09:13.820 [2024-07-12 14:57:39.503023] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.820 [2024-07-12 14:57:39.503043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a29aae97e20 00:09:13.820 [2024-07-12 14:57:39.503089] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a29aae34780 00:09:13.820 [2024-07-12 14:57:39.503094] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a29aae34780 00:09:13.820 [2024-07-12 14:57:39.503114] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.820 pt1 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.820 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.090 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:14.090 "name": "raid_bdev1", 00:09:14.090 "uuid": "0f4f9c7c-405f-11ef-b2a4-e9dca065e82e", 00:09:14.090 "strip_size_kb": 0, 00:09:14.090 "state": "online", 00:09:14.090 "raid_level": "raid1", 00:09:14.090 "superblock": true, 00:09:14.090 "num_base_bdevs": 2, 00:09:14.090 "num_base_bdevs_discovered": 1, 00:09:14.090 "num_base_bdevs_operational": 1, 00:09:14.090 "base_bdevs_list": [ 00:09:14.090 { 00:09:14.090 "name": null, 00:09:14.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.090 "is_configured": false, 00:09:14.090 "data_offset": 2048, 00:09:14.090 "data_size": 63488 00:09:14.090 }, 00:09:14.090 { 00:09:14.090 "name": "pt2", 00:09:14.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.090 "is_configured": true, 00:09:14.090 "data_offset": 2048, 00:09:14.090 "data_size": 63488 00:09:14.090 } 00:09:14.090 ] 00:09:14.090 }' 00:09:14.090 14:57:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:14.090 14:57:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.372 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:14.372 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:14.630 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:09:14.630 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:14.630 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:09:14.888 [2024-07-12 14:57:40.650380] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 0f4f9c7c-405f-11ef-b2a4-e9dca065e82e '!=' 0f4f9c7c-405f-11ef-b2a4-e9dca065e82e ']' 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51316 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51316 ']' 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51316 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51316 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:14.888 killing process with pid 51316 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51316' 00:09:14.888 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51316 00:09:14.888 [2024-07-12 14:57:40.680184] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.889 [2024-07-12 14:57:40.680209] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.889 [2024-07-12 14:57:40.680220] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.889 [2024-07-12 14:57:40.680224] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a29aae34780 name raid_bdev1, state offline 00:09:14.889 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51316 00:09:14.889 [2024-07-12 14:57:40.691752] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.146 14:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:15.146 00:09:15.146 real 0m13.886s 00:09:15.146 user 0m24.866s 00:09:15.146 sys 0m2.140s 00:09:15.146 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.146 14:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.146 ************************************ 00:09:15.146 END TEST raid_superblock_test 00:09:15.146 ************************************ 00:09:15.146 14:57:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:15.147 14:57:40 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:15.147 14:57:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:15.147 
14:57:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.147 14:57:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.147 ************************************ 00:09:15.147 START TEST raid_read_error_test 00:09:15.147 ************************************ 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VZHmUx12AN 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51709 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51709 /var/tmp/spdk-raid.sock 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51709 ']' 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.147 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.147 14:57:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.147 [2024-07-12 14:57:40.924586] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:09:15.147 [2024-07-12 14:57:40.924754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:15.713 EAL: TSC is not safe to use in SMP mode 00:09:15.713 EAL: TSC is not invariant 00:09:15.713 [2024-07-12 14:57:41.441704] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.713 [2024-07-12 14:57:41.525186] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:15.713 [2024-07-12 14:57:41.527304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.714 [2024-07-12 14:57:41.528043] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.714 [2024-07-12 14:57:41.528057] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.279 14:57:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.279 14:57:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:16.279 14:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:16.279 14:57:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.537 BaseBdev1_malloc 00:09:16.537 14:57:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:16.830 true 00:09:16.830 14:57:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.087 [2024-07-12 14:57:42.672612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.088 [2024-07-12 14:57:42.672724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.088 [2024-07-12 14:57:42.672767] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x113bb8a34780 00:09:17.088 [2024-07-12 14:57:42.672776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.088 [2024-07-12 14:57:42.673491] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.088 [2024-07-12 14:57:42.673532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.088 BaseBdev1 00:09:17.088 14:57:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:17.088 14:57:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.344 BaseBdev2_malloc 00:09:17.344 14:57:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:17.344 true 00:09:17.601 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.858 [2024-07-12 14:57:43.428718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.858 [2024-07-12 14:57:43.428782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.858 [2024-07-12 14:57:43.428808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x113bb8a34c80 00:09:17.858 [2024-07-12 14:57:43.428817] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.858 [2024-07-12 14:57:43.429580] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.858 [2024-07-12 14:57:43.429610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.858 BaseBdev2 00:09:17.858 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:17.858 [2024-07-12 14:57:43.652747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.858 [2024-07-12 14:57:43.653420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.859 [2024-07-12 14:57:43.653485] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x113bb8a34f00 00:09:17.859 [2024-07-12 14:57:43.653491] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.859 [2024-07-12 14:57:43.653522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x113bb8aa0e20 00:09:17.859 [2024-07-12 14:57:43.653594] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x113bb8a34f00 00:09:17.859 [2024-07-12 14:57:43.653603] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x113bb8a34f00 00:09:17.859 [2024-07-12 14:57:43.653644] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.859 14:57:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.859 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.424 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.424 "name": "raid_bdev1", 00:09:18.424 "uuid": "17c3670e-405f-11ef-b2a4-e9dca065e82e", 00:09:18.424 "strip_size_kb": 0, 00:09:18.424 "state": "online", 00:09:18.424 "raid_level": "raid1", 00:09:18.424 "superblock": true, 00:09:18.424 "num_base_bdevs": 2, 00:09:18.424 "num_base_bdevs_discovered": 2, 00:09:18.424 "num_base_bdevs_operational": 2, 00:09:18.424 "base_bdevs_list": [ 00:09:18.424 { 00:09:18.424 "name": "BaseBdev1", 00:09:18.424 "uuid": "6a542315-aadb-4a5b-aea0-26b259c34d2d", 00:09:18.424 "is_configured": true, 00:09:18.424 "data_offset": 2048, 00:09:18.424 "data_size": 63488 00:09:18.424 }, 00:09:18.424 { 00:09:18.424 "name": "BaseBdev2", 00:09:18.424 "uuid": "62f14125-2a40-5a5f-a3a9-ffd93691bbc5", 00:09:18.424 "is_configured": true, 00:09:18.424 "data_offset": 2048, 00:09:18.424 "data_size": 63488 00:09:18.424 } 00:09:18.424 ] 00:09:18.424 }' 00:09:18.424 14:57:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.424 14:57:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.682 14:57:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:18.682 14:57:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:18.682 [2024-07-12 14:57:44.437146] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x113bb8aa0ec0 00:09:19.615 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.872 14:57:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.872 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.130 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:20.130 "name": "raid_bdev1", 00:09:20.130 "uuid": "17c3670e-405f-11ef-b2a4-e9dca065e82e", 00:09:20.130 "strip_size_kb": 0, 00:09:20.130 "state": "online", 00:09:20.130 "raid_level": "raid1", 00:09:20.130 "superblock": true, 00:09:20.130 "num_base_bdevs": 2, 00:09:20.130 "num_base_bdevs_discovered": 2, 00:09:20.130 "num_base_bdevs_operational": 2, 00:09:20.130 "base_bdevs_list": [ 00:09:20.130 { 00:09:20.130 "name": "BaseBdev1", 00:09:20.130 "uuid": "6a542315-aadb-4a5b-aea0-26b259c34d2d", 00:09:20.130 "is_configured": true, 00:09:20.130 "data_offset": 2048, 00:09:20.130 "data_size": 63488 00:09:20.130 }, 00:09:20.130 { 00:09:20.130 "name": "BaseBdev2", 00:09:20.130 "uuid": "62f14125-2a40-5a5f-a3a9-ffd93691bbc5", 00:09:20.130 "is_configured": true, 00:09:20.130 "data_offset": 2048, 00:09:20.130 "data_size": 63488 00:09:20.130 } 00:09:20.130 ] 00:09:20.130 }' 00:09:20.130 14:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:20.130 14:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.696 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:20.696 [2024-07-12 14:57:46.512885] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.696 [2024-07-12 14:57:46.512912] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.696 [2024-07-12 14:57:46.513286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.696 [2024-07-12 14:57:46.513296] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.696 [2024-07-12 14:57:46.513310] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.696 [2024-07-12 14:57:46.513314] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113bb8a34f00 name raid_bdev1, state offline 00:09:20.696 0 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51709 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51709 ']' 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51709 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51709 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:20.955 killing process with pid 51709 00:09:20.955 
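For reference, the read-error scenario exercised above boils down to a short sequence of RPC calls against the bdevperf socket. A minimal sketch using the same tool paths and bdev names seen in this run (the bdevperf workload and the retry/state-verification logic that bdev_raid.sh wraps around these calls are omitted here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # each RAID slot is a malloc bdev wrapped in an error-injection bdev and a passthru bdev
    $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC -s $SOCK bdev_error_create BaseBdev1_malloc
    $RPC -s $SOCK bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $RPC -s $SOCK bdev_error_create BaseBdev2_malloc
    $RPC -s $SOCK bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
    # assemble the raid1 array with an on-disk superblock (-s)
    $RPC -s $SOCK bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # make reads on the first base bdev fail; for read errors the raid1 bdev is
    # expected to stay online with both base bdevs configured, as the state dump above shows
    $RPC -s $SOCK bdev_error_inject_error EE_BaseBdev1_malloc read failure
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'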
14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51709' 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51709 00:09:20.955 [2024-07-12 14:57:46.540245] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51709 00:09:20.955 [2024-07-12 14:57:46.551509] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VZHmUx12AN 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:20.955 00:09:20.955 real 0m5.813s 00:09:20.955 user 0m8.988s 00:09:20.955 sys 0m0.955s 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.955 14:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.955 ************************************ 00:09:20.955 END TEST raid_read_error_test 00:09:20.955 ************************************ 00:09:20.955 14:57:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:20.955 14:57:46 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:20.955 14:57:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:20.955 14:57:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.955 14:57:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.955 ************************************ 00:09:20.955 START TEST raid_write_error_test 00:09:20.956 ************************************ 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:20.956 
14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wQHBzyoe84 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51837 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51837 /var/tmp/spdk-raid.sock 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51837 ']' 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.956 14:57:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 [2024-07-12 14:57:46.784515] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:09:21.214 [2024-07-12 14:57:46.784722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:21.781 EAL: TSC is not safe to use in SMP mode 00:09:21.781 EAL: TSC is not invariant 00:09:21.781 [2024-07-12 14:57:47.331061] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.781 [2024-07-12 14:57:47.420856] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
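The write-error variant starting here is driven the same way as the read test: bdevperf is launched in wait mode (-z) against a private RPC socket, the script waits until that socket accepts RPC calls, the base bdevs and raid_bdev1 are created over RPC, and the workload is only started afterwards through bdevperf.py. A rough equivalent of that startup, using the paths from this run; the polling loop below is a simplified stand-in for the waitforlisten helper in autotest_common.sh, which does more bookkeeping:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # start bdevperf suspended (-z) so the bdevs can be configured over RPC first
    $BDEVPERF -r $SOCK -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # wait until the RPC socket answers (stand-in for waitforlisten)
    until $RPC -s $SOCK rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # ... create the error/passthru base bdevs and raid_bdev1 as above, then run the workload:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests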
00:09:21.781 [2024-07-12 14:57:47.422927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.781 [2024-07-12 14:57:47.423674] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.781 [2024-07-12 14:57:47.423694] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.040 14:57:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.040 14:57:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:22.040 14:57:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:22.040 14:57:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.298 BaseBdev1_malloc 00:09:22.298 14:57:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:22.556 true 00:09:22.556 14:57:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.814 [2024-07-12 14:57:48.567920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.814 [2024-07-12 14:57:48.567976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.814 [2024-07-12 14:57:48.568002] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x741ce834780 00:09:22.814 [2024-07-12 14:57:48.568011] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.814 [2024-07-12 14:57:48.568646] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.814 [2024-07-12 14:57:48.568672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.814 BaseBdev1 00:09:22.814 14:57:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:22.814 14:57:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:23.121 BaseBdev2_malloc 00:09:23.121 14:57:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:23.379 true 00:09:23.379 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.637 [2024-07-12 14:57:49.328002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.637 [2024-07-12 14:57:49.328071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.637 [2024-07-12 14:57:49.328101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x741ce834c80 00:09:23.637 [2024-07-12 14:57:49.328110] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.637 [2024-07-12 14:57:49.328771] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.637 [2024-07-12 14:57:49.328795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:09:23.637 BaseBdev2 00:09:23.637 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:23.895 [2024-07-12 14:57:49.564026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.895 [2024-07-12 14:57:49.564627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.895 [2024-07-12 14:57:49.564692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x741ce834f00 00:09:23.896 [2024-07-12 14:57:49.564698] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:23.896 [2024-07-12 14:57:49.564729] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x741ce8a0e20 00:09:23.896 [2024-07-12 14:57:49.564811] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x741ce834f00 00:09:23.896 [2024-07-12 14:57:49.564816] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x741ce834f00 00:09:23.896 [2024-07-12 14:57:49.564843] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.896 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.154 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.154 "name": "raid_bdev1", 00:09:24.154 "uuid": "1b49646f-405f-11ef-b2a4-e9dca065e82e", 00:09:24.154 "strip_size_kb": 0, 00:09:24.154 "state": "online", 00:09:24.154 "raid_level": "raid1", 00:09:24.154 "superblock": true, 00:09:24.154 "num_base_bdevs": 2, 00:09:24.154 "num_base_bdevs_discovered": 2, 00:09:24.154 "num_base_bdevs_operational": 2, 00:09:24.154 "base_bdevs_list": [ 00:09:24.154 { 00:09:24.154 "name": "BaseBdev1", 00:09:24.154 "uuid": "23064679-a64c-1d55-9ae0-9cf8ff6246a5", 00:09:24.154 "is_configured": true, 00:09:24.154 "data_offset": 2048, 00:09:24.154 "data_size": 63488 00:09:24.154 }, 00:09:24.154 { 00:09:24.154 "name": "BaseBdev2", 00:09:24.154 "uuid": "bc8b9baf-c3e3-9553-a6a6-1c712692e3d6", 
00:09:24.154 "is_configured": true, 00:09:24.154 "data_offset": 2048, 00:09:24.154 "data_size": 63488 00:09:24.154 } 00:09:24.154 ] 00:09:24.154 }' 00:09:24.154 14:57:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.154 14:57:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.412 14:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:24.412 14:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:24.669 [2024-07-12 14:57:50.284244] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x741ce8a0ec0 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:25.603 [2024-07-12 14:57:51.400800] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:25.603 [2024-07-12 14:57:51.400856] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.603 [2024-07-12 14:57:51.400980] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x741ce8a0ec0 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.603 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.170 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:26.170 "name": "raid_bdev1", 00:09:26.170 "uuid": "1b49646f-405f-11ef-b2a4-e9dca065e82e", 00:09:26.170 "strip_size_kb": 0, 00:09:26.170 "state": "online", 00:09:26.170 "raid_level": "raid1", 00:09:26.170 
"superblock": true, 00:09:26.170 "num_base_bdevs": 2, 00:09:26.170 "num_base_bdevs_discovered": 1, 00:09:26.170 "num_base_bdevs_operational": 1, 00:09:26.170 "base_bdevs_list": [ 00:09:26.170 { 00:09:26.170 "name": null, 00:09:26.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.170 "is_configured": false, 00:09:26.170 "data_offset": 2048, 00:09:26.170 "data_size": 63488 00:09:26.170 }, 00:09:26.170 { 00:09:26.170 "name": "BaseBdev2", 00:09:26.170 "uuid": "bc8b9baf-c3e3-9553-a6a6-1c712692e3d6", 00:09:26.170 "is_configured": true, 00:09:26.170 "data_offset": 2048, 00:09:26.170 "data_size": 63488 00:09:26.170 } 00:09:26.170 ] 00:09:26.170 }' 00:09:26.170 14:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:26.170 14:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.429 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:26.687 [2024-07-12 14:57:52.402196] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.687 [2024-07-12 14:57:52.402225] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.687 [2024-07-12 14:57:52.402538] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.687 [2024-07-12 14:57:52.402547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.687 [2024-07-12 14:57:52.402557] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.687 [2024-07-12 14:57:52.402562] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x741ce834f00 name raid_bdev1, state offline 00:09:26.687 0 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51837 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51837 ']' 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51837 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51837 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:26.687 killing process with pid 51837 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51837' 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51837 00:09:26.687 [2024-07-12 14:57:52.430534] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.687 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51837 00:09:26.687 [2024-07-12 14:57:52.441671] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wQHBzyoe84 00:09:26.946 14:57:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:26.946 00:09:26.946 real 0m5.845s 00:09:26.946 user 0m8.935s 00:09:26.946 sys 0m1.147s 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.946 14:57:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.946 ************************************ 00:09:26.946 END TEST raid_write_error_test 00:09:26.946 ************************************ 00:09:26.946 14:57:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:26.946 14:57:52 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:26.946 14:57:52 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:26.946 14:57:52 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:26.946 14:57:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:26.946 14:57:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.946 14:57:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.946 ************************************ 00:09:26.946 START TEST raid_state_function_test 00:09:26.946 ************************************ 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:26.946 14:57:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51959 00:09:26.946 Process raid pid: 51959 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51959' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51959 /var/tmp/spdk-raid.sock 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51959 ']' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.946 14:57:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.946 [2024-07-12 14:57:52.671128] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:09:26.946 [2024-07-12 14:57:52.671367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:27.532 EAL: TSC is not safe to use in SMP mode 00:09:27.532 EAL: TSC is not invariant 00:09:27.532 [2024-07-12 14:57:53.190108] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.533 [2024-07-12 14:57:53.287456] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:27.533 [2024-07-12 14:57:53.290135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.533 [2024-07-12 14:57:53.291108] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.533 [2024-07-12 14:57:53.291126] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.098 14:57:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.098 14:57:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:28.098 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:28.356 [2024-07-12 14:57:53.977522] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.356 [2024-07-12 14:57:53.977596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.356 [2024-07-12 14:57:53.977602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.356 [2024-07-12 14:57:53.977628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.356 [2024-07-12 14:57:53.977632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.356 [2024-07-12 14:57:53.977639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.356 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.356 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.357 14:57:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.615 14:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.615 "name": "Existed_Raid", 00:09:28.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.615 "strip_size_kb": 64, 00:09:28.615 "state": "configuring", 00:09:28.615 "raid_level": "raid0", 00:09:28.615 "superblock": false, 00:09:28.615 "num_base_bdevs": 3, 00:09:28.615 "num_base_bdevs_discovered": 0, 00:09:28.615 "num_base_bdevs_operational": 3, 00:09:28.615 "base_bdevs_list": [ 
00:09:28.615 { 00:09:28.615 "name": "BaseBdev1", 00:09:28.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.615 "is_configured": false, 00:09:28.615 "data_offset": 0, 00:09:28.615 "data_size": 0 00:09:28.615 }, 00:09:28.615 { 00:09:28.615 "name": "BaseBdev2", 00:09:28.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.615 "is_configured": false, 00:09:28.615 "data_offset": 0, 00:09:28.615 "data_size": 0 00:09:28.615 }, 00:09:28.615 { 00:09:28.615 "name": "BaseBdev3", 00:09:28.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.615 "is_configured": false, 00:09:28.615 "data_offset": 0, 00:09:28.615 "data_size": 0 00:09:28.615 } 00:09:28.615 ] 00:09:28.615 }' 00:09:28.615 14:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.615 14:57:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.874 14:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:29.133 [2024-07-12 14:57:54.777604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.133 [2024-07-12 14:57:54.777635] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34801b834500 name Existed_Raid, state configuring 00:09:29.133 14:57:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:29.391 [2024-07-12 14:57:55.085629] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.391 [2024-07-12 14:57:55.085683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.391 [2024-07-12 14:57:55.085689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.391 [2024-07-12 14:57:55.085698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.391 [2024-07-12 14:57:55.085701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.391 [2024-07-12 14:57:55.085709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.392 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.651 [2024-07-12 14:57:55.334660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.651 BaseBdev1 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:29.651 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:29.910 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.170 [ 00:09:30.170 { 00:09:30.170 "name": "BaseBdev1", 00:09:30.170 "aliases": [ 00:09:30.170 "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e" 00:09:30.170 ], 00:09:30.170 "product_name": "Malloc disk", 00:09:30.170 "block_size": 512, 00:09:30.170 "num_blocks": 65536, 00:09:30.170 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:30.170 "assigned_rate_limits": { 00:09:30.170 "rw_ios_per_sec": 0, 00:09:30.170 "rw_mbytes_per_sec": 0, 00:09:30.170 "r_mbytes_per_sec": 0, 00:09:30.170 "w_mbytes_per_sec": 0 00:09:30.170 }, 00:09:30.170 "claimed": true, 00:09:30.170 "claim_type": "exclusive_write", 00:09:30.170 "zoned": false, 00:09:30.170 "supported_io_types": { 00:09:30.170 "read": true, 00:09:30.170 "write": true, 00:09:30.170 "unmap": true, 00:09:30.170 "flush": true, 00:09:30.170 "reset": true, 00:09:30.170 "nvme_admin": false, 00:09:30.170 "nvme_io": false, 00:09:30.170 "nvme_io_md": false, 00:09:30.170 "write_zeroes": true, 00:09:30.170 "zcopy": true, 00:09:30.170 "get_zone_info": false, 00:09:30.170 "zone_management": false, 00:09:30.170 "zone_append": false, 00:09:30.170 "compare": false, 00:09:30.170 "compare_and_write": false, 00:09:30.170 "abort": true, 00:09:30.170 "seek_hole": false, 00:09:30.170 "seek_data": false, 00:09:30.170 "copy": true, 00:09:30.170 "nvme_iov_md": false 00:09:30.170 }, 00:09:30.170 "memory_domains": [ 00:09:30.170 { 00:09:30.170 "dma_device_id": "system", 00:09:30.170 "dma_device_type": 1 00:09:30.170 }, 00:09:30.170 { 00:09:30.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.170 "dma_device_type": 2 00:09:30.170 } 00:09:30.170 ], 00:09:30.170 "driver_specific": {} 00:09:30.170 } 00:09:30.170 ] 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.170 14:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.429 14:57:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:30.429 "name": "Existed_Raid", 00:09:30.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.429 "strip_size_kb": 64, 00:09:30.429 "state": "configuring", 00:09:30.429 "raid_level": "raid0", 00:09:30.429 "superblock": false, 00:09:30.429 "num_base_bdevs": 3, 00:09:30.429 "num_base_bdevs_discovered": 1, 00:09:30.429 "num_base_bdevs_operational": 3, 00:09:30.429 "base_bdevs_list": [ 00:09:30.429 { 00:09:30.429 "name": "BaseBdev1", 00:09:30.429 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:30.429 "is_configured": true, 00:09:30.429 "data_offset": 0, 00:09:30.429 "data_size": 65536 00:09:30.429 }, 00:09:30.429 { 00:09:30.429 "name": "BaseBdev2", 00:09:30.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.429 "is_configured": false, 00:09:30.429 "data_offset": 0, 00:09:30.429 "data_size": 0 00:09:30.429 }, 00:09:30.429 { 00:09:30.429 "name": "BaseBdev3", 00:09:30.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.429 "is_configured": false, 00:09:30.429 "data_offset": 0, 00:09:30.429 "data_size": 0 00:09:30.429 } 00:09:30.429 ] 00:09:30.429 }' 00:09:30.429 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.429 14:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.687 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:30.945 [2024-07-12 14:57:56.689786] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.945 [2024-07-12 14:57:56.689827] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34801b834500 name Existed_Raid, state configuring 00:09:30.945 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:31.204 [2024-07-12 14:57:56.977837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.204 [2024-07-12 14:57:56.978630] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.204 [2024-07-12 14:57:56.978669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.204 [2024-07-12 14:57:56.978675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.204 [2024-07-12 14:57:56.978683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.204 14:57:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.204 14:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.462 14:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.462 "name": "Existed_Raid", 00:09:31.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.462 "strip_size_kb": 64, 00:09:31.462 "state": "configuring", 00:09:31.462 "raid_level": "raid0", 00:09:31.462 "superblock": false, 00:09:31.462 "num_base_bdevs": 3, 00:09:31.462 "num_base_bdevs_discovered": 1, 00:09:31.462 "num_base_bdevs_operational": 3, 00:09:31.462 "base_bdevs_list": [ 00:09:31.462 { 00:09:31.462 "name": "BaseBdev1", 00:09:31.462 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:31.462 "is_configured": true, 00:09:31.462 "data_offset": 0, 00:09:31.462 "data_size": 65536 00:09:31.462 }, 00:09:31.462 { 00:09:31.463 "name": "BaseBdev2", 00:09:31.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.463 "is_configured": false, 00:09:31.463 "data_offset": 0, 00:09:31.463 "data_size": 0 00:09:31.463 }, 00:09:31.463 { 00:09:31.463 "name": "BaseBdev3", 00:09:31.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.463 "is_configured": false, 00:09:31.463 "data_offset": 0, 00:09:31.463 "data_size": 0 00:09:31.463 } 00:09:31.463 ] 00:09:31.463 }' 00:09:31.463 14:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.463 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.721 14:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.979 [2024-07-12 14:57:57.786041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.979 BaseBdev2 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:31.979 14:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:32.238 14:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.804 [ 00:09:32.804 { 00:09:32.804 "name": "BaseBdev2", 00:09:32.804 "aliases": [ 00:09:32.804 "202ff40e-405f-11ef-b2a4-e9dca065e82e" 00:09:32.804 ], 00:09:32.804 "product_name": "Malloc disk", 00:09:32.804 "block_size": 512, 00:09:32.804 "num_blocks": 65536, 00:09:32.804 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:32.804 "assigned_rate_limits": { 00:09:32.804 "rw_ios_per_sec": 0, 00:09:32.804 "rw_mbytes_per_sec": 0, 00:09:32.804 "r_mbytes_per_sec": 0, 00:09:32.804 "w_mbytes_per_sec": 0 00:09:32.804 }, 00:09:32.804 "claimed": true, 00:09:32.804 "claim_type": "exclusive_write", 00:09:32.804 "zoned": false, 00:09:32.804 "supported_io_types": { 00:09:32.804 "read": true, 00:09:32.804 "write": true, 00:09:32.804 "unmap": true, 00:09:32.804 "flush": true, 00:09:32.804 "reset": true, 00:09:32.804 "nvme_admin": false, 00:09:32.804 "nvme_io": false, 00:09:32.804 "nvme_io_md": false, 00:09:32.804 "write_zeroes": true, 00:09:32.804 "zcopy": true, 00:09:32.804 "get_zone_info": false, 00:09:32.804 "zone_management": false, 00:09:32.804 "zone_append": false, 00:09:32.804 "compare": false, 00:09:32.804 "compare_and_write": false, 00:09:32.804 "abort": true, 00:09:32.804 "seek_hole": false, 00:09:32.804 "seek_data": false, 00:09:32.804 "copy": true, 00:09:32.804 "nvme_iov_md": false 00:09:32.804 }, 00:09:32.804 "memory_domains": [ 00:09:32.804 { 00:09:32.804 "dma_device_id": "system", 00:09:32.804 "dma_device_type": 1 00:09:32.804 }, 00:09:32.804 { 00:09:32.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.804 "dma_device_type": 2 00:09:32.804 } 00:09:32.804 ], 00:09:32.804 "driver_specific": {} 00:09:32.804 } 00:09:32.804 ] 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
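Note (not part of the captured log): for readability, a condensed, hand-runnable sketch of the RPC sequence this test is driving. The bdev names, raid level, strip size and malloc sizes are copied from the commands above; the loop and the final jq filter are illustrative and are not taken from the test script itself.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # declare the raid0 volume before its base bdevs exist; it stays in state "configuring"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # back each member with a 32 MiB / 512-byte-block malloc bdev; the raid is expected
  # to move to "online" once the last base bdev is claimed
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    $RPC bdev_malloc_create 32 512 -b "$b"
    $RPC bdev_wait_for_examine
  done

  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'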
00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.804 "name": "Existed_Raid", 00:09:32.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.804 "strip_size_kb": 64, 00:09:32.804 "state": "configuring", 00:09:32.804 "raid_level": "raid0", 00:09:32.804 "superblock": false, 00:09:32.804 "num_base_bdevs": 3, 00:09:32.804 "num_base_bdevs_discovered": 2, 00:09:32.804 "num_base_bdevs_operational": 3, 00:09:32.804 "base_bdevs_list": [ 00:09:32.804 { 00:09:32.804 "name": "BaseBdev1", 00:09:32.804 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:32.804 "is_configured": true, 00:09:32.804 "data_offset": 0, 00:09:32.804 "data_size": 65536 00:09:32.804 }, 00:09:32.804 { 00:09:32.804 "name": "BaseBdev2", 00:09:32.804 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:32.804 "is_configured": true, 00:09:32.804 "data_offset": 0, 00:09:32.804 "data_size": 65536 00:09:32.804 }, 00:09:32.804 { 00:09:32.804 "name": "BaseBdev3", 00:09:32.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.804 "is_configured": false, 00:09:32.804 "data_offset": 0, 00:09:32.804 "data_size": 0 00:09:32.804 } 00:09:32.804 ] 00:09:32.804 }' 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.804 14:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 14:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.629 [2024-07-12 14:57:59.238271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.630 [2024-07-12 14:57:59.238319] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34801b834a00 00:09:33.630 [2024-07-12 14:57:59.238344] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:33.630 [2024-07-12 14:57:59.238385] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34801b897e20 00:09:33.630 [2024-07-12 14:57:59.238532] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34801b834a00 00:09:33.630 [2024-07-12 14:57:59.238541] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34801b834a00 00:09:33.630 [2024-07-12 14:57:59.238591] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.630 BaseBdev3 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:33.630 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:33.889 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.147 [ 00:09:34.147 { 00:09:34.147 "name": "BaseBdev3", 00:09:34.147 "aliases": [ 00:09:34.147 "210d8967-405f-11ef-b2a4-e9dca065e82e" 00:09:34.147 ], 00:09:34.147 "product_name": "Malloc disk", 00:09:34.147 "block_size": 512, 00:09:34.147 "num_blocks": 65536, 00:09:34.147 "uuid": "210d8967-405f-11ef-b2a4-e9dca065e82e", 00:09:34.147 "assigned_rate_limits": { 00:09:34.147 "rw_ios_per_sec": 0, 00:09:34.147 "rw_mbytes_per_sec": 0, 00:09:34.147 "r_mbytes_per_sec": 0, 00:09:34.147 "w_mbytes_per_sec": 0 00:09:34.147 }, 00:09:34.147 "claimed": true, 00:09:34.147 "claim_type": "exclusive_write", 00:09:34.147 "zoned": false, 00:09:34.147 "supported_io_types": { 00:09:34.147 "read": true, 00:09:34.147 "write": true, 00:09:34.147 "unmap": true, 00:09:34.147 "flush": true, 00:09:34.147 "reset": true, 00:09:34.147 "nvme_admin": false, 00:09:34.147 "nvme_io": false, 00:09:34.147 "nvme_io_md": false, 00:09:34.147 "write_zeroes": true, 00:09:34.147 "zcopy": true, 00:09:34.147 "get_zone_info": false, 00:09:34.147 "zone_management": false, 00:09:34.147 "zone_append": false, 00:09:34.147 "compare": false, 00:09:34.147 "compare_and_write": false, 00:09:34.147 "abort": true, 00:09:34.147 "seek_hole": false, 00:09:34.147 "seek_data": false, 00:09:34.147 "copy": true, 00:09:34.147 "nvme_iov_md": false 00:09:34.147 }, 00:09:34.147 "memory_domains": [ 00:09:34.147 { 00:09:34.147 "dma_device_id": "system", 00:09:34.147 "dma_device_type": 1 00:09:34.147 }, 00:09:34.147 { 00:09:34.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.147 "dma_device_type": 2 00:09:34.147 } 00:09:34.147 ], 00:09:34.147 "driver_specific": {} 00:09:34.147 } 00:09:34.147 ] 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.147 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.406 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:09:34.406 "name": "Existed_Raid", 00:09:34.406 "uuid": "210d9423-405f-11ef-b2a4-e9dca065e82e", 00:09:34.406 "strip_size_kb": 64, 00:09:34.406 "state": "online", 00:09:34.406 "raid_level": "raid0", 00:09:34.406 "superblock": false, 00:09:34.406 "num_base_bdevs": 3, 00:09:34.406 "num_base_bdevs_discovered": 3, 00:09:34.406 "num_base_bdevs_operational": 3, 00:09:34.406 "base_bdevs_list": [ 00:09:34.406 { 00:09:34.406 "name": "BaseBdev1", 00:09:34.406 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:34.406 "is_configured": true, 00:09:34.406 "data_offset": 0, 00:09:34.406 "data_size": 65536 00:09:34.406 }, 00:09:34.406 { 00:09:34.406 "name": "BaseBdev2", 00:09:34.406 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:34.406 "is_configured": true, 00:09:34.406 "data_offset": 0, 00:09:34.406 "data_size": 65536 00:09:34.406 }, 00:09:34.406 { 00:09:34.406 "name": "BaseBdev3", 00:09:34.406 "uuid": "210d8967-405f-11ef-b2a4-e9dca065e82e", 00:09:34.406 "is_configured": true, 00:09:34.406 "data_offset": 0, 00:09:34.406 "data_size": 65536 00:09:34.406 } 00:09:34.406 ] 00:09:34.406 }' 00:09:34.406 14:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.406 14:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:34.664 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:34.929 [2024-07-12 14:58:00.566222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:34.929 "name": "Existed_Raid", 00:09:34.929 "aliases": [ 00:09:34.929 "210d9423-405f-11ef-b2a4-e9dca065e82e" 00:09:34.929 ], 00:09:34.929 "product_name": "Raid Volume", 00:09:34.929 "block_size": 512, 00:09:34.929 "num_blocks": 196608, 00:09:34.929 "uuid": "210d9423-405f-11ef-b2a4-e9dca065e82e", 00:09:34.929 "assigned_rate_limits": { 00:09:34.929 "rw_ios_per_sec": 0, 00:09:34.929 "rw_mbytes_per_sec": 0, 00:09:34.929 "r_mbytes_per_sec": 0, 00:09:34.929 "w_mbytes_per_sec": 0 00:09:34.929 }, 00:09:34.929 "claimed": false, 00:09:34.929 "zoned": false, 00:09:34.929 "supported_io_types": { 00:09:34.929 "read": true, 00:09:34.929 "write": true, 00:09:34.929 "unmap": true, 00:09:34.929 "flush": true, 00:09:34.929 "reset": true, 00:09:34.929 "nvme_admin": false, 00:09:34.929 "nvme_io": false, 00:09:34.929 "nvme_io_md": false, 00:09:34.929 "write_zeroes": true, 00:09:34.929 "zcopy": false, 00:09:34.929 "get_zone_info": false, 00:09:34.929 "zone_management": false, 00:09:34.929 "zone_append": false, 00:09:34.929 "compare": false, 
00:09:34.929 "compare_and_write": false, 00:09:34.929 "abort": false, 00:09:34.929 "seek_hole": false, 00:09:34.929 "seek_data": false, 00:09:34.929 "copy": false, 00:09:34.929 "nvme_iov_md": false 00:09:34.929 }, 00:09:34.929 "memory_domains": [ 00:09:34.929 { 00:09:34.929 "dma_device_id": "system", 00:09:34.929 "dma_device_type": 1 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.929 "dma_device_type": 2 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "dma_device_id": "system", 00:09:34.929 "dma_device_type": 1 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.929 "dma_device_type": 2 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "dma_device_id": "system", 00:09:34.929 "dma_device_type": 1 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.929 "dma_device_type": 2 00:09:34.929 } 00:09:34.929 ], 00:09:34.929 "driver_specific": { 00:09:34.929 "raid": { 00:09:34.929 "uuid": "210d9423-405f-11ef-b2a4-e9dca065e82e", 00:09:34.929 "strip_size_kb": 64, 00:09:34.929 "state": "online", 00:09:34.929 "raid_level": "raid0", 00:09:34.929 "superblock": false, 00:09:34.929 "num_base_bdevs": 3, 00:09:34.929 "num_base_bdevs_discovered": 3, 00:09:34.929 "num_base_bdevs_operational": 3, 00:09:34.929 "base_bdevs_list": [ 00:09:34.929 { 00:09:34.929 "name": "BaseBdev1", 00:09:34.929 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:34.929 "is_configured": true, 00:09:34.929 "data_offset": 0, 00:09:34.929 "data_size": 65536 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "name": "BaseBdev2", 00:09:34.929 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:34.929 "is_configured": true, 00:09:34.929 "data_offset": 0, 00:09:34.929 "data_size": 65536 00:09:34.929 }, 00:09:34.929 { 00:09:34.929 "name": "BaseBdev3", 00:09:34.929 "uuid": "210d8967-405f-11ef-b2a4-e9dca065e82e", 00:09:34.929 "is_configured": true, 00:09:34.929 "data_offset": 0, 00:09:34.929 "data_size": 65536 00:09:34.929 } 00:09:34.929 ] 00:09:34.929 } 00:09:34.929 } 00:09:34.929 }' 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:34.929 BaseBdev2 00:09:34.929 BaseBdev3' 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:34.929 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.189 "name": "BaseBdev1", 00:09:35.189 "aliases": [ 00:09:35.189 "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e" 00:09:35.189 ], 00:09:35.189 "product_name": "Malloc disk", 00:09:35.189 "block_size": 512, 00:09:35.189 "num_blocks": 65536, 00:09:35.189 "uuid": "1eb9c5a1-405f-11ef-b2a4-e9dca065e82e", 00:09:35.189 "assigned_rate_limits": { 00:09:35.189 "rw_ios_per_sec": 0, 00:09:35.189 "rw_mbytes_per_sec": 0, 00:09:35.189 "r_mbytes_per_sec": 0, 00:09:35.189 "w_mbytes_per_sec": 0 00:09:35.189 }, 00:09:35.189 "claimed": true, 00:09:35.189 "claim_type": "exclusive_write", 00:09:35.189 "zoned": false, 00:09:35.189 
"supported_io_types": { 00:09:35.189 "read": true, 00:09:35.189 "write": true, 00:09:35.189 "unmap": true, 00:09:35.189 "flush": true, 00:09:35.189 "reset": true, 00:09:35.189 "nvme_admin": false, 00:09:35.189 "nvme_io": false, 00:09:35.189 "nvme_io_md": false, 00:09:35.189 "write_zeroes": true, 00:09:35.189 "zcopy": true, 00:09:35.189 "get_zone_info": false, 00:09:35.189 "zone_management": false, 00:09:35.189 "zone_append": false, 00:09:35.189 "compare": false, 00:09:35.189 "compare_and_write": false, 00:09:35.189 "abort": true, 00:09:35.189 "seek_hole": false, 00:09:35.189 "seek_data": false, 00:09:35.189 "copy": true, 00:09:35.189 "nvme_iov_md": false 00:09:35.189 }, 00:09:35.189 "memory_domains": [ 00:09:35.189 { 00:09:35.189 "dma_device_id": "system", 00:09:35.189 "dma_device_type": 1 00:09:35.189 }, 00:09:35.189 { 00:09:35.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.189 "dma_device_type": 2 00:09:35.189 } 00:09:35.189 ], 00:09:35.189 "driver_specific": {} 00:09:35.189 }' 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.189 14:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.447 "name": "BaseBdev2", 00:09:35.447 "aliases": [ 00:09:35.447 "202ff40e-405f-11ef-b2a4-e9dca065e82e" 00:09:35.447 ], 00:09:35.447 "product_name": "Malloc disk", 00:09:35.447 "block_size": 512, 00:09:35.447 "num_blocks": 65536, 00:09:35.447 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:35.447 "assigned_rate_limits": { 00:09:35.447 "rw_ios_per_sec": 0, 00:09:35.447 "rw_mbytes_per_sec": 0, 00:09:35.447 "r_mbytes_per_sec": 0, 00:09:35.447 "w_mbytes_per_sec": 0 00:09:35.447 }, 00:09:35.447 "claimed": true, 00:09:35.447 "claim_type": "exclusive_write", 00:09:35.447 "zoned": false, 00:09:35.447 "supported_io_types": { 00:09:35.447 "read": true, 00:09:35.447 "write": true, 00:09:35.447 "unmap": true, 00:09:35.447 "flush": true, 00:09:35.447 "reset": true, 00:09:35.447 "nvme_admin": false, 
00:09:35.447 "nvme_io": false, 00:09:35.447 "nvme_io_md": false, 00:09:35.447 "write_zeroes": true, 00:09:35.447 "zcopy": true, 00:09:35.447 "get_zone_info": false, 00:09:35.447 "zone_management": false, 00:09:35.447 "zone_append": false, 00:09:35.447 "compare": false, 00:09:35.447 "compare_and_write": false, 00:09:35.447 "abort": true, 00:09:35.447 "seek_hole": false, 00:09:35.447 "seek_data": false, 00:09:35.447 "copy": true, 00:09:35.447 "nvme_iov_md": false 00:09:35.447 }, 00:09:35.447 "memory_domains": [ 00:09:35.447 { 00:09:35.447 "dma_device_id": "system", 00:09:35.447 "dma_device_type": 1 00:09:35.447 }, 00:09:35.447 { 00:09:35.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.447 "dma_device_type": 2 00:09:35.447 } 00:09:35.447 ], 00:09:35.447 "driver_specific": {} 00:09:35.447 }' 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.447 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.705 "name": "BaseBdev3", 00:09:35.705 "aliases": [ 00:09:35.705 "210d8967-405f-11ef-b2a4-e9dca065e82e" 00:09:35.705 ], 00:09:35.705 "product_name": "Malloc disk", 00:09:35.705 "block_size": 512, 00:09:35.705 "num_blocks": 65536, 00:09:35.705 "uuid": "210d8967-405f-11ef-b2a4-e9dca065e82e", 00:09:35.705 "assigned_rate_limits": { 00:09:35.705 "rw_ios_per_sec": 0, 00:09:35.705 "rw_mbytes_per_sec": 0, 00:09:35.705 "r_mbytes_per_sec": 0, 00:09:35.705 "w_mbytes_per_sec": 0 00:09:35.705 }, 00:09:35.705 "claimed": true, 00:09:35.705 "claim_type": "exclusive_write", 00:09:35.705 "zoned": false, 00:09:35.705 "supported_io_types": { 00:09:35.705 "read": true, 00:09:35.705 "write": true, 00:09:35.705 "unmap": true, 00:09:35.705 "flush": true, 00:09:35.705 "reset": true, 00:09:35.705 "nvme_admin": false, 00:09:35.705 "nvme_io": false, 00:09:35.705 "nvme_io_md": false, 00:09:35.705 "write_zeroes": true, 00:09:35.705 "zcopy": true, 00:09:35.705 "get_zone_info": false, 00:09:35.705 "zone_management": 
false, 00:09:35.705 "zone_append": false, 00:09:35.705 "compare": false, 00:09:35.705 "compare_and_write": false, 00:09:35.705 "abort": true, 00:09:35.705 "seek_hole": false, 00:09:35.705 "seek_data": false, 00:09:35.705 "copy": true, 00:09:35.705 "nvme_iov_md": false 00:09:35.705 }, 00:09:35.705 "memory_domains": [ 00:09:35.705 { 00:09:35.705 "dma_device_id": "system", 00:09:35.705 "dma_device_type": 1 00:09:35.705 }, 00:09:35.705 { 00:09:35.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.705 "dma_device_type": 2 00:09:35.705 } 00:09:35.705 ], 00:09:35.705 "driver_specific": {} 00:09:35.705 }' 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.705 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:35.964 [2024-07-12 14:58:01.730291] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.965 [2024-07-12 14:58:01.730314] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.965 [2024-07-12 14:58:01.730329] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.965 14:58:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.965 14:58:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.225 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.225 "name": "Existed_Raid", 00:09:36.225 "uuid": "210d9423-405f-11ef-b2a4-e9dca065e82e", 00:09:36.225 "strip_size_kb": 64, 00:09:36.225 "state": "offline", 00:09:36.225 "raid_level": "raid0", 00:09:36.225 "superblock": false, 00:09:36.225 "num_base_bdevs": 3, 00:09:36.225 "num_base_bdevs_discovered": 2, 00:09:36.225 "num_base_bdevs_operational": 2, 00:09:36.225 "base_bdevs_list": [ 00:09:36.225 { 00:09:36.225 "name": null, 00:09:36.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.225 "is_configured": false, 00:09:36.225 "data_offset": 0, 00:09:36.225 "data_size": 65536 00:09:36.225 }, 00:09:36.225 { 00:09:36.225 "name": "BaseBdev2", 00:09:36.225 "uuid": "202ff40e-405f-11ef-b2a4-e9dca065e82e", 00:09:36.225 "is_configured": true, 00:09:36.225 "data_offset": 0, 00:09:36.225 "data_size": 65536 00:09:36.225 }, 00:09:36.225 { 00:09:36.225 "name": "BaseBdev3", 00:09:36.225 "uuid": "210d8967-405f-11ef-b2a4-e9dca065e82e", 00:09:36.225 "is_configured": true, 00:09:36.225 "data_offset": 0, 00:09:36.225 "data_size": 65536 00:09:36.225 } 00:09:36.225 ] 00:09:36.225 }' 00:09:36.225 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.225 14:58:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.484 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:36.484 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:36.484 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.484 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:36.743 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:36.743 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.743 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:37.002 [2024-07-12 14:58:02.819995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.259 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:37.260 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:37.260 14:58:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.260 14:58:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:37.518 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:37.518 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.518 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:37.777 [2024-07-12 14:58:03.357843] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.777 [2024-07-12 14:58:03.357871] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34801b834a00 name Existed_Raid, state offline 00:09:37.777 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:37.777 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:37.777 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.777 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:38.035 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.293 BaseBdev2 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:38.293 14:58:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:38.551 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.810 [ 00:09:38.810 { 00:09:38.810 "name": "BaseBdev2", 00:09:38.810 "aliases": [ 00:09:38.810 "23d7e080-405f-11ef-b2a4-e9dca065e82e" 00:09:38.810 ], 00:09:38.810 "product_name": "Malloc disk", 00:09:38.810 "block_size": 512, 00:09:38.810 "num_blocks": 65536, 00:09:38.810 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 
00:09:38.810 "assigned_rate_limits": { 00:09:38.810 "rw_ios_per_sec": 0, 00:09:38.810 "rw_mbytes_per_sec": 0, 00:09:38.810 "r_mbytes_per_sec": 0, 00:09:38.810 "w_mbytes_per_sec": 0 00:09:38.810 }, 00:09:38.810 "claimed": false, 00:09:38.810 "zoned": false, 00:09:38.810 "supported_io_types": { 00:09:38.810 "read": true, 00:09:38.810 "write": true, 00:09:38.810 "unmap": true, 00:09:38.810 "flush": true, 00:09:38.810 "reset": true, 00:09:38.810 "nvme_admin": false, 00:09:38.810 "nvme_io": false, 00:09:38.810 "nvme_io_md": false, 00:09:38.810 "write_zeroes": true, 00:09:38.810 "zcopy": true, 00:09:38.810 "get_zone_info": false, 00:09:38.810 "zone_management": false, 00:09:38.810 "zone_append": false, 00:09:38.810 "compare": false, 00:09:38.810 "compare_and_write": false, 00:09:38.810 "abort": true, 00:09:38.810 "seek_hole": false, 00:09:38.810 "seek_data": false, 00:09:38.810 "copy": true, 00:09:38.810 "nvme_iov_md": false 00:09:38.810 }, 00:09:38.810 "memory_domains": [ 00:09:38.810 { 00:09:38.810 "dma_device_id": "system", 00:09:38.810 "dma_device_type": 1 00:09:38.810 }, 00:09:38.810 { 00:09:38.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.810 "dma_device_type": 2 00:09:38.810 } 00:09:38.810 ], 00:09:38.810 "driver_specific": {} 00:09:38.810 } 00:09:38.810 ] 00:09:38.810 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:38.810 14:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:38.810 14:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:38.810 14:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.069 BaseBdev3 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:39.069 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:39.327 14:58:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.585 [ 00:09:39.585 { 00:09:39.585 "name": "BaseBdev3", 00:09:39.585 "aliases": [ 00:09:39.585 "24550238-405f-11ef-b2a4-e9dca065e82e" 00:09:39.585 ], 00:09:39.585 "product_name": "Malloc disk", 00:09:39.585 "block_size": 512, 00:09:39.585 "num_blocks": 65536, 00:09:39.585 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:39.585 "assigned_rate_limits": { 00:09:39.585 "rw_ios_per_sec": 0, 00:09:39.585 "rw_mbytes_per_sec": 0, 00:09:39.585 "r_mbytes_per_sec": 0, 00:09:39.585 "w_mbytes_per_sec": 0 00:09:39.585 }, 00:09:39.585 "claimed": false, 00:09:39.585 "zoned": false, 00:09:39.586 "supported_io_types": { 00:09:39.586 "read": true, 00:09:39.586 "write": 
true, 00:09:39.586 "unmap": true, 00:09:39.586 "flush": true, 00:09:39.586 "reset": true, 00:09:39.586 "nvme_admin": false, 00:09:39.586 "nvme_io": false, 00:09:39.586 "nvme_io_md": false, 00:09:39.586 "write_zeroes": true, 00:09:39.586 "zcopy": true, 00:09:39.586 "get_zone_info": false, 00:09:39.586 "zone_management": false, 00:09:39.586 "zone_append": false, 00:09:39.586 "compare": false, 00:09:39.586 "compare_and_write": false, 00:09:39.586 "abort": true, 00:09:39.586 "seek_hole": false, 00:09:39.586 "seek_data": false, 00:09:39.586 "copy": true, 00:09:39.586 "nvme_iov_md": false 00:09:39.586 }, 00:09:39.586 "memory_domains": [ 00:09:39.586 { 00:09:39.586 "dma_device_id": "system", 00:09:39.586 "dma_device_type": 1 00:09:39.586 }, 00:09:39.586 { 00:09:39.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.586 "dma_device_type": 2 00:09:39.586 } 00:09:39.586 ], 00:09:39.586 "driver_specific": {} 00:09:39.586 } 00:09:39.586 ] 00:09:39.586 14:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:39.586 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:39.586 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:39.586 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:39.845 [2024-07-12 14:58:05.587736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.845 [2024-07-12 14:58:05.587791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.845 [2024-07-12 14:58:05.587801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.845 [2024-07-12 14:58:05.588357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.845 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.104 14:58:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.104 "name": "Existed_Raid", 00:09:40.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.104 "strip_size_kb": 64, 00:09:40.104 "state": "configuring", 00:09:40.104 "raid_level": "raid0", 00:09:40.104 "superblock": false, 00:09:40.104 "num_base_bdevs": 3, 00:09:40.104 "num_base_bdevs_discovered": 2, 00:09:40.104 "num_base_bdevs_operational": 3, 00:09:40.104 "base_bdevs_list": [ 00:09:40.104 { 00:09:40.104 "name": "BaseBdev1", 00:09:40.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.104 "is_configured": false, 00:09:40.104 "data_offset": 0, 00:09:40.104 "data_size": 0 00:09:40.104 }, 00:09:40.104 { 00:09:40.104 "name": "BaseBdev2", 00:09:40.104 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:40.104 "is_configured": true, 00:09:40.104 "data_offset": 0, 00:09:40.104 "data_size": 65536 00:09:40.104 }, 00:09:40.104 { 00:09:40.104 "name": "BaseBdev3", 00:09:40.104 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:40.104 "is_configured": true, 00:09:40.104 "data_offset": 0, 00:09:40.104 "data_size": 65536 00:09:40.104 } 00:09:40.104 ] 00:09:40.104 }' 00:09:40.104 14:58:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.104 14:58:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.362 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:40.621 [2024-07-12 14:58:06.371782] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.621 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.879 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.879 "name": "Existed_Raid", 00:09:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.879 "strip_size_kb": 64, 00:09:40.879 "state": "configuring", 00:09:40.879 "raid_level": "raid0", 00:09:40.879 "superblock": false, 00:09:40.879 "num_base_bdevs": 3, 00:09:40.879 "num_base_bdevs_discovered": 1, 
00:09:40.879 "num_base_bdevs_operational": 3, 00:09:40.879 "base_bdevs_list": [ 00:09:40.879 { 00:09:40.879 "name": "BaseBdev1", 00:09:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.879 "is_configured": false, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 0 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "name": null, 00:09:40.879 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:40.879 "is_configured": false, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 65536 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "name": "BaseBdev3", 00:09:40.879 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:40.879 "is_configured": true, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 65536 00:09:40.879 } 00:09:40.879 ] 00:09:40.879 }' 00:09:40.879 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.879 14:58:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.137 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.137 14:58:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.702 14:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:41.702 14:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.702 [2024-07-12 14:58:07.520005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.702 BaseBdev1 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.959 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:42.218 14:58:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.477 [ 00:09:42.477 { 00:09:42.477 "name": "BaseBdev1", 00:09:42.477 "aliases": [ 00:09:42.477 "25fd3da2-405f-11ef-b2a4-e9dca065e82e" 00:09:42.477 ], 00:09:42.477 "product_name": "Malloc disk", 00:09:42.477 "block_size": 512, 00:09:42.477 "num_blocks": 65536, 00:09:42.477 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:42.477 "assigned_rate_limits": { 00:09:42.477 "rw_ios_per_sec": 0, 00:09:42.477 "rw_mbytes_per_sec": 0, 00:09:42.477 "r_mbytes_per_sec": 0, 00:09:42.477 "w_mbytes_per_sec": 0 00:09:42.477 }, 00:09:42.477 "claimed": true, 00:09:42.477 "claim_type": "exclusive_write", 00:09:42.477 "zoned": false, 00:09:42.477 "supported_io_types": { 00:09:42.477 "read": true, 00:09:42.477 "write": true, 00:09:42.477 "unmap": 
true, 00:09:42.477 "flush": true, 00:09:42.477 "reset": true, 00:09:42.477 "nvme_admin": false, 00:09:42.477 "nvme_io": false, 00:09:42.477 "nvme_io_md": false, 00:09:42.477 "write_zeroes": true, 00:09:42.477 "zcopy": true, 00:09:42.477 "get_zone_info": false, 00:09:42.477 "zone_management": false, 00:09:42.477 "zone_append": false, 00:09:42.477 "compare": false, 00:09:42.477 "compare_and_write": false, 00:09:42.477 "abort": true, 00:09:42.477 "seek_hole": false, 00:09:42.477 "seek_data": false, 00:09:42.477 "copy": true, 00:09:42.477 "nvme_iov_md": false 00:09:42.477 }, 00:09:42.477 "memory_domains": [ 00:09:42.477 { 00:09:42.477 "dma_device_id": "system", 00:09:42.477 "dma_device_type": 1 00:09:42.477 }, 00:09:42.477 { 00:09:42.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.477 "dma_device_type": 2 00:09:42.477 } 00:09:42.477 ], 00:09:42.477 "driver_specific": {} 00:09:42.477 } 00:09:42.477 ] 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.477 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.736 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.736 "name": "Existed_Raid", 00:09:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.736 "strip_size_kb": 64, 00:09:42.736 "state": "configuring", 00:09:42.736 "raid_level": "raid0", 00:09:42.736 "superblock": false, 00:09:42.736 "num_base_bdevs": 3, 00:09:42.736 "num_base_bdevs_discovered": 2, 00:09:42.736 "num_base_bdevs_operational": 3, 00:09:42.736 "base_bdevs_list": [ 00:09:42.736 { 00:09:42.736 "name": "BaseBdev1", 00:09:42.736 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:42.736 "is_configured": true, 00:09:42.736 "data_offset": 0, 00:09:42.736 "data_size": 65536 00:09:42.736 }, 00:09:42.736 { 00:09:42.736 "name": null, 00:09:42.736 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:42.736 "is_configured": false, 00:09:42.736 "data_offset": 0, 00:09:42.736 "data_size": 65536 00:09:42.736 }, 00:09:42.736 { 00:09:42.736 "name": "BaseBdev3", 00:09:42.736 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 
00:09:42.736 "is_configured": true, 00:09:42.736 "data_offset": 0, 00:09:42.736 "data_size": 65536 00:09:42.736 } 00:09:42.736 ] 00:09:42.736 }' 00:09:42.736 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.736 14:58:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.994 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.994 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.251 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:43.251 14:58:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:43.509 [2024-07-12 14:58:09.096007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.509 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.767 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.767 "name": "Existed_Raid", 00:09:43.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.767 "strip_size_kb": 64, 00:09:43.767 "state": "configuring", 00:09:43.767 "raid_level": "raid0", 00:09:43.767 "superblock": false, 00:09:43.767 "num_base_bdevs": 3, 00:09:43.767 "num_base_bdevs_discovered": 1, 00:09:43.767 "num_base_bdevs_operational": 3, 00:09:43.767 "base_bdevs_list": [ 00:09:43.767 { 00:09:43.767 "name": "BaseBdev1", 00:09:43.767 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:43.767 "is_configured": true, 00:09:43.767 "data_offset": 0, 00:09:43.767 "data_size": 65536 00:09:43.767 }, 00:09:43.767 { 00:09:43.767 "name": null, 00:09:43.767 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:43.767 "is_configured": false, 00:09:43.767 "data_offset": 0, 00:09:43.767 "data_size": 65536 00:09:43.767 }, 00:09:43.767 { 00:09:43.767 "name": null, 00:09:43.767 "uuid": 
"24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:43.767 "is_configured": false, 00:09:43.767 "data_offset": 0, 00:09:43.767 "data_size": 65536 00:09:43.767 } 00:09:43.767 ] 00:09:43.767 }' 00:09:43.767 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.767 14:58:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.025 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.025 14:58:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.283 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:44.283 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:44.541 [2024-07-12 14:58:10.304113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.541 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.799 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.799 "name": "Existed_Raid", 00:09:44.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.799 "strip_size_kb": 64, 00:09:44.799 "state": "configuring", 00:09:44.799 "raid_level": "raid0", 00:09:44.799 "superblock": false, 00:09:44.799 "num_base_bdevs": 3, 00:09:44.799 "num_base_bdevs_discovered": 2, 00:09:44.799 "num_base_bdevs_operational": 3, 00:09:44.799 "base_bdevs_list": [ 00:09:44.799 { 00:09:44.799 "name": "BaseBdev1", 00:09:44.799 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:44.799 "is_configured": true, 00:09:44.799 "data_offset": 0, 00:09:44.799 "data_size": 65536 00:09:44.799 }, 00:09:44.799 { 00:09:44.799 "name": null, 00:09:44.799 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:44.799 "is_configured": false, 00:09:44.799 "data_offset": 0, 00:09:44.799 "data_size": 65536 
00:09:44.799 }, 00:09:44.799 { 00:09:44.799 "name": "BaseBdev3", 00:09:44.799 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:44.799 "is_configured": true, 00:09:44.799 "data_offset": 0, 00:09:44.799 "data_size": 65536 00:09:44.799 } 00:09:44.799 ] 00:09:44.799 }' 00:09:44.799 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.799 14:58:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.367 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.367 14:58:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.367 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:45.367 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:45.626 [2024-07-12 14:58:11.372205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.626 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.885 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:45.885 "name": "Existed_Raid", 00:09:45.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.885 "strip_size_kb": 64, 00:09:45.885 "state": "configuring", 00:09:45.885 "raid_level": "raid0", 00:09:45.885 "superblock": false, 00:09:45.885 "num_base_bdevs": 3, 00:09:45.885 "num_base_bdevs_discovered": 1, 00:09:45.885 "num_base_bdevs_operational": 3, 00:09:45.885 "base_bdevs_list": [ 00:09:45.885 { 00:09:45.885 "name": null, 00:09:45.885 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:45.885 "is_configured": false, 00:09:45.885 "data_offset": 0, 00:09:45.885 "data_size": 65536 00:09:45.885 }, 00:09:45.885 { 00:09:45.885 "name": null, 00:09:45.885 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:45.885 "is_configured": false, 00:09:45.885 "data_offset": 
0, 00:09:45.885 "data_size": 65536 00:09:45.885 }, 00:09:45.885 { 00:09:45.885 "name": "BaseBdev3", 00:09:45.885 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:45.885 "is_configured": true, 00:09:45.885 "data_offset": 0, 00:09:45.885 "data_size": 65536 00:09:45.885 } 00:09:45.885 ] 00:09:45.885 }' 00:09:45.885 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.885 14:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.144 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.144 14:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.401 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:46.401 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.659 [2024-07-12 14:58:12.385946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.659 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.917 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.917 "name": "Existed_Raid", 00:09:46.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.917 "strip_size_kb": 64, 00:09:46.917 "state": "configuring", 00:09:46.917 "raid_level": "raid0", 00:09:46.917 "superblock": false, 00:09:46.917 "num_base_bdevs": 3, 00:09:46.917 "num_base_bdevs_discovered": 2, 00:09:46.917 "num_base_bdevs_operational": 3, 00:09:46.917 "base_bdevs_list": [ 00:09:46.917 { 00:09:46.917 "name": null, 00:09:46.917 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:46.917 "is_configured": false, 00:09:46.917 "data_offset": 0, 00:09:46.917 "data_size": 65536 00:09:46.917 }, 00:09:46.917 { 00:09:46.917 "name": "BaseBdev2", 00:09:46.917 "uuid": 
"23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:46.917 "is_configured": true, 00:09:46.917 "data_offset": 0, 00:09:46.917 "data_size": 65536 00:09:46.917 }, 00:09:46.917 { 00:09:46.917 "name": "BaseBdev3", 00:09:46.917 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:46.917 "is_configured": true, 00:09:46.917 "data_offset": 0, 00:09:46.917 "data_size": 65536 00:09:46.917 } 00:09:46.917 ] 00:09:46.917 }' 00:09:46.917 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.917 14:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.184 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.184 14:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.449 14:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:47.449 14:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.449 14:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:47.707 14:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 25fd3da2-405f-11ef-b2a4-e9dca065e82e 00:09:48.019 [2024-07-12 14:58:13.774245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:48.019 [2024-07-12 14:58:13.774277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34801b834a00 00:09:48.019 [2024-07-12 14:58:13.774282] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:48.019 [2024-07-12 14:58:13.774321] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34801b897e20 00:09:48.019 [2024-07-12 14:58:13.774390] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34801b834a00 00:09:48.019 [2024-07-12 14:58:13.774394] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34801b834a00 00:09:48.019 [2024-07-12 14:58:13.774426] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.019 NewBaseBdev 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:48.019 14:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.276 14:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:09:48.534 [ 00:09:48.534 { 00:09:48.534 "name": "NewBaseBdev", 00:09:48.534 "aliases": [ 00:09:48.534 "25fd3da2-405f-11ef-b2a4-e9dca065e82e" 00:09:48.534 ], 00:09:48.534 "product_name": "Malloc disk", 00:09:48.534 "block_size": 512, 00:09:48.534 "num_blocks": 65536, 00:09:48.534 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:48.534 "assigned_rate_limits": { 00:09:48.534 "rw_ios_per_sec": 0, 00:09:48.534 "rw_mbytes_per_sec": 0, 00:09:48.534 "r_mbytes_per_sec": 0, 00:09:48.534 "w_mbytes_per_sec": 0 00:09:48.534 }, 00:09:48.534 "claimed": true, 00:09:48.534 "claim_type": "exclusive_write", 00:09:48.534 "zoned": false, 00:09:48.534 "supported_io_types": { 00:09:48.534 "read": true, 00:09:48.534 "write": true, 00:09:48.534 "unmap": true, 00:09:48.534 "flush": true, 00:09:48.534 "reset": true, 00:09:48.534 "nvme_admin": false, 00:09:48.534 "nvme_io": false, 00:09:48.534 "nvme_io_md": false, 00:09:48.534 "write_zeroes": true, 00:09:48.534 "zcopy": true, 00:09:48.534 "get_zone_info": false, 00:09:48.534 "zone_management": false, 00:09:48.534 "zone_append": false, 00:09:48.534 "compare": false, 00:09:48.534 "compare_and_write": false, 00:09:48.534 "abort": true, 00:09:48.534 "seek_hole": false, 00:09:48.534 "seek_data": false, 00:09:48.534 "copy": true, 00:09:48.534 "nvme_iov_md": false 00:09:48.534 }, 00:09:48.534 "memory_domains": [ 00:09:48.534 { 00:09:48.534 "dma_device_id": "system", 00:09:48.534 "dma_device_type": 1 00:09:48.534 }, 00:09:48.534 { 00:09:48.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.534 "dma_device_type": 2 00:09:48.534 } 00:09:48.534 ], 00:09:48.534 "driver_specific": {} 00:09:48.534 } 00:09:48.534 ] 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.534 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.791 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.791 "name": "Existed_Raid", 00:09:48.791 "uuid": "29b796c2-405f-11ef-b2a4-e9dca065e82e", 00:09:48.791 "strip_size_kb": 64, 00:09:48.791 "state": "online", 00:09:48.791 "raid_level": "raid0", 
00:09:48.791 "superblock": false, 00:09:48.791 "num_base_bdevs": 3, 00:09:48.791 "num_base_bdevs_discovered": 3, 00:09:48.791 "num_base_bdevs_operational": 3, 00:09:48.791 "base_bdevs_list": [ 00:09:48.791 { 00:09:48.791 "name": "NewBaseBdev", 00:09:48.791 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:48.791 "is_configured": true, 00:09:48.791 "data_offset": 0, 00:09:48.791 "data_size": 65536 00:09:48.791 }, 00:09:48.791 { 00:09:48.791 "name": "BaseBdev2", 00:09:48.791 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:48.791 "is_configured": true, 00:09:48.791 "data_offset": 0, 00:09:48.791 "data_size": 65536 00:09:48.791 }, 00:09:48.791 { 00:09:48.791 "name": "BaseBdev3", 00:09:48.791 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:48.791 "is_configured": true, 00:09:48.791 "data_offset": 0, 00:09:48.791 "data_size": 65536 00:09:48.791 } 00:09:48.791 ] 00:09:48.791 }' 00:09:48.791 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.791 14:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:49.048 14:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:49.306 [2024-07-12 14:58:15.086242] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:49.306 "name": "Existed_Raid", 00:09:49.306 "aliases": [ 00:09:49.306 "29b796c2-405f-11ef-b2a4-e9dca065e82e" 00:09:49.306 ], 00:09:49.306 "product_name": "Raid Volume", 00:09:49.306 "block_size": 512, 00:09:49.306 "num_blocks": 196608, 00:09:49.306 "uuid": "29b796c2-405f-11ef-b2a4-e9dca065e82e", 00:09:49.306 "assigned_rate_limits": { 00:09:49.306 "rw_ios_per_sec": 0, 00:09:49.306 "rw_mbytes_per_sec": 0, 00:09:49.306 "r_mbytes_per_sec": 0, 00:09:49.306 "w_mbytes_per_sec": 0 00:09:49.306 }, 00:09:49.306 "claimed": false, 00:09:49.306 "zoned": false, 00:09:49.306 "supported_io_types": { 00:09:49.306 "read": true, 00:09:49.306 "write": true, 00:09:49.306 "unmap": true, 00:09:49.306 "flush": true, 00:09:49.306 "reset": true, 00:09:49.306 "nvme_admin": false, 00:09:49.306 "nvme_io": false, 00:09:49.306 "nvme_io_md": false, 00:09:49.306 "write_zeroes": true, 00:09:49.306 "zcopy": false, 00:09:49.306 "get_zone_info": false, 00:09:49.306 "zone_management": false, 00:09:49.306 "zone_append": false, 00:09:49.306 "compare": false, 00:09:49.306 "compare_and_write": false, 00:09:49.306 "abort": false, 00:09:49.306 "seek_hole": false, 00:09:49.306 "seek_data": false, 00:09:49.306 "copy": false, 00:09:49.306 "nvme_iov_md": false 00:09:49.306 }, 00:09:49.306 
"memory_domains": [ 00:09:49.306 { 00:09:49.306 "dma_device_id": "system", 00:09:49.306 "dma_device_type": 1 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.306 "dma_device_type": 2 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "dma_device_id": "system", 00:09:49.306 "dma_device_type": 1 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.306 "dma_device_type": 2 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "dma_device_id": "system", 00:09:49.306 "dma_device_type": 1 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.306 "dma_device_type": 2 00:09:49.306 } 00:09:49.306 ], 00:09:49.306 "driver_specific": { 00:09:49.306 "raid": { 00:09:49.306 "uuid": "29b796c2-405f-11ef-b2a4-e9dca065e82e", 00:09:49.306 "strip_size_kb": 64, 00:09:49.306 "state": "online", 00:09:49.306 "raid_level": "raid0", 00:09:49.306 "superblock": false, 00:09:49.306 "num_base_bdevs": 3, 00:09:49.306 "num_base_bdevs_discovered": 3, 00:09:49.306 "num_base_bdevs_operational": 3, 00:09:49.306 "base_bdevs_list": [ 00:09:49.306 { 00:09:49.306 "name": "NewBaseBdev", 00:09:49.306 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 0, 00:09:49.306 "data_size": 65536 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "name": "BaseBdev2", 00:09:49.306 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 0, 00:09:49.306 "data_size": 65536 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "name": "BaseBdev3", 00:09:49.306 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 0, 00:09:49.306 "data_size": 65536 00:09:49.306 } 00:09:49.306 ] 00:09:49.306 } 00:09:49.306 } 00:09:49.306 }' 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:49.306 BaseBdev2 00:09:49.306 BaseBdev3' 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:49.306 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:49.564 "name": "NewBaseBdev", 00:09:49.564 "aliases": [ 00:09:49.564 "25fd3da2-405f-11ef-b2a4-e9dca065e82e" 00:09:49.564 ], 00:09:49.564 "product_name": "Malloc disk", 00:09:49.564 "block_size": 512, 00:09:49.564 "num_blocks": 65536, 00:09:49.564 "uuid": "25fd3da2-405f-11ef-b2a4-e9dca065e82e", 00:09:49.564 "assigned_rate_limits": { 00:09:49.564 "rw_ios_per_sec": 0, 00:09:49.564 "rw_mbytes_per_sec": 0, 00:09:49.564 "r_mbytes_per_sec": 0, 00:09:49.564 "w_mbytes_per_sec": 0 00:09:49.564 }, 00:09:49.564 "claimed": true, 00:09:49.564 "claim_type": "exclusive_write", 00:09:49.564 "zoned": false, 00:09:49.564 "supported_io_types": { 00:09:49.564 "read": true, 00:09:49.564 "write": true, 00:09:49.564 "unmap": true, 00:09:49.564 "flush": true, 00:09:49.564 "reset": true, 00:09:49.564 "nvme_admin": false, 00:09:49.564 "nvme_io": false, 
00:09:49.564 "nvme_io_md": false, 00:09:49.564 "write_zeroes": true, 00:09:49.564 "zcopy": true, 00:09:49.564 "get_zone_info": false, 00:09:49.564 "zone_management": false, 00:09:49.564 "zone_append": false, 00:09:49.564 "compare": false, 00:09:49.564 "compare_and_write": false, 00:09:49.564 "abort": true, 00:09:49.564 "seek_hole": false, 00:09:49.564 "seek_data": false, 00:09:49.564 "copy": true, 00:09:49.564 "nvme_iov_md": false 00:09:49.564 }, 00:09:49.564 "memory_domains": [ 00:09:49.564 { 00:09:49.564 "dma_device_id": "system", 00:09:49.564 "dma_device_type": 1 00:09:49.564 }, 00:09:49.564 { 00:09:49.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.564 "dma_device_type": 2 00:09:49.564 } 00:09:49.564 ], 00:09:49.564 "driver_specific": {} 00:09:49.564 }' 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:49.564 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:49.822 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:49.822 "name": "BaseBdev2", 00:09:49.822 "aliases": [ 00:09:49.822 "23d7e080-405f-11ef-b2a4-e9dca065e82e" 00:09:49.822 ], 00:09:49.822 "product_name": "Malloc disk", 00:09:49.822 "block_size": 512, 00:09:49.822 "num_blocks": 65536, 00:09:49.822 "uuid": "23d7e080-405f-11ef-b2a4-e9dca065e82e", 00:09:49.822 "assigned_rate_limits": { 00:09:49.822 "rw_ios_per_sec": 0, 00:09:49.822 "rw_mbytes_per_sec": 0, 00:09:49.822 "r_mbytes_per_sec": 0, 00:09:49.822 "w_mbytes_per_sec": 0 00:09:49.822 }, 00:09:49.822 "claimed": true, 00:09:49.822 "claim_type": "exclusive_write", 00:09:49.822 "zoned": false, 00:09:49.822 "supported_io_types": { 00:09:49.822 "read": true, 00:09:49.822 "write": true, 00:09:49.822 "unmap": true, 00:09:49.822 "flush": true, 00:09:49.822 "reset": true, 00:09:49.822 "nvme_admin": false, 00:09:49.822 "nvme_io": false, 00:09:49.822 "nvme_io_md": false, 00:09:49.822 "write_zeroes": true, 00:09:49.822 "zcopy": true, 00:09:49.822 "get_zone_info": false, 00:09:49.822 "zone_management": false, 00:09:49.822 "zone_append": 
false, 00:09:49.822 "compare": false, 00:09:49.822 "compare_and_write": false, 00:09:49.822 "abort": true, 00:09:49.822 "seek_hole": false, 00:09:49.822 "seek_data": false, 00:09:49.823 "copy": true, 00:09:49.823 "nvme_iov_md": false 00:09:49.823 }, 00:09:49.823 "memory_domains": [ 00:09:49.823 { 00:09:49.823 "dma_device_id": "system", 00:09:49.823 "dma_device_type": 1 00:09:49.823 }, 00:09:49.823 { 00:09:49.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.823 "dma_device_type": 2 00:09:49.823 } 00:09:49.823 ], 00:09:49.823 "driver_specific": {} 00:09:49.823 }' 00:09:49.823 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:50.082 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.341 "name": "BaseBdev3", 00:09:50.341 "aliases": [ 00:09:50.341 "24550238-405f-11ef-b2a4-e9dca065e82e" 00:09:50.341 ], 00:09:50.341 "product_name": "Malloc disk", 00:09:50.341 "block_size": 512, 00:09:50.341 "num_blocks": 65536, 00:09:50.341 "uuid": "24550238-405f-11ef-b2a4-e9dca065e82e", 00:09:50.341 "assigned_rate_limits": { 00:09:50.341 "rw_ios_per_sec": 0, 00:09:50.341 "rw_mbytes_per_sec": 0, 00:09:50.341 "r_mbytes_per_sec": 0, 00:09:50.341 "w_mbytes_per_sec": 0 00:09:50.341 }, 00:09:50.341 "claimed": true, 00:09:50.341 "claim_type": "exclusive_write", 00:09:50.341 "zoned": false, 00:09:50.341 "supported_io_types": { 00:09:50.341 "read": true, 00:09:50.341 "write": true, 00:09:50.341 "unmap": true, 00:09:50.341 "flush": true, 00:09:50.341 "reset": true, 00:09:50.341 "nvme_admin": false, 00:09:50.341 "nvme_io": false, 00:09:50.341 "nvme_io_md": false, 00:09:50.341 "write_zeroes": true, 00:09:50.341 "zcopy": true, 00:09:50.341 "get_zone_info": false, 00:09:50.341 "zone_management": false, 00:09:50.341 "zone_append": false, 00:09:50.341 "compare": false, 00:09:50.341 "compare_and_write": false, 00:09:50.341 "abort": true, 00:09:50.341 "seek_hole": false, 00:09:50.341 "seek_data": false, 00:09:50.341 "copy": true, 
00:09:50.341 "nvme_iov_md": false 00:09:50.341 }, 00:09:50.341 "memory_domains": [ 00:09:50.341 { 00:09:50.341 "dma_device_id": "system", 00:09:50.341 "dma_device_type": 1 00:09:50.341 }, 00:09:50.341 { 00:09:50.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.341 "dma_device_type": 2 00:09:50.341 } 00:09:50.341 ], 00:09:50.341 "driver_specific": {} 00:09:50.341 }' 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.341 14:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.341 14:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.341 14:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:50.599 [2024-07-12 14:58:16.226281] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.599 [2024-07-12 14:58:16.226307] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.599 [2024-07-12 14:58:16.226328] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.599 [2024-07-12 14:58:16.226341] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.599 [2024-07-12 14:58:16.226361] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34801b834a00 name Existed_Raid, state offline 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51959 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51959 ']' 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51959 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51959 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 51959' 00:09:50.599 killing process with pid 51959 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51959 00:09:50.599 [2024-07-12 14:58:16.254060] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.599 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51959 00:09:50.599 [2024-07-12 14:58:16.270907] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:50.857 00:09:50.857 real 0m23.790s 00:09:50.857 user 0m43.501s 00:09:50.857 sys 0m3.202s 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 ************************************ 00:09:50.857 END TEST raid_state_function_test 00:09:50.857 ************************************ 00:09:50.857 14:58:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:50.857 14:58:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:50.857 14:58:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:50.857 14:58:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.857 14:58:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.857 ************************************ 00:09:50.857 START TEST raid_state_function_test_sb 00:09:50.857 ************************************ 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:50.857 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52688 00:09:50.858 Process raid pid: 52688 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52688' 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52688 /var/tmp/spdk-raid.sock 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52688 ']' 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.858 14:58:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.858 [2024-07-12 14:58:16.514669] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:09:50.858 [2024-07-12 14:58:16.514811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:51.425 EAL: TSC is not safe to use in SMP mode 00:09:51.425 EAL: TSC is not invariant 00:09:51.425 [2024-07-12 14:58:17.039102] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.425 [2024-07-12 14:58:17.127289] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:51.425 [2024-07-12 14:58:17.129411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.425 [2024-07-12 14:58:17.130152] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.425 [2024-07-12 14:58:17.130166] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.683 14:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.683 14:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:51.683 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:51.942 [2024-07-12 14:58:17.738155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.942 [2024-07-12 14:58:17.738211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.942 [2024-07-12 14:58:17.738216] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.942 [2024-07-12 14:58:17.738225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.942 [2024-07-12 14:58:17.738228] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.942 [2024-07-12 14:58:17.738236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.942 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.201 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.201 "name": "Existed_Raid", 00:09:52.201 "uuid": "2c146c74-405f-11ef-b2a4-e9dca065e82e", 00:09:52.201 "strip_size_kb": 64, 00:09:52.201 "state": "configuring", 00:09:52.201 "raid_level": "raid0", 00:09:52.201 "superblock": true, 00:09:52.201 "num_base_bdevs": 3, 00:09:52.201 "num_base_bdevs_discovered": 0, 00:09:52.201 
"num_base_bdevs_operational": 3, 00:09:52.201 "base_bdevs_list": [ 00:09:52.201 { 00:09:52.201 "name": "BaseBdev1", 00:09:52.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.201 "is_configured": false, 00:09:52.201 "data_offset": 0, 00:09:52.201 "data_size": 0 00:09:52.201 }, 00:09:52.201 { 00:09:52.201 "name": "BaseBdev2", 00:09:52.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.201 "is_configured": false, 00:09:52.201 "data_offset": 0, 00:09:52.201 "data_size": 0 00:09:52.201 }, 00:09:52.201 { 00:09:52.201 "name": "BaseBdev3", 00:09:52.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.201 "is_configured": false, 00:09:52.201 "data_offset": 0, 00:09:52.201 "data_size": 0 00:09:52.201 } 00:09:52.201 ] 00:09:52.201 }' 00:09:52.201 14:58:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.201 14:58:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.769 14:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:52.769 [2024-07-12 14:58:18.546184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.769 [2024-07-12 14:58:18.546210] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bd24434500 name Existed_Raid, state configuring 00:09:52.769 14:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:53.027 [2024-07-12 14:58:18.782212] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.027 [2024-07-12 14:58:18.782258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.027 [2024-07-12 14:58:18.782263] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.027 [2024-07-12 14:58:18.782271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.027 [2024-07-12 14:58:18.782275] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.027 [2024-07-12 14:58:18.782282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.027 14:58:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.286 [2024-07-12 14:58:19.039213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.286 BaseBdev1 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:53.286 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:53.286 14:58:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:53.544 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.802 [ 00:09:53.802 { 00:09:53.802 "name": "BaseBdev1", 00:09:53.802 "aliases": [ 00:09:53.802 "2cdacd05-405f-11ef-b2a4-e9dca065e82e" 00:09:53.802 ], 00:09:53.802 "product_name": "Malloc disk", 00:09:53.802 "block_size": 512, 00:09:53.802 "num_blocks": 65536, 00:09:53.802 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:53.802 "assigned_rate_limits": { 00:09:53.802 "rw_ios_per_sec": 0, 00:09:53.802 "rw_mbytes_per_sec": 0, 00:09:53.802 "r_mbytes_per_sec": 0, 00:09:53.802 "w_mbytes_per_sec": 0 00:09:53.802 }, 00:09:53.802 "claimed": true, 00:09:53.802 "claim_type": "exclusive_write", 00:09:53.802 "zoned": false, 00:09:53.802 "supported_io_types": { 00:09:53.802 "read": true, 00:09:53.802 "write": true, 00:09:53.802 "unmap": true, 00:09:53.802 "flush": true, 00:09:53.802 "reset": true, 00:09:53.802 "nvme_admin": false, 00:09:53.802 "nvme_io": false, 00:09:53.802 "nvme_io_md": false, 00:09:53.802 "write_zeroes": true, 00:09:53.802 "zcopy": true, 00:09:53.802 "get_zone_info": false, 00:09:53.802 "zone_management": false, 00:09:53.802 "zone_append": false, 00:09:53.802 "compare": false, 00:09:53.802 "compare_and_write": false, 00:09:53.802 "abort": true, 00:09:53.802 "seek_hole": false, 00:09:53.802 "seek_data": false, 00:09:53.802 "copy": true, 00:09:53.802 "nvme_iov_md": false 00:09:53.802 }, 00:09:53.802 "memory_domains": [ 00:09:53.802 { 00:09:53.802 "dma_device_id": "system", 00:09:53.802 "dma_device_type": 1 00:09:53.802 }, 00:09:53.802 { 00:09:53.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.802 "dma_device_type": 2 00:09:53.802 } 00:09:53.802 ], 00:09:53.802 "driver_specific": {} 00:09:53.802 } 00:09:53.802 ] 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.802 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.802 14:58:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.060 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.060 "name": "Existed_Raid", 00:09:54.060 "uuid": "2cb3bbf1-405f-11ef-b2a4-e9dca065e82e", 00:09:54.060 "strip_size_kb": 64, 00:09:54.060 "state": "configuring", 00:09:54.060 "raid_level": "raid0", 00:09:54.060 "superblock": true, 00:09:54.060 "num_base_bdevs": 3, 00:09:54.060 "num_base_bdevs_discovered": 1, 00:09:54.060 "num_base_bdevs_operational": 3, 00:09:54.060 "base_bdevs_list": [ 00:09:54.060 { 00:09:54.060 "name": "BaseBdev1", 00:09:54.060 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:54.060 "is_configured": true, 00:09:54.060 "data_offset": 2048, 00:09:54.060 "data_size": 63488 00:09:54.060 }, 00:09:54.060 { 00:09:54.060 "name": "BaseBdev2", 00:09:54.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.060 "is_configured": false, 00:09:54.060 "data_offset": 0, 00:09:54.060 "data_size": 0 00:09:54.060 }, 00:09:54.060 { 00:09:54.060 "name": "BaseBdev3", 00:09:54.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.060 "is_configured": false, 00:09:54.060 "data_offset": 0, 00:09:54.060 "data_size": 0 00:09:54.060 } 00:09:54.060 ] 00:09:54.060 }' 00:09:54.060 14:58:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.060 14:58:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.625 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:54.883 [2024-07-12 14:58:20.502339] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.883 [2024-07-12 14:58:20.502374] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bd24434500 name Existed_Raid, state configuring 00:09:54.883 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:55.141 [2024-07-12 14:58:20.810396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.141 [2024-07-12 14:58:20.811219] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.141 [2024-07-12 14:58:20.811259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.141 [2024-07-12 14:58:20.811265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.141 [2024-07-12 14:58:20.811273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:55.141 14:58:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.141 14:58:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.399 14:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.399 "name": "Existed_Raid", 00:09:55.399 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:09:55.399 "strip_size_kb": 64, 00:09:55.399 "state": "configuring", 00:09:55.399 "raid_level": "raid0", 00:09:55.399 "superblock": true, 00:09:55.399 "num_base_bdevs": 3, 00:09:55.399 "num_base_bdevs_discovered": 1, 00:09:55.399 "num_base_bdevs_operational": 3, 00:09:55.399 "base_bdevs_list": [ 00:09:55.399 { 00:09:55.399 "name": "BaseBdev1", 00:09:55.399 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:55.399 "is_configured": true, 00:09:55.399 "data_offset": 2048, 00:09:55.399 "data_size": 63488 00:09:55.399 }, 00:09:55.399 { 00:09:55.399 "name": "BaseBdev2", 00:09:55.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.399 "is_configured": false, 00:09:55.399 "data_offset": 0, 00:09:55.399 "data_size": 0 00:09:55.399 }, 00:09:55.399 { 00:09:55.399 "name": "BaseBdev3", 00:09:55.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.399 "is_configured": false, 00:09:55.399 "data_offset": 0, 00:09:55.399 "data_size": 0 00:09:55.399 } 00:09:55.399 ] 00:09:55.399 }' 00:09:55.399 14:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.399 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.658 14:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.232 [2024-07-12 14:58:21.778582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.232 BaseBdev2 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:56.232 14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:56.232 
14:58:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:56.491 14:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.750 [ 00:09:56.750 { 00:09:56.750 "name": "BaseBdev2", 00:09:56.750 "aliases": [ 00:09:56.750 "2e7cecc9-405f-11ef-b2a4-e9dca065e82e" 00:09:56.750 ], 00:09:56.750 "product_name": "Malloc disk", 00:09:56.750 "block_size": 512, 00:09:56.750 "num_blocks": 65536, 00:09:56.750 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 00:09:56.750 "assigned_rate_limits": { 00:09:56.750 "rw_ios_per_sec": 0, 00:09:56.750 "rw_mbytes_per_sec": 0, 00:09:56.750 "r_mbytes_per_sec": 0, 00:09:56.750 "w_mbytes_per_sec": 0 00:09:56.750 }, 00:09:56.750 "claimed": true, 00:09:56.750 "claim_type": "exclusive_write", 00:09:56.750 "zoned": false, 00:09:56.750 "supported_io_types": { 00:09:56.750 "read": true, 00:09:56.750 "write": true, 00:09:56.750 "unmap": true, 00:09:56.750 "flush": true, 00:09:56.750 "reset": true, 00:09:56.750 "nvme_admin": false, 00:09:56.750 "nvme_io": false, 00:09:56.750 "nvme_io_md": false, 00:09:56.750 "write_zeroes": true, 00:09:56.750 "zcopy": true, 00:09:56.750 "get_zone_info": false, 00:09:56.750 "zone_management": false, 00:09:56.750 "zone_append": false, 00:09:56.750 "compare": false, 00:09:56.750 "compare_and_write": false, 00:09:56.750 "abort": true, 00:09:56.750 "seek_hole": false, 00:09:56.750 "seek_data": false, 00:09:56.750 "copy": true, 00:09:56.750 "nvme_iov_md": false 00:09:56.750 }, 00:09:56.750 "memory_domains": [ 00:09:56.750 { 00:09:56.750 "dma_device_id": "system", 00:09:56.750 "dma_device_type": 1 00:09:56.750 }, 00:09:56.750 { 00:09:56.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.750 "dma_device_type": 2 00:09:56.750 } 00:09:56.750 ], 00:09:56.750 "driver_specific": {} 00:09:56.750 } 00:09:56.750 ] 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.750 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.009 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.009 "name": "Existed_Raid", 00:09:57.009 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:09:57.009 "strip_size_kb": 64, 00:09:57.009 "state": "configuring", 00:09:57.009 "raid_level": "raid0", 00:09:57.009 "superblock": true, 00:09:57.009 "num_base_bdevs": 3, 00:09:57.009 "num_base_bdevs_discovered": 2, 00:09:57.009 "num_base_bdevs_operational": 3, 00:09:57.009 "base_bdevs_list": [ 00:09:57.009 { 00:09:57.009 "name": "BaseBdev1", 00:09:57.009 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:57.009 "is_configured": true, 00:09:57.009 "data_offset": 2048, 00:09:57.009 "data_size": 63488 00:09:57.009 }, 00:09:57.009 { 00:09:57.009 "name": "BaseBdev2", 00:09:57.009 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 00:09:57.009 "is_configured": true, 00:09:57.009 "data_offset": 2048, 00:09:57.009 "data_size": 63488 00:09:57.009 }, 00:09:57.009 { 00:09:57.009 "name": "BaseBdev3", 00:09:57.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.009 "is_configured": false, 00:09:57.009 "data_offset": 0, 00:09:57.009 "data_size": 0 00:09:57.009 } 00:09:57.009 ] 00:09:57.009 }' 00:09:57.009 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.009 14:58:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.265 14:58:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.523 [2024-07-12 14:58:23.210668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.523 [2024-07-12 14:58:23.210748] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3bd24434a00 00:09:57.523 [2024-07-12 14:58:23.210755] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.523 [2024-07-12 14:58:23.210777] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bd24497e20 00:09:57.523 [2024-07-12 14:58:23.210829] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3bd24434a00 00:09:57.523 [2024-07-12 14:58:23.210833] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3bd24434a00 00:09:57.523 [2024-07-12 14:58:23.210854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.523 BaseBdev3 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:09:57.523 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.781 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.039 [ 00:09:58.039 { 00:09:58.039 "name": "BaseBdev3", 00:09:58.039 "aliases": [ 00:09:58.039 "2f57724c-405f-11ef-b2a4-e9dca065e82e" 00:09:58.039 ], 00:09:58.039 "product_name": "Malloc disk", 00:09:58.039 "block_size": 512, 00:09:58.039 "num_blocks": 65536, 00:09:58.039 "uuid": "2f57724c-405f-11ef-b2a4-e9dca065e82e", 00:09:58.039 "assigned_rate_limits": { 00:09:58.039 "rw_ios_per_sec": 0, 00:09:58.039 "rw_mbytes_per_sec": 0, 00:09:58.039 "r_mbytes_per_sec": 0, 00:09:58.039 "w_mbytes_per_sec": 0 00:09:58.039 }, 00:09:58.039 "claimed": true, 00:09:58.039 "claim_type": "exclusive_write", 00:09:58.039 "zoned": false, 00:09:58.039 "supported_io_types": { 00:09:58.039 "read": true, 00:09:58.039 "write": true, 00:09:58.039 "unmap": true, 00:09:58.039 "flush": true, 00:09:58.039 "reset": true, 00:09:58.039 "nvme_admin": false, 00:09:58.039 "nvme_io": false, 00:09:58.039 "nvme_io_md": false, 00:09:58.039 "write_zeroes": true, 00:09:58.039 "zcopy": true, 00:09:58.039 "get_zone_info": false, 00:09:58.039 "zone_management": false, 00:09:58.039 "zone_append": false, 00:09:58.039 "compare": false, 00:09:58.039 "compare_and_write": false, 00:09:58.039 "abort": true, 00:09:58.039 "seek_hole": false, 00:09:58.039 "seek_data": false, 00:09:58.039 "copy": true, 00:09:58.039 "nvme_iov_md": false 00:09:58.039 }, 00:09:58.039 "memory_domains": [ 00:09:58.039 { 00:09:58.039 "dma_device_id": "system", 00:09:58.039 "dma_device_type": 1 00:09:58.039 }, 00:09:58.039 { 00:09:58.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.039 "dma_device_type": 2 00:09:58.039 } 00:09:58.039 ], 00:09:58.039 "driver_specific": {} 00:09:58.039 } 00:09:58.039 ] 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.039 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.297 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.297 "name": "Existed_Raid", 00:09:58.297 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:09:58.297 "strip_size_kb": 64, 00:09:58.297 "state": "online", 00:09:58.297 "raid_level": "raid0", 00:09:58.297 "superblock": true, 00:09:58.297 "num_base_bdevs": 3, 00:09:58.297 "num_base_bdevs_discovered": 3, 00:09:58.297 "num_base_bdevs_operational": 3, 00:09:58.297 "base_bdevs_list": [ 00:09:58.297 { 00:09:58.297 "name": "BaseBdev1", 00:09:58.297 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:58.297 "is_configured": true, 00:09:58.297 "data_offset": 2048, 00:09:58.297 "data_size": 63488 00:09:58.297 }, 00:09:58.297 { 00:09:58.297 "name": "BaseBdev2", 00:09:58.297 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 00:09:58.297 "is_configured": true, 00:09:58.297 "data_offset": 2048, 00:09:58.297 "data_size": 63488 00:09:58.297 }, 00:09:58.297 { 00:09:58.297 "name": "BaseBdev3", 00:09:58.297 "uuid": "2f57724c-405f-11ef-b2a4-e9dca065e82e", 00:09:58.297 "is_configured": true, 00:09:58.297 "data_offset": 2048, 00:09:58.297 "data_size": 63488 00:09:58.297 } 00:09:58.297 ] 00:09:58.297 }' 00:09:58.297 14:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.297 14:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:58.555 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:58.838 [2024-07-12 14:58:24.538687] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:58.838 "name": "Existed_Raid", 00:09:58.838 "aliases": [ 00:09:58.838 "2de935aa-405f-11ef-b2a4-e9dca065e82e" 00:09:58.838 ], 00:09:58.838 "product_name": "Raid Volume", 00:09:58.838 "block_size": 512, 00:09:58.838 "num_blocks": 190464, 00:09:58.838 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:09:58.838 "assigned_rate_limits": { 00:09:58.838 "rw_ios_per_sec": 0, 00:09:58.838 "rw_mbytes_per_sec": 0, 00:09:58.838 "r_mbytes_per_sec": 0, 00:09:58.838 "w_mbytes_per_sec": 0 00:09:58.838 }, 00:09:58.838 "claimed": false, 00:09:58.838 "zoned": false, 00:09:58.838 "supported_io_types": { 
00:09:58.838 "read": true, 00:09:58.838 "write": true, 00:09:58.838 "unmap": true, 00:09:58.838 "flush": true, 00:09:58.838 "reset": true, 00:09:58.838 "nvme_admin": false, 00:09:58.838 "nvme_io": false, 00:09:58.838 "nvme_io_md": false, 00:09:58.838 "write_zeroes": true, 00:09:58.838 "zcopy": false, 00:09:58.838 "get_zone_info": false, 00:09:58.838 "zone_management": false, 00:09:58.838 "zone_append": false, 00:09:58.838 "compare": false, 00:09:58.838 "compare_and_write": false, 00:09:58.838 "abort": false, 00:09:58.838 "seek_hole": false, 00:09:58.838 "seek_data": false, 00:09:58.838 "copy": false, 00:09:58.838 "nvme_iov_md": false 00:09:58.838 }, 00:09:58.838 "memory_domains": [ 00:09:58.838 { 00:09:58.838 "dma_device_id": "system", 00:09:58.838 "dma_device_type": 1 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.838 "dma_device_type": 2 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "dma_device_id": "system", 00:09:58.838 "dma_device_type": 1 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.838 "dma_device_type": 2 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "dma_device_id": "system", 00:09:58.838 "dma_device_type": 1 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.838 "dma_device_type": 2 00:09:58.838 } 00:09:58.838 ], 00:09:58.838 "driver_specific": { 00:09:58.838 "raid": { 00:09:58.838 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:09:58.838 "strip_size_kb": 64, 00:09:58.838 "state": "online", 00:09:58.838 "raid_level": "raid0", 00:09:58.838 "superblock": true, 00:09:58.838 "num_base_bdevs": 3, 00:09:58.838 "num_base_bdevs_discovered": 3, 00:09:58.838 "num_base_bdevs_operational": 3, 00:09:58.838 "base_bdevs_list": [ 00:09:58.838 { 00:09:58.838 "name": "BaseBdev1", 00:09:58.838 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:58.838 "is_configured": true, 00:09:58.838 "data_offset": 2048, 00:09:58.838 "data_size": 63488 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "name": "BaseBdev2", 00:09:58.838 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 00:09:58.838 "is_configured": true, 00:09:58.838 "data_offset": 2048, 00:09:58.838 "data_size": 63488 00:09:58.838 }, 00:09:58.838 { 00:09:58.838 "name": "BaseBdev3", 00:09:58.838 "uuid": "2f57724c-405f-11ef-b2a4-e9dca065e82e", 00:09:58.838 "is_configured": true, 00:09:58.838 "data_offset": 2048, 00:09:58.838 "data_size": 63488 00:09:58.838 } 00:09:58.838 ] 00:09:58.838 } 00:09:58.838 } 00:09:58.838 }' 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:58.838 BaseBdev2 00:09:58.838 BaseBdev3' 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:58.838 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.112 "name": "BaseBdev1", 00:09:59.112 "aliases": [ 00:09:59.112 "2cdacd05-405f-11ef-b2a4-e9dca065e82e" 00:09:59.112 ], 00:09:59.112 "product_name": 
"Malloc disk", 00:09:59.112 "block_size": 512, 00:09:59.112 "num_blocks": 65536, 00:09:59.112 "uuid": "2cdacd05-405f-11ef-b2a4-e9dca065e82e", 00:09:59.112 "assigned_rate_limits": { 00:09:59.112 "rw_ios_per_sec": 0, 00:09:59.112 "rw_mbytes_per_sec": 0, 00:09:59.112 "r_mbytes_per_sec": 0, 00:09:59.112 "w_mbytes_per_sec": 0 00:09:59.112 }, 00:09:59.112 "claimed": true, 00:09:59.112 "claim_type": "exclusive_write", 00:09:59.112 "zoned": false, 00:09:59.112 "supported_io_types": { 00:09:59.112 "read": true, 00:09:59.112 "write": true, 00:09:59.112 "unmap": true, 00:09:59.112 "flush": true, 00:09:59.112 "reset": true, 00:09:59.112 "nvme_admin": false, 00:09:59.112 "nvme_io": false, 00:09:59.112 "nvme_io_md": false, 00:09:59.112 "write_zeroes": true, 00:09:59.112 "zcopy": true, 00:09:59.112 "get_zone_info": false, 00:09:59.112 "zone_management": false, 00:09:59.112 "zone_append": false, 00:09:59.112 "compare": false, 00:09:59.112 "compare_and_write": false, 00:09:59.112 "abort": true, 00:09:59.112 "seek_hole": false, 00:09:59.112 "seek_data": false, 00:09:59.112 "copy": true, 00:09:59.112 "nvme_iov_md": false 00:09:59.112 }, 00:09:59.112 "memory_domains": [ 00:09:59.112 { 00:09:59.112 "dma_device_id": "system", 00:09:59.112 "dma_device_type": 1 00:09:59.112 }, 00:09:59.112 { 00:09:59.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.112 "dma_device_type": 2 00:09:59.112 } 00:09:59.112 ], 00:09:59.112 "driver_specific": {} 00:09:59.112 }' 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.112 14:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:59.370 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.370 "name": "BaseBdev2", 00:09:59.370 "aliases": [ 00:09:59.370 "2e7cecc9-405f-11ef-b2a4-e9dca065e82e" 00:09:59.370 ], 00:09:59.370 "product_name": "Malloc disk", 00:09:59.370 "block_size": 512, 00:09:59.370 "num_blocks": 65536, 00:09:59.370 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 
00:09:59.370 "assigned_rate_limits": { 00:09:59.370 "rw_ios_per_sec": 0, 00:09:59.370 "rw_mbytes_per_sec": 0, 00:09:59.370 "r_mbytes_per_sec": 0, 00:09:59.370 "w_mbytes_per_sec": 0 00:09:59.370 }, 00:09:59.370 "claimed": true, 00:09:59.370 "claim_type": "exclusive_write", 00:09:59.370 "zoned": false, 00:09:59.370 "supported_io_types": { 00:09:59.370 "read": true, 00:09:59.370 "write": true, 00:09:59.370 "unmap": true, 00:09:59.370 "flush": true, 00:09:59.370 "reset": true, 00:09:59.370 "nvme_admin": false, 00:09:59.370 "nvme_io": false, 00:09:59.370 "nvme_io_md": false, 00:09:59.370 "write_zeroes": true, 00:09:59.370 "zcopy": true, 00:09:59.370 "get_zone_info": false, 00:09:59.370 "zone_management": false, 00:09:59.370 "zone_append": false, 00:09:59.370 "compare": false, 00:09:59.370 "compare_and_write": false, 00:09:59.370 "abort": true, 00:09:59.370 "seek_hole": false, 00:09:59.370 "seek_data": false, 00:09:59.370 "copy": true, 00:09:59.370 "nvme_iov_md": false 00:09:59.370 }, 00:09:59.370 "memory_domains": [ 00:09:59.370 { 00:09:59.370 "dma_device_id": "system", 00:09:59.370 "dma_device_type": 1 00:09:59.370 }, 00:09:59.370 { 00:09:59.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.370 "dma_device_type": 2 00:09:59.370 } 00:09:59.370 ], 00:09:59.370 "driver_specific": {} 00:09:59.371 }' 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:59.371 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.629 "name": "BaseBdev3", 00:09:59.629 "aliases": [ 00:09:59.629 "2f57724c-405f-11ef-b2a4-e9dca065e82e" 00:09:59.629 ], 00:09:59.629 "product_name": "Malloc disk", 00:09:59.629 "block_size": 512, 00:09:59.629 "num_blocks": 65536, 00:09:59.629 "uuid": "2f57724c-405f-11ef-b2a4-e9dca065e82e", 00:09:59.629 "assigned_rate_limits": { 00:09:59.629 "rw_ios_per_sec": 0, 00:09:59.629 "rw_mbytes_per_sec": 0, 00:09:59.629 "r_mbytes_per_sec": 0, 
00:09:59.629 "w_mbytes_per_sec": 0 00:09:59.629 }, 00:09:59.629 "claimed": true, 00:09:59.629 "claim_type": "exclusive_write", 00:09:59.629 "zoned": false, 00:09:59.629 "supported_io_types": { 00:09:59.629 "read": true, 00:09:59.629 "write": true, 00:09:59.629 "unmap": true, 00:09:59.629 "flush": true, 00:09:59.629 "reset": true, 00:09:59.629 "nvme_admin": false, 00:09:59.629 "nvme_io": false, 00:09:59.629 "nvme_io_md": false, 00:09:59.629 "write_zeroes": true, 00:09:59.629 "zcopy": true, 00:09:59.629 "get_zone_info": false, 00:09:59.629 "zone_management": false, 00:09:59.629 "zone_append": false, 00:09:59.629 "compare": false, 00:09:59.629 "compare_and_write": false, 00:09:59.629 "abort": true, 00:09:59.629 "seek_hole": false, 00:09:59.629 "seek_data": false, 00:09:59.629 "copy": true, 00:09:59.629 "nvme_iov_md": false 00:09:59.629 }, 00:09:59.629 "memory_domains": [ 00:09:59.629 { 00:09:59.629 "dma_device_id": "system", 00:09:59.629 "dma_device_type": 1 00:09:59.629 }, 00:09:59.629 { 00:09:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.629 "dma_device_type": 2 00:09:59.629 } 00:09:59.629 ], 00:09:59.629 "driver_specific": {} 00:09:59.629 }' 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.629 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.887 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.888 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.888 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.888 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.888 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.888 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:59.888 [2024-07-12 14:58:25.702738] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.888 [2024-07-12 14:58:25.702767] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.888 [2024-07-12 14:58:25.702785] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:00.146 14:58:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.146 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.405 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.405 "name": "Existed_Raid", 00:10:00.405 "uuid": "2de935aa-405f-11ef-b2a4-e9dca065e82e", 00:10:00.405 "strip_size_kb": 64, 00:10:00.405 "state": "offline", 00:10:00.405 "raid_level": "raid0", 00:10:00.405 "superblock": true, 00:10:00.405 "num_base_bdevs": 3, 00:10:00.405 "num_base_bdevs_discovered": 2, 00:10:00.405 "num_base_bdevs_operational": 2, 00:10:00.405 "base_bdevs_list": [ 00:10:00.405 { 00:10:00.405 "name": null, 00:10:00.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.405 "is_configured": false, 00:10:00.405 "data_offset": 2048, 00:10:00.405 "data_size": 63488 00:10:00.405 }, 00:10:00.405 { 00:10:00.405 "name": "BaseBdev2", 00:10:00.405 "uuid": "2e7cecc9-405f-11ef-b2a4-e9dca065e82e", 00:10:00.405 "is_configured": true, 00:10:00.405 "data_offset": 2048, 00:10:00.405 "data_size": 63488 00:10:00.405 }, 00:10:00.405 { 00:10:00.405 "name": "BaseBdev3", 00:10:00.405 "uuid": "2f57724c-405f-11ef-b2a4-e9dca065e82e", 00:10:00.405 "is_configured": true, 00:10:00.405 "data_offset": 2048, 00:10:00.405 "data_size": 63488 00:10:00.405 } 00:10:00.405 ] 00:10:00.405 }' 00:10:00.405 14:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.405 14:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.664 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:00.664 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:00.664 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.664 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:00.922 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:00.922 14:58:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.922 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:01.181 [2024-07-12 14:58:26.796551] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.181 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:01.181 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:01.181 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.181 14:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:01.439 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:01.439 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.439 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:01.697 [2024-07-12 14:58:27.378367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.697 [2024-07-12 14:58:27.378399] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bd24434a00 name Existed_Raid, state offline 00:10:01.697 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:01.697 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:01.697 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.697 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:01.955 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.213 BaseBdev2 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:10:02.213 14:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:02.471 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.800 [ 00:10:02.800 { 00:10:02.800 "name": "BaseBdev2", 00:10:02.800 "aliases": [ 00:10:02.800 "322b9102-405f-11ef-b2a4-e9dca065e82e" 00:10:02.800 ], 00:10:02.800 "product_name": "Malloc disk", 00:10:02.800 "block_size": 512, 00:10:02.800 "num_blocks": 65536, 00:10:02.800 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:02.800 "assigned_rate_limits": { 00:10:02.800 "rw_ios_per_sec": 0, 00:10:02.800 "rw_mbytes_per_sec": 0, 00:10:02.800 "r_mbytes_per_sec": 0, 00:10:02.800 "w_mbytes_per_sec": 0 00:10:02.800 }, 00:10:02.800 "claimed": false, 00:10:02.800 "zoned": false, 00:10:02.800 "supported_io_types": { 00:10:02.800 "read": true, 00:10:02.800 "write": true, 00:10:02.800 "unmap": true, 00:10:02.800 "flush": true, 00:10:02.800 "reset": true, 00:10:02.800 "nvme_admin": false, 00:10:02.800 "nvme_io": false, 00:10:02.800 "nvme_io_md": false, 00:10:02.800 "write_zeroes": true, 00:10:02.800 "zcopy": true, 00:10:02.800 "get_zone_info": false, 00:10:02.800 "zone_management": false, 00:10:02.800 "zone_append": false, 00:10:02.800 "compare": false, 00:10:02.800 "compare_and_write": false, 00:10:02.800 "abort": true, 00:10:02.800 "seek_hole": false, 00:10:02.800 "seek_data": false, 00:10:02.800 "copy": true, 00:10:02.800 "nvme_iov_md": false 00:10:02.800 }, 00:10:02.800 "memory_domains": [ 00:10:02.800 { 00:10:02.800 "dma_device_id": "system", 00:10:02.800 "dma_device_type": 1 00:10:02.800 }, 00:10:02.800 { 00:10:02.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.800 "dma_device_type": 2 00:10:02.800 } 00:10:02.800 ], 00:10:02.800 "driver_specific": {} 00:10:02.800 } 00:10:02.800 ] 00:10:02.800 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:02.800 14:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:02.800 14:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:02.800 14:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.060 BaseBdev3 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:03.060 14:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:03.319 14:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.579 [ 00:10:03.579 { 00:10:03.579 "name": "BaseBdev3", 00:10:03.579 "aliases": [ 00:10:03.579 "32ac5cf1-405f-11ef-b2a4-e9dca065e82e" 00:10:03.579 ], 00:10:03.579 "product_name": "Malloc disk", 00:10:03.579 "block_size": 512, 00:10:03.579 "num_blocks": 65536, 00:10:03.579 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:03.579 "assigned_rate_limits": { 00:10:03.579 "rw_ios_per_sec": 0, 00:10:03.579 "rw_mbytes_per_sec": 0, 00:10:03.579 "r_mbytes_per_sec": 0, 00:10:03.579 "w_mbytes_per_sec": 0 00:10:03.579 }, 00:10:03.579 "claimed": false, 00:10:03.579 "zoned": false, 00:10:03.579 "supported_io_types": { 00:10:03.579 "read": true, 00:10:03.579 "write": true, 00:10:03.579 "unmap": true, 00:10:03.579 "flush": true, 00:10:03.579 "reset": true, 00:10:03.579 "nvme_admin": false, 00:10:03.579 "nvme_io": false, 00:10:03.579 "nvme_io_md": false, 00:10:03.579 "write_zeroes": true, 00:10:03.579 "zcopy": true, 00:10:03.579 "get_zone_info": false, 00:10:03.579 "zone_management": false, 00:10:03.579 "zone_append": false, 00:10:03.579 "compare": false, 00:10:03.579 "compare_and_write": false, 00:10:03.579 "abort": true, 00:10:03.579 "seek_hole": false, 00:10:03.579 "seek_data": false, 00:10:03.579 "copy": true, 00:10:03.579 "nvme_iov_md": false 00:10:03.579 }, 00:10:03.579 "memory_domains": [ 00:10:03.579 { 00:10:03.579 "dma_device_id": "system", 00:10:03.579 "dma_device_type": 1 00:10:03.579 }, 00:10:03.579 { 00:10:03.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.579 "dma_device_type": 2 00:10:03.579 } 00:10:03.579 ], 00:10:03.579 "driver_specific": {} 00:10:03.579 } 00:10:03.579 ] 00:10:03.579 14:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:03.579 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:03.579 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:03.579 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:03.838 [2024-07-12 14:58:29.660286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.838 [2024-07-12 14:58:29.660360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.838 [2024-07-12 14:58:29.660369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.838 [2024-07-12 14:58:29.660930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
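The trace above drives SPDK's JSON-RPC interface through scripts/rpc.py and checks the raid bdev's state by filtering bdev_raid_get_bdevs output with jq. A minimal standalone sketch of that same check follows; the socket path, script path, RPC method, and jq filter are taken from the trace, while the script itself and its variable names are illustrative assumptions rather than part of the test suite.

#!/usr/bin/env bash
# Sketch of the state check performed by verify_raid_bdev_state in the trace above.
# Assumes an SPDK application is listening on /var/tmp/spdk-raid.sock, as in the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Fetch every raid bdev and keep only the one named Existed_Raid.
info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

# The test expects "configuring" while base bdevs are still missing and
# "online" once all of BaseBdev1..BaseBdev3 have been discovered.
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
echo "Existed_Raid state=$state discovered=$discovered"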
00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.097 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.355 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.355 "name": "Existed_Raid", 00:10:04.355 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:04.355 "strip_size_kb": 64, 00:10:04.355 "state": "configuring", 00:10:04.355 "raid_level": "raid0", 00:10:04.355 "superblock": true, 00:10:04.355 "num_base_bdevs": 3, 00:10:04.355 "num_base_bdevs_discovered": 2, 00:10:04.355 "num_base_bdevs_operational": 3, 00:10:04.355 "base_bdevs_list": [ 00:10:04.355 { 00:10:04.355 "name": "BaseBdev1", 00:10:04.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.355 "is_configured": false, 00:10:04.355 "data_offset": 0, 00:10:04.355 "data_size": 0 00:10:04.355 }, 00:10:04.355 { 00:10:04.355 "name": "BaseBdev2", 00:10:04.355 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:04.355 "is_configured": true, 00:10:04.355 "data_offset": 2048, 00:10:04.355 "data_size": 63488 00:10:04.355 }, 00:10:04.355 { 00:10:04.355 "name": "BaseBdev3", 00:10:04.355 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:04.355 "is_configured": true, 00:10:04.355 "data_offset": 2048, 00:10:04.355 "data_size": 63488 00:10:04.355 } 00:10:04.355 ] 00:10:04.355 }' 00:10:04.355 14:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.355 14:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.616 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:04.875 [2024-07-12 14:58:30.520312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.876 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.135 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:05.135 "name": "Existed_Raid", 00:10:05.135 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:05.135 "strip_size_kb": 64, 00:10:05.135 "state": "configuring", 00:10:05.135 "raid_level": "raid0", 00:10:05.135 "superblock": true, 00:10:05.135 "num_base_bdevs": 3, 00:10:05.135 "num_base_bdevs_discovered": 1, 00:10:05.135 "num_base_bdevs_operational": 3, 00:10:05.135 "base_bdevs_list": [ 00:10:05.135 { 00:10:05.135 "name": "BaseBdev1", 00:10:05.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.135 "is_configured": false, 00:10:05.135 "data_offset": 0, 00:10:05.135 "data_size": 0 00:10:05.135 }, 00:10:05.135 { 00:10:05.135 "name": null, 00:10:05.135 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:05.135 "is_configured": false, 00:10:05.135 "data_offset": 2048, 00:10:05.135 "data_size": 63488 00:10:05.135 }, 00:10:05.135 { 00:10:05.135 "name": "BaseBdev3", 00:10:05.135 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:05.135 "is_configured": true, 00:10:05.135 "data_offset": 2048, 00:10:05.135 "data_size": 63488 00:10:05.135 } 00:10:05.135 ] 00:10:05.135 }' 00:10:05.135 14:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:05.135 14:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.394 14:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.394 14:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.652 14:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:05.652 14:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.911 [2024-07-12 14:58:31.640397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.912 BaseBdev1 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:05.912 14:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:06.171 14:58:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.491 [ 00:10:06.491 { 00:10:06.491 "name": "BaseBdev1", 00:10:06.491 "aliases": [ 00:10:06.491 "345db888-405f-11ef-b2a4-e9dca065e82e" 00:10:06.491 ], 00:10:06.491 "product_name": "Malloc disk", 00:10:06.491 "block_size": 512, 00:10:06.491 "num_blocks": 65536, 00:10:06.491 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:06.491 "assigned_rate_limits": { 00:10:06.491 "rw_ios_per_sec": 0, 00:10:06.491 "rw_mbytes_per_sec": 0, 00:10:06.491 "r_mbytes_per_sec": 0, 00:10:06.491 "w_mbytes_per_sec": 0 00:10:06.491 }, 00:10:06.491 "claimed": true, 00:10:06.491 "claim_type": "exclusive_write", 00:10:06.491 "zoned": false, 00:10:06.491 "supported_io_types": { 00:10:06.491 "read": true, 00:10:06.491 "write": true, 00:10:06.491 "unmap": true, 00:10:06.491 "flush": true, 00:10:06.491 "reset": true, 00:10:06.491 "nvme_admin": false, 00:10:06.491 "nvme_io": false, 00:10:06.491 "nvme_io_md": false, 00:10:06.491 "write_zeroes": true, 00:10:06.491 "zcopy": true, 00:10:06.491 "get_zone_info": false, 00:10:06.491 "zone_management": false, 00:10:06.491 "zone_append": false, 00:10:06.491 "compare": false, 00:10:06.491 "compare_and_write": false, 00:10:06.491 "abort": true, 00:10:06.491 "seek_hole": false, 00:10:06.491 "seek_data": false, 00:10:06.491 "copy": true, 00:10:06.491 "nvme_iov_md": false 00:10:06.491 }, 00:10:06.491 "memory_domains": [ 00:10:06.491 { 00:10:06.491 "dma_device_id": "system", 00:10:06.491 "dma_device_type": 1 00:10:06.491 }, 00:10:06.491 { 00:10:06.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.491 "dma_device_type": 2 00:10:06.491 } 00:10:06.491 ], 00:10:06.491 "driver_specific": {} 00:10:06.491 } 00:10:06.491 ] 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.491 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.765 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:10:06.765 "name": "Existed_Raid", 00:10:06.765 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:06.765 "strip_size_kb": 64, 00:10:06.765 "state": "configuring", 00:10:06.765 "raid_level": "raid0", 00:10:06.765 "superblock": true, 00:10:06.765 "num_base_bdevs": 3, 00:10:06.765 "num_base_bdevs_discovered": 2, 00:10:06.765 "num_base_bdevs_operational": 3, 00:10:06.765 "base_bdevs_list": [ 00:10:06.765 { 00:10:06.765 "name": "BaseBdev1", 00:10:06.765 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:06.765 "is_configured": true, 00:10:06.765 "data_offset": 2048, 00:10:06.765 "data_size": 63488 00:10:06.765 }, 00:10:06.765 { 00:10:06.765 "name": null, 00:10:06.765 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:06.765 "is_configured": false, 00:10:06.765 "data_offset": 2048, 00:10:06.765 "data_size": 63488 00:10:06.765 }, 00:10:06.765 { 00:10:06.765 "name": "BaseBdev3", 00:10:06.765 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:06.765 "is_configured": true, 00:10:06.765 "data_offset": 2048, 00:10:06.765 "data_size": 63488 00:10:06.765 } 00:10:06.765 ] 00:10:06.765 }' 00:10:06.765 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.765 14:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.024 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.024 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.281 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:07.281 14:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:07.539 [2024-07-12 14:58:33.176207] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.539 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:07.797 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.797 "name": "Existed_Raid", 00:10:07.797 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:07.797 "strip_size_kb": 64, 00:10:07.797 "state": "configuring", 00:10:07.797 "raid_level": "raid0", 00:10:07.797 "superblock": true, 00:10:07.797 "num_base_bdevs": 3, 00:10:07.797 "num_base_bdevs_discovered": 1, 00:10:07.797 "num_base_bdevs_operational": 3, 00:10:07.797 "base_bdevs_list": [ 00:10:07.797 { 00:10:07.797 "name": "BaseBdev1", 00:10:07.797 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:07.797 "is_configured": true, 00:10:07.797 "data_offset": 2048, 00:10:07.797 "data_size": 63488 00:10:07.797 }, 00:10:07.797 { 00:10:07.797 "name": null, 00:10:07.797 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:07.797 "is_configured": false, 00:10:07.797 "data_offset": 2048, 00:10:07.797 "data_size": 63488 00:10:07.797 }, 00:10:07.797 { 00:10:07.797 "name": null, 00:10:07.797 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:07.797 "is_configured": false, 00:10:07.797 "data_offset": 2048, 00:10:07.797 "data_size": 63488 00:10:07.797 } 00:10:07.797 ] 00:10:07.797 }' 00:10:07.797 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.797 14:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.056 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.056 14:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.313 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:08.313 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.571 [2024-07-12 14:58:34.336187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.571 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.830 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.830 "name": "Existed_Raid", 00:10:08.830 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:08.830 "strip_size_kb": 64, 00:10:08.830 "state": "configuring", 00:10:08.830 "raid_level": "raid0", 00:10:08.830 "superblock": true, 00:10:08.830 "num_base_bdevs": 3, 00:10:08.830 "num_base_bdevs_discovered": 2, 00:10:08.830 "num_base_bdevs_operational": 3, 00:10:08.830 "base_bdevs_list": [ 00:10:08.830 { 00:10:08.830 "name": "BaseBdev1", 00:10:08.830 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:08.830 "is_configured": true, 00:10:08.830 "data_offset": 2048, 00:10:08.830 "data_size": 63488 00:10:08.830 }, 00:10:08.830 { 00:10:08.830 "name": null, 00:10:08.830 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:08.830 "is_configured": false, 00:10:08.830 "data_offset": 2048, 00:10:08.830 "data_size": 63488 00:10:08.830 }, 00:10:08.830 { 00:10:08.830 "name": "BaseBdev3", 00:10:08.830 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:08.830 "is_configured": true, 00:10:08.830 "data_offset": 2048, 00:10:08.830 "data_size": 63488 00:10:08.830 } 00:10:08.830 ] 00:10:08.830 }' 00:10:08.830 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.830 14:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.396 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.396 14:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.655 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:09.655 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:09.913 [2024-07-12 14:58:35.512173] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.913 14:58:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.913 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.172 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.172 "name": "Existed_Raid", 00:10:10.172 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:10.172 "strip_size_kb": 64, 00:10:10.172 "state": "configuring", 00:10:10.172 "raid_level": "raid0", 00:10:10.172 "superblock": true, 00:10:10.172 "num_base_bdevs": 3, 00:10:10.172 "num_base_bdevs_discovered": 1, 00:10:10.172 "num_base_bdevs_operational": 3, 00:10:10.172 "base_bdevs_list": [ 00:10:10.172 { 00:10:10.172 "name": null, 00:10:10.172 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:10.172 "is_configured": false, 00:10:10.172 "data_offset": 2048, 00:10:10.172 "data_size": 63488 00:10:10.172 }, 00:10:10.172 { 00:10:10.172 "name": null, 00:10:10.172 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:10.172 "is_configured": false, 00:10:10.172 "data_offset": 2048, 00:10:10.172 "data_size": 63488 00:10:10.172 }, 00:10:10.172 { 00:10:10.172 "name": "BaseBdev3", 00:10:10.172 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:10.172 "is_configured": true, 00:10:10.172 "data_offset": 2048, 00:10:10.172 "data_size": 63488 00:10:10.172 } 00:10:10.172 ] 00:10:10.172 }' 00:10:10.172 14:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.172 14:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.430 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.430 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.689 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:10.689 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:11.257 [2024-07-12 14:58:36.789996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.257 14:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.257 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.257 "name": "Existed_Raid", 00:10:11.257 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:11.257 "strip_size_kb": 64, 00:10:11.257 "state": "configuring", 00:10:11.257 "raid_level": "raid0", 00:10:11.257 "superblock": true, 00:10:11.257 "num_base_bdevs": 3, 00:10:11.257 "num_base_bdevs_discovered": 2, 00:10:11.257 "num_base_bdevs_operational": 3, 00:10:11.257 "base_bdevs_list": [ 00:10:11.257 { 00:10:11.257 "name": null, 00:10:11.257 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:11.257 "is_configured": false, 00:10:11.257 "data_offset": 2048, 00:10:11.257 "data_size": 63488 00:10:11.257 }, 00:10:11.257 { 00:10:11.257 "name": "BaseBdev2", 00:10:11.257 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:11.257 "is_configured": true, 00:10:11.257 "data_offset": 2048, 00:10:11.257 "data_size": 63488 00:10:11.257 }, 00:10:11.257 { 00:10:11.257 "name": "BaseBdev3", 00:10:11.257 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:11.257 "is_configured": true, 00:10:11.257 "data_offset": 2048, 00:10:11.257 "data_size": 63488 00:10:11.257 } 00:10:11.257 ] 00:10:11.257 }' 00:10:11.257 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.257 14:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.517 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.517 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.085 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:12.085 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.085 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.085 14:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 345db888-405f-11ef-b2a4-e9dca065e82e 00:10:12.344 [2024-07-12 14:58:38.098135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.344 [2024-07-12 14:58:38.098201] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3bd24434a00 00:10:12.344 [2024-07-12 14:58:38.098207] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:12.344 [2024-07-12 14:58:38.098227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bd24497e20 00:10:12.344 [2024-07-12 14:58:38.098280] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3bd24434a00 00:10:12.344 [2024-07-12 14:58:38.098285] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3bd24434a00 00:10:12.344 [2024-07-12 14:58:38.098305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.344 NewBaseBdev 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:12.344 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:12.602 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.861 [ 00:10:12.861 { 00:10:12.861 "name": "NewBaseBdev", 00:10:12.861 "aliases": [ 00:10:12.861 "345db888-405f-11ef-b2a4-e9dca065e82e" 00:10:12.861 ], 00:10:12.861 "product_name": "Malloc disk", 00:10:12.861 "block_size": 512, 00:10:12.861 "num_blocks": 65536, 00:10:12.861 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:12.861 "assigned_rate_limits": { 00:10:12.861 "rw_ios_per_sec": 0, 00:10:12.861 "rw_mbytes_per_sec": 0, 00:10:12.861 "r_mbytes_per_sec": 0, 00:10:12.861 "w_mbytes_per_sec": 0 00:10:12.861 }, 00:10:12.861 "claimed": true, 00:10:12.861 "claim_type": "exclusive_write", 00:10:12.861 "zoned": false, 00:10:12.861 "supported_io_types": { 00:10:12.861 "read": true, 00:10:12.861 "write": true, 00:10:12.861 "unmap": true, 00:10:12.861 "flush": true, 00:10:12.861 "reset": true, 00:10:12.861 "nvme_admin": false, 00:10:12.861 "nvme_io": false, 00:10:12.861 "nvme_io_md": false, 00:10:12.861 "write_zeroes": true, 00:10:12.861 "zcopy": true, 00:10:12.861 "get_zone_info": false, 00:10:12.861 "zone_management": false, 00:10:12.861 "zone_append": false, 00:10:12.861 "compare": false, 00:10:12.861 "compare_and_write": false, 00:10:12.861 "abort": true, 00:10:12.861 "seek_hole": false, 00:10:12.861 "seek_data": false, 00:10:12.861 "copy": true, 00:10:12.861 "nvme_iov_md": false 00:10:12.861 }, 00:10:12.861 "memory_domains": [ 00:10:12.861 { 00:10:12.861 "dma_device_id": "system", 00:10:12.861 "dma_device_type": 1 00:10:12.861 }, 00:10:12.861 { 00:10:12.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.861 "dma_device_type": 2 00:10:12.861 } 00:10:12.861 ], 00:10:12.861 "driver_specific": {} 00:10:12.861 } 00:10:12.861 ] 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.861 14:58:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.861 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.119 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:13.119 "name": "Existed_Raid", 00:10:13.119 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:13.119 "strip_size_kb": 64, 00:10:13.119 "state": "online", 00:10:13.119 "raid_level": "raid0", 00:10:13.119 "superblock": true, 00:10:13.119 "num_base_bdevs": 3, 00:10:13.119 "num_base_bdevs_discovered": 3, 00:10:13.119 "num_base_bdevs_operational": 3, 00:10:13.119 "base_bdevs_list": [ 00:10:13.119 { 00:10:13.119 "name": "NewBaseBdev", 00:10:13.119 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:13.119 "is_configured": true, 00:10:13.119 "data_offset": 2048, 00:10:13.119 "data_size": 63488 00:10:13.119 }, 00:10:13.119 { 00:10:13.119 "name": "BaseBdev2", 00:10:13.119 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:13.119 "is_configured": true, 00:10:13.119 "data_offset": 2048, 00:10:13.119 "data_size": 63488 00:10:13.119 }, 00:10:13.119 { 00:10:13.119 "name": "BaseBdev3", 00:10:13.119 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:13.119 "is_configured": true, 00:10:13.119 "data_offset": 2048, 00:10:13.119 "data_size": 63488 00:10:13.119 } 00:10:13.119 ] 00:10:13.119 }' 00:10:13.119 14:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:13.119 14:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:13.377 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:13.634 [2024-07-12 14:58:39.405991] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.634 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:13.634 "name": "Existed_Raid", 00:10:13.634 "aliases": [ 00:10:13.634 "332f98bb-405f-11ef-b2a4-e9dca065e82e" 00:10:13.634 ], 00:10:13.634 "product_name": "Raid Volume", 00:10:13.634 "block_size": 512, 00:10:13.634 "num_blocks": 190464, 00:10:13.634 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:13.634 "assigned_rate_limits": { 00:10:13.634 "rw_ios_per_sec": 0, 00:10:13.634 "rw_mbytes_per_sec": 0, 00:10:13.634 "r_mbytes_per_sec": 0, 00:10:13.634 "w_mbytes_per_sec": 0 00:10:13.634 }, 00:10:13.634 "claimed": false, 00:10:13.634 "zoned": false, 00:10:13.634 "supported_io_types": { 00:10:13.634 "read": true, 00:10:13.634 "write": true, 00:10:13.634 "unmap": true, 00:10:13.634 "flush": true, 00:10:13.634 "reset": true, 00:10:13.634 "nvme_admin": false, 00:10:13.634 "nvme_io": false, 00:10:13.634 "nvme_io_md": false, 00:10:13.634 "write_zeroes": true, 00:10:13.634 "zcopy": false, 00:10:13.634 "get_zone_info": false, 00:10:13.634 "zone_management": false, 00:10:13.634 "zone_append": false, 00:10:13.634 "compare": false, 00:10:13.634 "compare_and_write": false, 00:10:13.634 "abort": false, 00:10:13.634 "seek_hole": false, 00:10:13.634 "seek_data": false, 00:10:13.634 "copy": false, 00:10:13.634 "nvme_iov_md": false 00:10:13.634 }, 00:10:13.634 "memory_domains": [ 00:10:13.634 { 00:10:13.634 "dma_device_id": "system", 00:10:13.634 "dma_device_type": 1 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.634 "dma_device_type": 2 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "dma_device_id": "system", 00:10:13.634 "dma_device_type": 1 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.634 "dma_device_type": 2 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "dma_device_id": "system", 00:10:13.634 "dma_device_type": 1 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.634 "dma_device_type": 2 00:10:13.634 } 00:10:13.634 ], 00:10:13.634 "driver_specific": { 00:10:13.634 "raid": { 00:10:13.634 "uuid": "332f98bb-405f-11ef-b2a4-e9dca065e82e", 00:10:13.634 "strip_size_kb": 64, 00:10:13.634 "state": "online", 00:10:13.634 "raid_level": "raid0", 00:10:13.634 "superblock": true, 00:10:13.634 "num_base_bdevs": 3, 00:10:13.634 "num_base_bdevs_discovered": 3, 00:10:13.634 "num_base_bdevs_operational": 3, 00:10:13.634 "base_bdevs_list": [ 00:10:13.634 { 00:10:13.634 "name": "NewBaseBdev", 00:10:13.634 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:13.634 "is_configured": true, 00:10:13.634 "data_offset": 2048, 00:10:13.634 "data_size": 63488 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "name": "BaseBdev2", 00:10:13.634 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:13.634 "is_configured": true, 00:10:13.634 "data_offset": 2048, 00:10:13.634 "data_size": 63488 00:10:13.634 }, 00:10:13.634 { 00:10:13.634 "name": "BaseBdev3", 00:10:13.634 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:13.634 "is_configured": true, 00:10:13.634 "data_offset": 2048, 00:10:13.634 "data_size": 63488 00:10:13.634 } 00:10:13.634 ] 00:10:13.634 } 00:10:13.634 } 00:10:13.634 }' 00:10:13.634 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.634 14:58:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:13.634 BaseBdev2 00:10:13.634 BaseBdev3' 00:10:13.634 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.634 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.634 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:13.892 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.892 "name": "NewBaseBdev", 00:10:13.892 "aliases": [ 00:10:13.892 "345db888-405f-11ef-b2a4-e9dca065e82e" 00:10:13.892 ], 00:10:13.892 "product_name": "Malloc disk", 00:10:13.892 "block_size": 512, 00:10:13.892 "num_blocks": 65536, 00:10:13.892 "uuid": "345db888-405f-11ef-b2a4-e9dca065e82e", 00:10:13.892 "assigned_rate_limits": { 00:10:13.892 "rw_ios_per_sec": 0, 00:10:13.892 "rw_mbytes_per_sec": 0, 00:10:13.892 "r_mbytes_per_sec": 0, 00:10:13.892 "w_mbytes_per_sec": 0 00:10:13.892 }, 00:10:13.892 "claimed": true, 00:10:13.892 "claim_type": "exclusive_write", 00:10:13.892 "zoned": false, 00:10:13.892 "supported_io_types": { 00:10:13.892 "read": true, 00:10:13.892 "write": true, 00:10:13.892 "unmap": true, 00:10:13.892 "flush": true, 00:10:13.892 "reset": true, 00:10:13.892 "nvme_admin": false, 00:10:13.892 "nvme_io": false, 00:10:13.892 "nvme_io_md": false, 00:10:13.892 "write_zeroes": true, 00:10:13.892 "zcopy": true, 00:10:13.892 "get_zone_info": false, 00:10:13.892 "zone_management": false, 00:10:13.892 "zone_append": false, 00:10:13.892 "compare": false, 00:10:13.892 "compare_and_write": false, 00:10:13.892 "abort": true, 00:10:13.892 "seek_hole": false, 00:10:13.893 "seek_data": false, 00:10:13.893 "copy": true, 00:10:13.893 "nvme_iov_md": false 00:10:13.893 }, 00:10:13.893 "memory_domains": [ 00:10:13.893 { 00:10:13.893 "dma_device_id": "system", 00:10:13.893 "dma_device_type": 1 00:10:13.893 }, 00:10:13.893 { 00:10:13.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.893 "dma_device_type": 2 00:10:13.893 } 00:10:13.893 ], 00:10:13.893 "driver_specific": {} 00:10:13.893 }' 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.893 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
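The run of jq probes above and below is the test's per-base-bdev property check: for every base bdev reported as configured in the raid volume's dump, it fetches the bdev with bdev_get_bdevs and asserts that its block size matches the raid volume while the metadata fields (md_size, md_interleave, dif_type) stay null. A minimal standalone sketch of that check, reconstructed from the traced rpc.py and jq calls rather than copied from bdev_raid.sh (the helper name and local variables are illustrative):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_base_bdev_properties() {
    local raid_name=$1
    # Block size of the raid volume itself, used as the expected value below.
    local raid_bs
    raid_bs=$($rpc bdev_get_bdevs -b "$raid_name" | jq '.[0].block_size')
    local name info
    # Iterate over the base bdevs marked as configured in the raid dump.
    for name in $($rpc bdev_get_bdevs -b "$raid_name" \
            | jq -r '.[0].driver_specific.raid.base_bdevs_list[]
                     | select(.is_configured == true).name'); do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[0]')
        [[ $(jq .block_size <<< "$info") == "$raid_bs" ]] || return 1
        [[ $(jq .md_size <<< "$info") == null ]] || return 1
        [[ $(jq .md_interleave <<< "$info") == null ]] || return 1
        [[ $(jq .dif_type <<< "$info") == null ]] || return 1
    done
}

# e.g. check_base_bdev_properties Existed_Raid
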
00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:14.150 "name": "BaseBdev2", 00:10:14.150 "aliases": [ 00:10:14.150 "322b9102-405f-11ef-b2a4-e9dca065e82e" 00:10:14.150 ], 00:10:14.150 "product_name": "Malloc disk", 00:10:14.150 "block_size": 512, 00:10:14.150 "num_blocks": 65536, 00:10:14.150 "uuid": "322b9102-405f-11ef-b2a4-e9dca065e82e", 00:10:14.150 "assigned_rate_limits": { 00:10:14.150 "rw_ios_per_sec": 0, 00:10:14.150 "rw_mbytes_per_sec": 0, 00:10:14.150 "r_mbytes_per_sec": 0, 00:10:14.150 "w_mbytes_per_sec": 0 00:10:14.150 }, 00:10:14.150 "claimed": true, 00:10:14.150 "claim_type": "exclusive_write", 00:10:14.150 "zoned": false, 00:10:14.150 "supported_io_types": { 00:10:14.150 "read": true, 00:10:14.150 "write": true, 00:10:14.150 "unmap": true, 00:10:14.150 "flush": true, 00:10:14.150 "reset": true, 00:10:14.150 "nvme_admin": false, 00:10:14.150 "nvme_io": false, 00:10:14.150 "nvme_io_md": false, 00:10:14.150 "write_zeroes": true, 00:10:14.150 "zcopy": true, 00:10:14.150 "get_zone_info": false, 00:10:14.150 "zone_management": false, 00:10:14.150 "zone_append": false, 00:10:14.150 "compare": false, 00:10:14.150 "compare_and_write": false, 00:10:14.150 "abort": true, 00:10:14.150 "seek_hole": false, 00:10:14.150 "seek_data": false, 00:10:14.150 "copy": true, 00:10:14.150 "nvme_iov_md": false 00:10:14.150 }, 00:10:14.150 "memory_domains": [ 00:10:14.150 { 00:10:14.150 "dma_device_id": "system", 00:10:14.150 "dma_device_type": 1 00:10:14.150 }, 00:10:14.150 { 00:10:14.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.150 "dma_device_type": 2 00:10:14.150 } 00:10:14.150 ], 00:10:14.150 "driver_specific": {} 00:10:14.150 }' 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.150 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.408 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.408 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.408 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.408 14:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:14.408 14:58:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:14.408 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:14.667 "name": "BaseBdev3", 00:10:14.667 "aliases": [ 00:10:14.667 "32ac5cf1-405f-11ef-b2a4-e9dca065e82e" 00:10:14.667 ], 00:10:14.667 "product_name": "Malloc disk", 00:10:14.667 "block_size": 512, 00:10:14.667 "num_blocks": 65536, 00:10:14.667 "uuid": "32ac5cf1-405f-11ef-b2a4-e9dca065e82e", 00:10:14.667 "assigned_rate_limits": { 00:10:14.667 "rw_ios_per_sec": 0, 00:10:14.667 "rw_mbytes_per_sec": 0, 00:10:14.667 "r_mbytes_per_sec": 0, 00:10:14.667 "w_mbytes_per_sec": 0 00:10:14.667 }, 00:10:14.667 "claimed": true, 00:10:14.667 "claim_type": "exclusive_write", 00:10:14.667 "zoned": false, 00:10:14.667 "supported_io_types": { 00:10:14.667 "read": true, 00:10:14.667 "write": true, 00:10:14.667 "unmap": true, 00:10:14.667 "flush": true, 00:10:14.667 "reset": true, 00:10:14.667 "nvme_admin": false, 00:10:14.667 "nvme_io": false, 00:10:14.667 "nvme_io_md": false, 00:10:14.667 "write_zeroes": true, 00:10:14.667 "zcopy": true, 00:10:14.667 "get_zone_info": false, 00:10:14.667 "zone_management": false, 00:10:14.667 "zone_append": false, 00:10:14.667 "compare": false, 00:10:14.667 "compare_and_write": false, 00:10:14.667 "abort": true, 00:10:14.667 "seek_hole": false, 00:10:14.667 "seek_data": false, 00:10:14.667 "copy": true, 00:10:14.667 "nvme_iov_md": false 00:10:14.667 }, 00:10:14.667 "memory_domains": [ 00:10:14.667 { 00:10:14.667 "dma_device_id": "system", 00:10:14.667 "dma_device_type": 1 00:10:14.667 }, 00:10:14.667 { 00:10:14.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.667 "dma_device_type": 2 00:10:14.667 } 00:10:14.667 ], 00:10:14.667 "driver_specific": {} 00:10:14.667 }' 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.667 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:14.925 [2024-07-12 14:58:40.513933] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:14.926 [2024-07-12 14:58:40.513955] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.926 [2024-07-12 14:58:40.513977] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.926 [2024-07-12 14:58:40.513990] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.926 [2024-07-12 14:58:40.513994] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bd24434a00 name Existed_Raid, state offline 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52688 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52688 ']' 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52688 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52688 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:14.926 killing process with pid 52688 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52688' 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52688 00:10:14.926 [2024-07-12 14:58:40.538210] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52688 00:10:14.926 [2024-07-12 14:58:40.555197] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:14.926 00:10:14.926 real 0m24.224s 00:10:14.926 user 0m44.364s 00:10:14.926 sys 0m3.240s 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.926 14:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.926 ************************************ 00:10:14.926 END TEST raid_state_function_test_sb 00:10:14.926 ************************************ 00:10:15.184 14:58:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:15.184 14:58:40 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:15.184 14:58:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:15.184 14:58:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.184 14:58:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.184 ************************************ 00:10:15.184 START TEST raid_superblock_test 00:10:15.184 ************************************ 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53416 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53416 /var/tmp/spdk-raid.sock 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53416 ']' 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.184 14:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.184 [2024-07-12 14:58:40.781709] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:10:15.184 [2024-07-12 14:58:40.781924] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:15.751 EAL: TSC is not safe to use in SMP mode 00:10:15.751 EAL: TSC is not invariant 00:10:15.751 [2024-07-12 14:58:41.332841] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.751 [2024-07-12 14:58:41.415001] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
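The surrounding trace is the superblock test's fixture setup: the standalone bdev_svc app has just started on its own RPC socket, and next each base device is built as a 32 MiB malloc bdev (65536 blocks of 512 bytes) wrapped in a passthru bdev carrying a fixed UUID, after which the three passthru bdevs are assembled into a raid0 volume with a 64 KiB strip size and an on-disk superblock. Condensed into a sketch from the traced rpc.py calls (the loop and the rpc variable are illustrative, not the literal bdev_raid.sh code):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Three base bdevs: a malloc disk wrapped in a passthru bdev with a fixed,
# predictable UUID (00000000-...-0001 through -0003).
for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done

# -z 64 selects the 64 KiB strip size; -s writes a superblock to each base bdev.
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
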
00:10:15.751 [2024-07-12 14:58:41.417091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.751 [2024-07-12 14:58:41.417884] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.751 [2024-07-12 14:58:41.417899] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.010 14:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:16.268 malloc1 00:10:16.527 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.527 [2024-07-12 14:58:42.345501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.527 [2024-07-12 14:58:42.345569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.527 [2024-07-12 14:58:42.345598] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a34780 00:10:16.527 [2024-07-12 14:58:42.345606] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.527 [2024-07-12 14:58:42.346508] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.527 [2024-07-12 14:58:42.346537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.527 pt1 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.785 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.785 14:58:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:17.043 malloc2 00:10:17.043 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.043 [2024-07-12 14:58:42.849495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.043 [2024-07-12 14:58:42.849557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.043 [2024-07-12 14:58:42.849569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a34c80 00:10:17.043 [2024-07-12 14:58:42.849578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.043 [2024-07-12 14:58:42.850224] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.043 [2024-07-12 14:58:42.850258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.043 pt2 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.044 14:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:17.609 malloc3 00:10:17.609 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.867 [2024-07-12 14:58:43.453492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.867 [2024-07-12 14:58:43.453542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.867 [2024-07-12 14:58:43.453554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a35180 00:10:17.867 [2024-07-12 14:58:43.453562] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.867 [2024-07-12 14:58:43.454200] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.867 [2024-07-12 14:58:43.454237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.867 pt3 00:10:17.867 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:17.867 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:17.867 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:17.867 [2024-07-12 14:58:43.685492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.867 [2024-07-12 14:58:43.686077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.867 [2024-07-12 14:58:43.686100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.867 [2024-07-12 14:58:43.686151] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25b543a35400 00:10:17.867 [2024-07-12 14:58:43.686157] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:17.867 [2024-07-12 14:58:43.686189] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b543a97e20 00:10:17.867 [2024-07-12 14:58:43.686262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25b543a35400 00:10:17.867 [2024-07-12 14:58:43.686267] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25b543a35400 00:10:17.867 [2024-07-12 14:58:43.686293] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.125 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.383 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.383 "name": "raid_bdev1", 00:10:18.383 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:18.383 "strip_size_kb": 64, 00:10:18.383 "state": "online", 00:10:18.383 "raid_level": "raid0", 00:10:18.383 "superblock": true, 00:10:18.383 "num_base_bdevs": 3, 00:10:18.383 "num_base_bdevs_discovered": 3, 00:10:18.383 "num_base_bdevs_operational": 3, 00:10:18.383 "base_bdevs_list": [ 00:10:18.383 { 00:10:18.383 "name": "pt1", 00:10:18.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.383 "is_configured": true, 00:10:18.383 "data_offset": 2048, 00:10:18.383 "data_size": 63488 00:10:18.383 }, 00:10:18.383 { 00:10:18.383 "name": "pt2", 00:10:18.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.383 "is_configured": true, 00:10:18.383 
"data_offset": 2048, 00:10:18.383 "data_size": 63488 00:10:18.383 }, 00:10:18.383 { 00:10:18.383 "name": "pt3", 00:10:18.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.383 "is_configured": true, 00:10:18.383 "data_offset": 2048, 00:10:18.383 "data_size": 63488 00:10:18.383 } 00:10:18.383 ] 00:10:18.383 }' 00:10:18.383 14:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.383 14:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:18.640 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:18.897 [2024-07-12 14:58:44.561534] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:18.897 "name": "raid_bdev1", 00:10:18.897 "aliases": [ 00:10:18.897 "3b8bac32-405f-11ef-b2a4-e9dca065e82e" 00:10:18.897 ], 00:10:18.897 "product_name": "Raid Volume", 00:10:18.897 "block_size": 512, 00:10:18.897 "num_blocks": 190464, 00:10:18.897 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:18.897 "assigned_rate_limits": { 00:10:18.897 "rw_ios_per_sec": 0, 00:10:18.897 "rw_mbytes_per_sec": 0, 00:10:18.897 "r_mbytes_per_sec": 0, 00:10:18.897 "w_mbytes_per_sec": 0 00:10:18.897 }, 00:10:18.897 "claimed": false, 00:10:18.897 "zoned": false, 00:10:18.897 "supported_io_types": { 00:10:18.897 "read": true, 00:10:18.897 "write": true, 00:10:18.897 "unmap": true, 00:10:18.897 "flush": true, 00:10:18.897 "reset": true, 00:10:18.897 "nvme_admin": false, 00:10:18.897 "nvme_io": false, 00:10:18.897 "nvme_io_md": false, 00:10:18.897 "write_zeroes": true, 00:10:18.897 "zcopy": false, 00:10:18.897 "get_zone_info": false, 00:10:18.897 "zone_management": false, 00:10:18.897 "zone_append": false, 00:10:18.897 "compare": false, 00:10:18.897 "compare_and_write": false, 00:10:18.897 "abort": false, 00:10:18.897 "seek_hole": false, 00:10:18.897 "seek_data": false, 00:10:18.897 "copy": false, 00:10:18.897 "nvme_iov_md": false 00:10:18.897 }, 00:10:18.897 "memory_domains": [ 00:10:18.897 { 00:10:18.897 "dma_device_id": "system", 00:10:18.897 "dma_device_type": 1 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.897 "dma_device_type": 2 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "dma_device_id": "system", 00:10:18.897 "dma_device_type": 1 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.897 "dma_device_type": 2 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "dma_device_id": "system", 00:10:18.897 "dma_device_type": 1 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:18.897 "dma_device_type": 2 00:10:18.897 } 00:10:18.897 ], 00:10:18.897 "driver_specific": { 00:10:18.897 "raid": { 00:10:18.897 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:18.897 "strip_size_kb": 64, 00:10:18.897 "state": "online", 00:10:18.897 "raid_level": "raid0", 00:10:18.897 "superblock": true, 00:10:18.897 "num_base_bdevs": 3, 00:10:18.897 "num_base_bdevs_discovered": 3, 00:10:18.897 "num_base_bdevs_operational": 3, 00:10:18.897 "base_bdevs_list": [ 00:10:18.897 { 00:10:18.897 "name": "pt1", 00:10:18.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.897 "is_configured": true, 00:10:18.897 "data_offset": 2048, 00:10:18.897 "data_size": 63488 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "name": "pt2", 00:10:18.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.897 "is_configured": true, 00:10:18.897 "data_offset": 2048, 00:10:18.897 "data_size": 63488 00:10:18.897 }, 00:10:18.897 { 00:10:18.897 "name": "pt3", 00:10:18.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.897 "is_configured": true, 00:10:18.897 "data_offset": 2048, 00:10:18.897 "data_size": 63488 00:10:18.897 } 00:10:18.897 ] 00:10:18.897 } 00:10:18.897 } 00:10:18.897 }' 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:18.897 pt2 00:10:18.897 pt3' 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:18.897 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:19.154 "name": "pt1", 00:10:19.154 "aliases": [ 00:10:19.154 "00000000-0000-0000-0000-000000000001" 00:10:19.154 ], 00:10:19.154 "product_name": "passthru", 00:10:19.154 "block_size": 512, 00:10:19.154 "num_blocks": 65536, 00:10:19.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.154 "assigned_rate_limits": { 00:10:19.154 "rw_ios_per_sec": 0, 00:10:19.154 "rw_mbytes_per_sec": 0, 00:10:19.154 "r_mbytes_per_sec": 0, 00:10:19.154 "w_mbytes_per_sec": 0 00:10:19.154 }, 00:10:19.154 "claimed": true, 00:10:19.154 "claim_type": "exclusive_write", 00:10:19.154 "zoned": false, 00:10:19.154 "supported_io_types": { 00:10:19.154 "read": true, 00:10:19.154 "write": true, 00:10:19.154 "unmap": true, 00:10:19.154 "flush": true, 00:10:19.154 "reset": true, 00:10:19.154 "nvme_admin": false, 00:10:19.154 "nvme_io": false, 00:10:19.154 "nvme_io_md": false, 00:10:19.154 "write_zeroes": true, 00:10:19.154 "zcopy": true, 00:10:19.154 "get_zone_info": false, 00:10:19.154 "zone_management": false, 00:10:19.154 "zone_append": false, 00:10:19.154 "compare": false, 00:10:19.154 "compare_and_write": false, 00:10:19.154 "abort": true, 00:10:19.154 "seek_hole": false, 00:10:19.154 "seek_data": false, 00:10:19.154 "copy": true, 00:10:19.154 "nvme_iov_md": false 00:10:19.154 }, 00:10:19.154 "memory_domains": [ 00:10:19.154 { 00:10:19.154 "dma_device_id": "system", 00:10:19.154 "dma_device_type": 1 00:10:19.154 }, 00:10:19.154 { 00:10:19.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.154 "dma_device_type": 2 
00:10:19.154 } 00:10:19.154 ], 00:10:19.154 "driver_specific": { 00:10:19.154 "passthru": { 00:10:19.154 "name": "pt1", 00:10:19.154 "base_bdev_name": "malloc1" 00:10:19.154 } 00:10:19.154 } 00:10:19.154 }' 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:19.154 14:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:19.411 "name": "pt2", 00:10:19.411 "aliases": [ 00:10:19.411 "00000000-0000-0000-0000-000000000002" 00:10:19.411 ], 00:10:19.411 "product_name": "passthru", 00:10:19.411 "block_size": 512, 00:10:19.411 "num_blocks": 65536, 00:10:19.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.411 "assigned_rate_limits": { 00:10:19.411 "rw_ios_per_sec": 0, 00:10:19.411 "rw_mbytes_per_sec": 0, 00:10:19.411 "r_mbytes_per_sec": 0, 00:10:19.411 "w_mbytes_per_sec": 0 00:10:19.411 }, 00:10:19.411 "claimed": true, 00:10:19.411 "claim_type": "exclusive_write", 00:10:19.411 "zoned": false, 00:10:19.411 "supported_io_types": { 00:10:19.411 "read": true, 00:10:19.411 "write": true, 00:10:19.411 "unmap": true, 00:10:19.411 "flush": true, 00:10:19.411 "reset": true, 00:10:19.411 "nvme_admin": false, 00:10:19.411 "nvme_io": false, 00:10:19.411 "nvme_io_md": false, 00:10:19.411 "write_zeroes": true, 00:10:19.411 "zcopy": true, 00:10:19.411 "get_zone_info": false, 00:10:19.411 "zone_management": false, 00:10:19.411 "zone_append": false, 00:10:19.411 "compare": false, 00:10:19.411 "compare_and_write": false, 00:10:19.411 "abort": true, 00:10:19.411 "seek_hole": false, 00:10:19.411 "seek_data": false, 00:10:19.411 "copy": true, 00:10:19.411 "nvme_iov_md": false 00:10:19.411 }, 00:10:19.411 "memory_domains": [ 00:10:19.411 { 00:10:19.411 "dma_device_id": "system", 00:10:19.411 "dma_device_type": 1 00:10:19.411 }, 00:10:19.411 { 00:10:19.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.411 "dma_device_type": 2 00:10:19.411 } 00:10:19.411 ], 00:10:19.411 "driver_specific": { 00:10:19.411 "passthru": { 00:10:19.411 "name": "pt2", 00:10:19.411 "base_bdev_name": 
"malloc2" 00:10:19.411 } 00:10:19.411 } 00:10:19.411 }' 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:19.411 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:19.747 "name": "pt3", 00:10:19.747 "aliases": [ 00:10:19.747 "00000000-0000-0000-0000-000000000003" 00:10:19.747 ], 00:10:19.747 "product_name": "passthru", 00:10:19.747 "block_size": 512, 00:10:19.747 "num_blocks": 65536, 00:10:19.747 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.747 "assigned_rate_limits": { 00:10:19.747 "rw_ios_per_sec": 0, 00:10:19.747 "rw_mbytes_per_sec": 0, 00:10:19.747 "r_mbytes_per_sec": 0, 00:10:19.747 "w_mbytes_per_sec": 0 00:10:19.747 }, 00:10:19.747 "claimed": true, 00:10:19.747 "claim_type": "exclusive_write", 00:10:19.747 "zoned": false, 00:10:19.747 "supported_io_types": { 00:10:19.747 "read": true, 00:10:19.747 "write": true, 00:10:19.747 "unmap": true, 00:10:19.747 "flush": true, 00:10:19.747 "reset": true, 00:10:19.747 "nvme_admin": false, 00:10:19.747 "nvme_io": false, 00:10:19.747 "nvme_io_md": false, 00:10:19.747 "write_zeroes": true, 00:10:19.747 "zcopy": true, 00:10:19.747 "get_zone_info": false, 00:10:19.747 "zone_management": false, 00:10:19.747 "zone_append": false, 00:10:19.747 "compare": false, 00:10:19.747 "compare_and_write": false, 00:10:19.747 "abort": true, 00:10:19.747 "seek_hole": false, 00:10:19.747 "seek_data": false, 00:10:19.747 "copy": true, 00:10:19.747 "nvme_iov_md": false 00:10:19.747 }, 00:10:19.747 "memory_domains": [ 00:10:19.747 { 00:10:19.747 "dma_device_id": "system", 00:10:19.747 "dma_device_type": 1 00:10:19.747 }, 00:10:19.747 { 00:10:19.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.747 "dma_device_type": 2 00:10:19.747 } 00:10:19.747 ], 00:10:19.747 "driver_specific": { 00:10:19.747 "passthru": { 00:10:19.747 "name": "pt3", 00:10:19.747 "base_bdev_name": "malloc3" 00:10:19.747 } 00:10:19.747 } 00:10:19.747 }' 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:19.747 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:19.748 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:20.006 [2024-07-12 14:58:45.785525] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.006 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3b8bac32-405f-11ef-b2a4-e9dca065e82e 00:10:20.006 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3b8bac32-405f-11ef-b2a4-e9dca065e82e ']' 00:10:20.006 14:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:20.264 [2024-07-12 14:58:46.077468] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.264 [2024-07-12 14:58:46.077492] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.264 [2024-07-12 14:58:46.077514] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.264 [2024-07-12 14:58:46.077528] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.264 [2024-07-12 14:58:46.077532] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b543a35400 name raid_bdev1, state offline 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.522 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:21.088 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.088 14:58:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:21.088 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.088 14:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:21.346 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:21.346 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:21.605 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:21.863 [2024-07-12 14:58:47.601469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.863 [2024-07-12 14:58:47.602024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.863 [2024-07-12 14:58:47.602042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.863 [2024-07-12 14:58:47.602057] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.863 [2024-07-12 14:58:47.602100] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.863 [2024-07-12 14:58:47.602112] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:10:21.863 [2024-07-12 14:58:47.602120] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.863 [2024-07-12 14:58:47.602124] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b543a35180 name raid_bdev1, state configuring 00:10:21.863 request: 00:10:21.863 { 00:10:21.863 "name": "raid_bdev1", 00:10:21.863 "raid_level": "raid0", 00:10:21.863 "base_bdevs": [ 00:10:21.863 "malloc1", 00:10:21.863 "malloc2", 00:10:21.863 "malloc3" 00:10:21.863 ], 00:10:21.863 "strip_size_kb": 64, 00:10:21.863 "superblock": false, 00:10:21.863 "method": "bdev_raid_create", 00:10:21.863 "req_id": 1 00:10:21.863 } 00:10:21.863 Got JSON-RPC error response 00:10:21.863 response: 00:10:21.863 { 00:10:21.863 "code": -17, 00:10:21.863 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.863 } 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.863 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:22.121 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:22.121 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:22.121 14:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.380 [2024-07-12 14:58:48.089454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.380 [2024-07-12 14:58:48.089523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.380 [2024-07-12 14:58:48.089535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a34c80 00:10:22.380 [2024-07-12 14:58:48.089543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.380 [2024-07-12 14:58:48.090188] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.380 [2024-07-12 14:58:48.090213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.380 [2024-07-12 14:58:48.090237] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:22.380 [2024-07-12 14:58:48.090248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.380 pt1 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.380 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.639 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.639 "name": "raid_bdev1", 00:10:22.639 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:22.639 "strip_size_kb": 64, 00:10:22.639 "state": "configuring", 00:10:22.639 "raid_level": "raid0", 00:10:22.639 "superblock": true, 00:10:22.639 "num_base_bdevs": 3, 00:10:22.639 "num_base_bdevs_discovered": 1, 00:10:22.639 "num_base_bdevs_operational": 3, 00:10:22.639 "base_bdevs_list": [ 00:10:22.639 { 00:10:22.639 "name": "pt1", 00:10:22.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.639 "is_configured": true, 00:10:22.639 "data_offset": 2048, 00:10:22.639 "data_size": 63488 00:10:22.639 }, 00:10:22.639 { 00:10:22.639 "name": null, 00:10:22.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.639 "is_configured": false, 00:10:22.639 "data_offset": 2048, 00:10:22.639 "data_size": 63488 00:10:22.639 }, 00:10:22.639 { 00:10:22.639 "name": null, 00:10:22.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.639 "is_configured": false, 00:10:22.639 "data_offset": 2048, 00:10:22.639 "data_size": 63488 00:10:22.639 } 00:10:22.639 ] 00:10:22.639 }' 00:10:22.639 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.639 14:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.207 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:10:23.207 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.207 [2024-07-12 14:58:48.969457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.207 [2024-07-12 14:58:48.969521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.207 [2024-07-12 14:58:48.969533] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a35680 00:10:23.207 [2024-07-12 14:58:48.969541] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.207 [2024-07-12 14:58:48.969650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.207 [2024-07-12 14:58:48.969661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.207 [2024-07-12 14:58:48.969683] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:23.207 [2024-07-12 14:58:48.969692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.207 
pt2 00:10:23.207 14:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:23.465 [2024-07-12 14:58:49.233458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:23.465 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:23.465 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:23.465 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:23.465 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:23.465 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.466 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.724 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.724 "name": "raid_bdev1", 00:10:23.724 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:23.724 "strip_size_kb": 64, 00:10:23.724 "state": "configuring", 00:10:23.724 "raid_level": "raid0", 00:10:23.724 "superblock": true, 00:10:23.724 "num_base_bdevs": 3, 00:10:23.724 "num_base_bdevs_discovered": 1, 00:10:23.724 "num_base_bdevs_operational": 3, 00:10:23.724 "base_bdevs_list": [ 00:10:23.724 { 00:10:23.724 "name": "pt1", 00:10:23.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.724 "is_configured": true, 00:10:23.724 "data_offset": 2048, 00:10:23.724 "data_size": 63488 00:10:23.724 }, 00:10:23.724 { 00:10:23.724 "name": null, 00:10:23.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.724 "is_configured": false, 00:10:23.724 "data_offset": 2048, 00:10:23.724 "data_size": 63488 00:10:23.724 }, 00:10:23.724 { 00:10:23.724 "name": null, 00:10:23.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.724 "is_configured": false, 00:10:23.724 "data_offset": 2048, 00:10:23.724 "data_size": 63488 00:10:23.724 } 00:10:23.724 ] 00:10:23.724 }' 00:10:23.724 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.724 14:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:24.292 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:24.293 14:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.293 [2024-07-12 
14:58:50.085467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.293 [2024-07-12 14:58:50.085538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.293 [2024-07-12 14:58:50.085551] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a35680 00:10:24.293 [2024-07-12 14:58:50.085559] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.293 [2024-07-12 14:58:50.085668] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.293 [2024-07-12 14:58:50.085679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.293 [2024-07-12 14:58:50.085702] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.293 [2024-07-12 14:58:50.085711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.293 pt2 00:10:24.293 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:24.293 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:24.293 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.551 [2024-07-12 14:58:50.357443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.551 [2024-07-12 14:58:50.357513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.551 [2024-07-12 14:58:50.357525] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b543a35400 00:10:24.551 [2024-07-12 14:58:50.357533] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.551 [2024-07-12 14:58:50.357647] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.551 [2024-07-12 14:58:50.357658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.551 [2024-07-12 14:58:50.357684] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.551 [2024-07-12 14:58:50.357693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.551 [2024-07-12 14:58:50.357719] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25b543a34780 00:10:24.551 [2024-07-12 14:58:50.357723] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.551 [2024-07-12 14:58:50.357744] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b543a97e20 00:10:24.551 [2024-07-12 14:58:50.357797] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25b543a34780 00:10:24.551 [2024-07-12 14:58:50.357801] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25b543a34780 00:10:24.551 [2024-07-12 14:58:50.357823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.551 pt3 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.551 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.119 "name": "raid_bdev1", 00:10:25.119 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:25.119 "strip_size_kb": 64, 00:10:25.119 "state": "online", 00:10:25.119 "raid_level": "raid0", 00:10:25.119 "superblock": true, 00:10:25.119 "num_base_bdevs": 3, 00:10:25.119 "num_base_bdevs_discovered": 3, 00:10:25.119 "num_base_bdevs_operational": 3, 00:10:25.119 "base_bdevs_list": [ 00:10:25.119 { 00:10:25.119 "name": "pt1", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "pt2", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "pt3", 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 } 00:10:25.119 ] 00:10:25.119 }' 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:25.119 14:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:25.687 [2024-07-12 
14:58:51.213500] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:25.687 "name": "raid_bdev1", 00:10:25.687 "aliases": [ 00:10:25.687 "3b8bac32-405f-11ef-b2a4-e9dca065e82e" 00:10:25.687 ], 00:10:25.687 "product_name": "Raid Volume", 00:10:25.687 "block_size": 512, 00:10:25.687 "num_blocks": 190464, 00:10:25.687 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:25.687 "assigned_rate_limits": { 00:10:25.687 "rw_ios_per_sec": 0, 00:10:25.687 "rw_mbytes_per_sec": 0, 00:10:25.687 "r_mbytes_per_sec": 0, 00:10:25.687 "w_mbytes_per_sec": 0 00:10:25.687 }, 00:10:25.687 "claimed": false, 00:10:25.687 "zoned": false, 00:10:25.687 "supported_io_types": { 00:10:25.687 "read": true, 00:10:25.687 "write": true, 00:10:25.687 "unmap": true, 00:10:25.687 "flush": true, 00:10:25.687 "reset": true, 00:10:25.687 "nvme_admin": false, 00:10:25.687 "nvme_io": false, 00:10:25.687 "nvme_io_md": false, 00:10:25.687 "write_zeroes": true, 00:10:25.687 "zcopy": false, 00:10:25.687 "get_zone_info": false, 00:10:25.687 "zone_management": false, 00:10:25.687 "zone_append": false, 00:10:25.687 "compare": false, 00:10:25.687 "compare_and_write": false, 00:10:25.687 "abort": false, 00:10:25.687 "seek_hole": false, 00:10:25.687 "seek_data": false, 00:10:25.687 "copy": false, 00:10:25.687 "nvme_iov_md": false 00:10:25.687 }, 00:10:25.687 "memory_domains": [ 00:10:25.687 { 00:10:25.687 "dma_device_id": "system", 00:10:25.687 "dma_device_type": 1 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.687 "dma_device_type": 2 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "system", 00:10:25.687 "dma_device_type": 1 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.687 "dma_device_type": 2 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "system", 00:10:25.687 "dma_device_type": 1 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.687 "dma_device_type": 2 00:10:25.687 } 00:10:25.687 ], 00:10:25.687 "driver_specific": { 00:10:25.687 "raid": { 00:10:25.687 "uuid": "3b8bac32-405f-11ef-b2a4-e9dca065e82e", 00:10:25.687 "strip_size_kb": 64, 00:10:25.687 "state": "online", 00:10:25.687 "raid_level": "raid0", 00:10:25.687 "superblock": true, 00:10:25.687 "num_base_bdevs": 3, 00:10:25.687 "num_base_bdevs_discovered": 3, 00:10:25.687 "num_base_bdevs_operational": 3, 00:10:25.687 "base_bdevs_list": [ 00:10:25.687 { 00:10:25.687 "name": "pt1", 00:10:25.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.687 "is_configured": true, 00:10:25.687 "data_offset": 2048, 00:10:25.687 "data_size": 63488 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "name": "pt2", 00:10:25.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.687 "is_configured": true, 00:10:25.687 "data_offset": 2048, 00:10:25.687 "data_size": 63488 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "name": "pt3", 00:10:25.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.687 "is_configured": true, 00:10:25.687 "data_offset": 2048, 00:10:25.687 "data_size": 63488 00:10:25.687 } 00:10:25.687 ] 00:10:25.687 } 00:10:25.687 } 00:10:25.687 }' 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:25.687 pt2 00:10:25.687 pt3' 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:25.687 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:25.687 "name": "pt1", 00:10:25.687 "aliases": [ 00:10:25.687 "00000000-0000-0000-0000-000000000001" 00:10:25.687 ], 00:10:25.687 "product_name": "passthru", 00:10:25.687 "block_size": 512, 00:10:25.687 "num_blocks": 65536, 00:10:25.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.687 "assigned_rate_limits": { 00:10:25.687 "rw_ios_per_sec": 0, 00:10:25.687 "rw_mbytes_per_sec": 0, 00:10:25.687 "r_mbytes_per_sec": 0, 00:10:25.687 "w_mbytes_per_sec": 0 00:10:25.687 }, 00:10:25.687 "claimed": true, 00:10:25.687 "claim_type": "exclusive_write", 00:10:25.687 "zoned": false, 00:10:25.687 "supported_io_types": { 00:10:25.687 "read": true, 00:10:25.687 "write": true, 00:10:25.687 "unmap": true, 00:10:25.687 "flush": true, 00:10:25.687 "reset": true, 00:10:25.687 "nvme_admin": false, 00:10:25.687 "nvme_io": false, 00:10:25.687 "nvme_io_md": false, 00:10:25.687 "write_zeroes": true, 00:10:25.687 "zcopy": true, 00:10:25.687 "get_zone_info": false, 00:10:25.687 "zone_management": false, 00:10:25.687 "zone_append": false, 00:10:25.687 "compare": false, 00:10:25.687 "compare_and_write": false, 00:10:25.687 "abort": true, 00:10:25.687 "seek_hole": false, 00:10:25.687 "seek_data": false, 00:10:25.687 "copy": true, 00:10:25.687 "nvme_iov_md": false 00:10:25.687 }, 00:10:25.687 "memory_domains": [ 00:10:25.687 { 00:10:25.687 "dma_device_id": "system", 00:10:25.687 "dma_device_type": 1 00:10:25.687 }, 00:10:25.687 { 00:10:25.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.687 "dma_device_type": 2 00:10:25.687 } 00:10:25.687 ], 00:10:25.687 "driver_specific": { 00:10:25.687 "passthru": { 00:10:25.687 "name": "pt1", 00:10:25.687 "base_bdev_name": "malloc1" 00:10:25.687 } 00:10:25.687 } 00:10:25.687 }' 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:25.946 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:26.204 "name": "pt2", 00:10:26.204 "aliases": [ 00:10:26.204 "00000000-0000-0000-0000-000000000002" 00:10:26.204 ], 00:10:26.204 "product_name": "passthru", 00:10:26.204 "block_size": 512, 00:10:26.204 "num_blocks": 65536, 00:10:26.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.204 "assigned_rate_limits": { 00:10:26.204 "rw_ios_per_sec": 0, 00:10:26.204 "rw_mbytes_per_sec": 0, 00:10:26.204 "r_mbytes_per_sec": 0, 00:10:26.204 "w_mbytes_per_sec": 0 00:10:26.204 }, 00:10:26.204 "claimed": true, 00:10:26.204 "claim_type": "exclusive_write", 00:10:26.204 "zoned": false, 00:10:26.204 "supported_io_types": { 00:10:26.204 "read": true, 00:10:26.204 "write": true, 00:10:26.204 "unmap": true, 00:10:26.204 "flush": true, 00:10:26.204 "reset": true, 00:10:26.204 "nvme_admin": false, 00:10:26.204 "nvme_io": false, 00:10:26.204 "nvme_io_md": false, 00:10:26.204 "write_zeroes": true, 00:10:26.204 "zcopy": true, 00:10:26.204 "get_zone_info": false, 00:10:26.204 "zone_management": false, 00:10:26.204 "zone_append": false, 00:10:26.204 "compare": false, 00:10:26.204 "compare_and_write": false, 00:10:26.204 "abort": true, 00:10:26.204 "seek_hole": false, 00:10:26.204 "seek_data": false, 00:10:26.204 "copy": true, 00:10:26.204 "nvme_iov_md": false 00:10:26.204 }, 00:10:26.204 "memory_domains": [ 00:10:26.204 { 00:10:26.204 "dma_device_id": "system", 00:10:26.204 "dma_device_type": 1 00:10:26.204 }, 00:10:26.204 { 00:10:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.204 "dma_device_type": 2 00:10:26.204 } 00:10:26.204 ], 00:10:26.204 "driver_specific": { 00:10:26.204 "passthru": { 00:10:26.204 "name": "pt2", 00:10:26.204 "base_bdev_name": "malloc2" 00:10:26.204 } 00:10:26.204 } 00:10:26.204 }' 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:10:26.204 14:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:26.463 "name": "pt3", 00:10:26.463 "aliases": [ 00:10:26.463 "00000000-0000-0000-0000-000000000003" 00:10:26.463 ], 00:10:26.463 "product_name": "passthru", 00:10:26.463 "block_size": 512, 00:10:26.463 "num_blocks": 65536, 00:10:26.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.463 "assigned_rate_limits": { 00:10:26.463 "rw_ios_per_sec": 0, 00:10:26.463 "rw_mbytes_per_sec": 0, 00:10:26.463 "r_mbytes_per_sec": 0, 00:10:26.463 "w_mbytes_per_sec": 0 00:10:26.463 }, 00:10:26.463 "claimed": true, 00:10:26.463 "claim_type": "exclusive_write", 00:10:26.463 "zoned": false, 00:10:26.463 "supported_io_types": { 00:10:26.463 "read": true, 00:10:26.463 "write": true, 00:10:26.463 "unmap": true, 00:10:26.463 "flush": true, 00:10:26.463 "reset": true, 00:10:26.463 "nvme_admin": false, 00:10:26.463 "nvme_io": false, 00:10:26.463 "nvme_io_md": false, 00:10:26.463 "write_zeroes": true, 00:10:26.463 "zcopy": true, 00:10:26.463 "get_zone_info": false, 00:10:26.463 "zone_management": false, 00:10:26.463 "zone_append": false, 00:10:26.463 "compare": false, 00:10:26.463 "compare_and_write": false, 00:10:26.463 "abort": true, 00:10:26.463 "seek_hole": false, 00:10:26.463 "seek_data": false, 00:10:26.463 "copy": true, 00:10:26.463 "nvme_iov_md": false 00:10:26.463 }, 00:10:26.463 "memory_domains": [ 00:10:26.463 { 00:10:26.463 "dma_device_id": "system", 00:10:26.463 "dma_device_type": 1 00:10:26.463 }, 00:10:26.463 { 00:10:26.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.463 "dma_device_type": 2 00:10:26.463 } 00:10:26.463 ], 00:10:26.463 "driver_specific": { 00:10:26.463 "passthru": { 00:10:26.463 "name": "pt3", 00:10:26.463 "base_bdev_name": "malloc3" 00:10:26.463 } 00:10:26.463 } 00:10:26.463 }' 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.463 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:26.721 [2024-07-12 14:58:52.529493] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3b8bac32-405f-11ef-b2a4-e9dca065e82e '!=' 3b8bac32-405f-11ef-b2a4-e9dca065e82e ']' 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53416 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53416 ']' 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53416 00:10:26.721 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53416 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53416' 00:10:26.978 killing process with pid 53416 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53416 00:10:26.978 [2024-07-12 14:58:52.558333] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.978 [2024-07-12 14:58:52.558360] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.978 [2024-07-12 14:58:52.558374] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.978 [2024-07-12 14:58:52.558378] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b543a34780 name raid_bdev1, state offline 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53416 00:10:26.978 [2024-07-12 14:58:52.575521] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:26.978 00:10:26.978 real 0m11.982s 00:10:26.978 user 0m21.296s 00:10:26.978 sys 0m1.869s 00:10:26.978 ************************************ 00:10:26.978 END TEST raid_superblock_test 00:10:26.978 ************************************ 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.978 14:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.978 14:58:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:26.978 14:58:52 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:26.978 14:58:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:26.978 14:58:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.978 14:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.978 ************************************ 
00:10:26.978 START TEST raid_read_error_test 00:10:26.978 ************************************ 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:26.978 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uDVSRBBbAW 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53767 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53767 /var/tmp/spdk-raid.sock 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53767 ']' 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.234 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.234 14:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.234 [2024-07-12 14:58:52.821353] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:10:27.234 [2024-07-12 14:58:52.821561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:27.799 EAL: TSC is not safe to use in SMP mode 00:10:27.799 EAL: TSC is not invariant 00:10:27.799 [2024-07-12 14:58:53.361038] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.799 [2024-07-12 14:58:53.442458] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:27.799 [2024-07-12 14:58:53.444541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.799 [2024-07-12 14:58:53.445302] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.799 [2024-07-12 14:58:53.445316] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.365 14:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.366 14:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:28.366 14:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:28.366 14:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.624 BaseBdev1_malloc 00:10:28.624 14:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:28.881 true 00:10:28.881 14:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.138 [2024-07-12 14:58:54.744631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.138 [2024-07-12 14:58:54.744693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.138 [2024-07-12 14:58:54.744720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ab5f5034780 00:10:29.138 [2024-07-12 14:58:54.744729] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.138 [2024-07-12 14:58:54.745389] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.138 [2024-07-12 14:58:54.745416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.138 BaseBdev1 00:10:29.138 14:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:29.138 14:58:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.395 BaseBdev2_malloc 00:10:29.395 14:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:29.652 true 00:10:29.652 14:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.910 [2024-07-12 14:58:55.524629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.910 [2024-07-12 14:58:55.524701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.910 [2024-07-12 14:58:55.524741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ab5f5034c80 00:10:29.910 [2024-07-12 14:58:55.524769] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.910 [2024-07-12 14:58:55.525428] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.910 [2024-07-12 14:58:55.525453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.910 BaseBdev2 00:10:29.910 14:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:29.910 14:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.168 BaseBdev3_malloc 00:10:30.168 14:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:30.426 true 00:10:30.426 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.684 [2024-07-12 14:58:56.268663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.684 [2024-07-12 14:58:56.268724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.684 [2024-07-12 14:58:56.268751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ab5f5035180 00:10:30.684 [2024-07-12 14:58:56.268760] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.684 [2024-07-12 14:58:56.269395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.684 [2024-07-12 14:58:56.269419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.684 BaseBdev3 00:10:30.684 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:30.684 [2024-07-12 14:58:56.492683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.684 [2024-07-12 14:58:56.493270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.684 [2024-07-12 14:58:56.493296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.684 
[2024-07-12 14:58:56.493353] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ab5f5035400 00:10:30.684 [2024-07-12 14:58:56.493359] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:30.684 [2024-07-12 14:58:56.493396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ab5f50a0e20 00:10:30.684 [2024-07-12 14:58:56.493469] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ab5f5035400 00:10:30.685 [2024-07-12 14:58:56.493473] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3ab5f5035400 00:10:30.685 [2024-07-12 14:58:56.493500] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.685 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.942 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.943 "name": "raid_bdev1", 00:10:30.943 "uuid": "432de511-405f-11ef-b2a4-e9dca065e82e", 00:10:30.943 "strip_size_kb": 64, 00:10:30.943 "state": "online", 00:10:30.943 "raid_level": "raid0", 00:10:30.943 "superblock": true, 00:10:30.943 "num_base_bdevs": 3, 00:10:30.943 "num_base_bdevs_discovered": 3, 00:10:30.943 "num_base_bdevs_operational": 3, 00:10:30.943 "base_bdevs_list": [ 00:10:30.943 { 00:10:30.943 "name": "BaseBdev1", 00:10:30.943 "uuid": "204a1342-97fd-5859-b628-eb69d4363728", 00:10:30.943 "is_configured": true, 00:10:30.943 "data_offset": 2048, 00:10:30.943 "data_size": 63488 00:10:30.943 }, 00:10:30.943 { 00:10:30.943 "name": "BaseBdev2", 00:10:30.943 "uuid": "fd43ea5f-515c-c050-88af-c519c6412c5f", 00:10:30.943 "is_configured": true, 00:10:30.943 "data_offset": 2048, 00:10:30.943 "data_size": 63488 00:10:30.943 }, 00:10:30.943 { 00:10:30.943 "name": "BaseBdev3", 00:10:30.943 "uuid": "b4b04105-355b-d05a-92f5-9dad77bd2670", 00:10:30.943 "is_configured": true, 00:10:30.943 "data_offset": 2048, 00:10:30.943 "data_size": 63488 00:10:30.943 } 00:10:30.943 ] 00:10:30.943 }' 00:10:30.943 14:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.943 14:58:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.507 14:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:31.508 14:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:31.508 [2024-07-12 14:58:57.184871] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ab5f50a0ec0 00:10:32.445 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.703 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.270 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.270 "name": "raid_bdev1", 00:10:33.270 "uuid": "432de511-405f-11ef-b2a4-e9dca065e82e", 00:10:33.270 "strip_size_kb": 64, 00:10:33.270 "state": "online", 00:10:33.270 "raid_level": "raid0", 00:10:33.270 "superblock": true, 00:10:33.270 "num_base_bdevs": 3, 00:10:33.270 "num_base_bdevs_discovered": 3, 00:10:33.270 "num_base_bdevs_operational": 3, 00:10:33.270 "base_bdevs_list": [ 00:10:33.270 { 00:10:33.270 "name": "BaseBdev1", 00:10:33.270 "uuid": "204a1342-97fd-5859-b628-eb69d4363728", 00:10:33.270 "is_configured": true, 00:10:33.270 "data_offset": 2048, 00:10:33.270 "data_size": 63488 00:10:33.270 }, 00:10:33.270 { 00:10:33.270 "name": "BaseBdev2", 00:10:33.270 "uuid": "fd43ea5f-515c-c050-88af-c519c6412c5f", 00:10:33.270 "is_configured": true, 00:10:33.270 "data_offset": 2048, 00:10:33.270 "data_size": 63488 00:10:33.270 }, 00:10:33.270 { 00:10:33.270 "name": "BaseBdev3", 00:10:33.270 "uuid": "b4b04105-355b-d05a-92f5-9dad77bd2670", 00:10:33.270 "is_configured": true, 00:10:33.270 "data_offset": 2048, 00:10:33.270 "data_size": 63488 
00:10:33.270 } 00:10:33.270 ] 00:10:33.270 }' 00:10:33.270 14:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.270 14:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.529 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:33.786 [2024-07-12 14:58:59.452107] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.786 [2024-07-12 14:58:59.452134] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.786 [2024-07-12 14:58:59.452464] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.786 [2024-07-12 14:58:59.452483] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.786 [2024-07-12 14:58:59.452491] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.786 [2024-07-12 14:58:59.452495] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ab5f5035400 name raid_bdev1, state offline 00:10:33.786 0 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53767 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53767 ']' 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53767 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53767 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:33.786 killing process with pid 53767 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53767' 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53767 00:10:33.786 [2024-07-12 14:58:59.482956] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.786 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53767 00:10:33.786 [2024-07-12 14:58:59.499566] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uDVSRBBbAW 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:10:34.043 00:10:34.043 real 0m6.872s 00:10:34.043 user 0m10.868s 00:10:34.043 sys 0m1.143s 00:10:34.043 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.044 ************************************ 00:10:34.044 END TEST raid_read_error_test 00:10:34.044 ************************************ 00:10:34.044 14:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.044 14:58:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:34.044 14:58:59 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:34.044 14:58:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:34.044 14:58:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.044 14:58:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.044 ************************************ 00:10:34.044 START TEST raid_write_error_test 00:10:34.044 ************************************ 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:34.044 14:58:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TgV1XdgCpJ 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53902 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53902 /var/tmp/spdk-raid.sock 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53902 ']' 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.044 14:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:34.044 [2024-07-12 14:58:59.732324] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:10:34.044 [2024-07-12 14:58:59.732506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:34.608 EAL: TSC is not safe to use in SMP mode 00:10:34.608 EAL: TSC is not invariant 00:10:34.608 [2024-07-12 14:59:00.246283] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.608 [2024-07-12 14:59:00.327235] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
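For readers following the xtrace, raid_write_error_test builds the same three-layer stack per base device as the read test above, over the RPC socket this bdevperf instance serves. Condensed from the commands that appear below (rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path), the sequence for BaseBdev1 is roughly:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc            # 32 MiB RAM-backed bdev, 512-byte blocks (65536 blocks)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc                        # error-injection wrapper, exposed as EE_BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1  # passthru vbdev that the raid later claims

The same three calls repeat for BaseBdev2 and BaseBdev3; the error bdev in the middle is the hook that bdev_error_inject_error later uses to fail a write on BaseBdev1.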
00:10:34.608 [2024-07-12 14:59:00.329413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.608 [2024-07-12 14:59:00.330251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.608 [2024-07-12 14:59:00.330268] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.173 14:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.173 14:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:35.173 14:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:35.173 14:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.430 BaseBdev1_malloc 00:10:35.430 14:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:35.688 true 00:10:35.688 14:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.945 [2024-07-12 14:59:01.605568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.945 [2024-07-12 14:59:01.605655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.945 [2024-07-12 14:59:01.605689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f44f434780 00:10:35.945 [2024-07-12 14:59:01.605703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.945 [2024-07-12 14:59:01.606447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.945 [2024-07-12 14:59:01.606485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.945 BaseBdev1 00:10:35.945 14:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:35.945 14:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.202 BaseBdev2_malloc 00:10:36.202 14:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:36.459 true 00:10:36.459 14:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:36.715 [2024-07-12 14:59:02.301571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:36.715 [2024-07-12 14:59:02.301636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.715 [2024-07-12 14:59:02.301670] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f44f434c80 00:10:36.716 [2024-07-12 14:59:02.301684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.716 [2024-07-12 14:59:02.302436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.716 [2024-07-12 14:59:02.302477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:10:36.716 BaseBdev2 00:10:36.716 14:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:36.716 14:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:36.973 BaseBdev3_malloc 00:10:36.973 14:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:37.242 true 00:10:37.242 14:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:37.242 [2024-07-12 14:59:03.069576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:37.242 [2024-07-12 14:59:03.069642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.242 [2024-07-12 14:59:03.069677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f44f435180 00:10:37.242 [2024-07-12 14:59:03.069699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.499 [2024-07-12 14:59:03.070420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.499 [2024-07-12 14:59:03.070456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:37.499 BaseBdev3 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:37.499 [2024-07-12 14:59:03.301598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.499 [2024-07-12 14:59:03.302227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.499 [2024-07-12 14:59:03.302263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.499 [2024-07-12 14:59:03.302346] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5f44f435400 00:10:37.499 [2024-07-12 14:59:03.302357] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:37.499 [2024-07-12 14:59:03.302412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5f44f4a0e20 00:10:37.499 [2024-07-12 14:59:03.302515] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5f44f435400 00:10:37.499 [2024-07-12 14:59:03.302521] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x5f44f435400 00:10:37.499 [2024-07-12 14:59:03.302561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
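A quick check on the raid geometry reported by raid_bdev_configure_cont just above: bdev_raid_create was called with -z 64 -r raid0 -s over three 65536-block base bdevs, and the -s superblock reserves 2048 blocks per base bdev (the data_offset shown in the JSON dumps below), leaving 63488 data blocks per leg, so the raid0 volume comes out at 3 x 63488 blocks of 512 bytes. As a one-line sanity check of the 'blockcnt 190464, blocklen 512' message:

  echo $(( 3 * (65536 - 2048) ))   # prints 190464, matching the reported blockcnt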
00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.499 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.760 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.760 "name": "raid_bdev1", 00:10:37.760 "uuid": "473cda41-405f-11ef-b2a4-e9dca065e82e", 00:10:37.760 "strip_size_kb": 64, 00:10:37.760 "state": "online", 00:10:37.760 "raid_level": "raid0", 00:10:37.760 "superblock": true, 00:10:37.760 "num_base_bdevs": 3, 00:10:37.760 "num_base_bdevs_discovered": 3, 00:10:37.760 "num_base_bdevs_operational": 3, 00:10:37.760 "base_bdevs_list": [ 00:10:37.760 { 00:10:37.760 "name": "BaseBdev1", 00:10:37.760 "uuid": "08731192-182d-c953-b6ab-e5b789f0cc77", 00:10:37.760 "is_configured": true, 00:10:37.760 "data_offset": 2048, 00:10:37.760 "data_size": 63488 00:10:37.760 }, 00:10:37.760 { 00:10:37.760 "name": "BaseBdev2", 00:10:37.760 "uuid": "a4de7e67-c02f-4755-b711-e717f524b378", 00:10:37.760 "is_configured": true, 00:10:37.760 "data_offset": 2048, 00:10:37.760 "data_size": 63488 00:10:37.760 }, 00:10:37.760 { 00:10:37.760 "name": "BaseBdev3", 00:10:37.760 "uuid": "5657407c-b71a-2654-bafc-6f24904c7c69", 00:10:37.760 "is_configured": true, 00:10:37.760 "data_offset": 2048, 00:10:37.760 "data_size": 63488 00:10:37.760 } 00:10:37.760 ] 00:10:37.760 }' 00:10:37.760 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.760 14:59:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.018 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:38.018 14:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:38.276 [2024-07-12 14:59:03.969774] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5f44f4a0ec0 00:10:39.209 14:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.468 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.726 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.726 "name": "raid_bdev1", 00:10:39.726 "uuid": "473cda41-405f-11ef-b2a4-e9dca065e82e", 00:10:39.726 "strip_size_kb": 64, 00:10:39.726 "state": "online", 00:10:39.726 "raid_level": "raid0", 00:10:39.726 "superblock": true, 00:10:39.726 "num_base_bdevs": 3, 00:10:39.726 "num_base_bdevs_discovered": 3, 00:10:39.726 "num_base_bdevs_operational": 3, 00:10:39.726 "base_bdevs_list": [ 00:10:39.726 { 00:10:39.726 "name": "BaseBdev1", 00:10:39.726 "uuid": "08731192-182d-c953-b6ab-e5b789f0cc77", 00:10:39.726 "is_configured": true, 00:10:39.726 "data_offset": 2048, 00:10:39.726 "data_size": 63488 00:10:39.726 }, 00:10:39.726 { 00:10:39.726 "name": "BaseBdev2", 00:10:39.726 "uuid": "a4de7e67-c02f-4755-b711-e717f524b378", 00:10:39.726 "is_configured": true, 00:10:39.726 "data_offset": 2048, 00:10:39.726 "data_size": 63488 00:10:39.727 }, 00:10:39.727 { 00:10:39.727 "name": "BaseBdev3", 00:10:39.727 "uuid": "5657407c-b71a-2654-bafc-6f24904c7c69", 00:10:39.727 "is_configured": true, 00:10:39.727 "data_offset": 2048, 00:10:39.727 "data_size": 63488 00:10:39.727 } 00:10:39.727 ] 00:10:39.727 }' 00:10:39.727 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.727 14:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.984 14:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:40.242 [2024-07-12 14:59:06.063719] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.242 [2024-07-12 14:59:06.063749] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.242 [2024-07-12 14:59:06.064105] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.242 [2024-07-12 14:59:06.064123] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.242 [2024-07-12 14:59:06.064132] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.242 [2024-07-12 14:59:06.064136] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5f44f435400 name raid_bdev1, state offline 00:10:40.242 0 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53902 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53902 ']' 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53902 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53902 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:40.500 killing process with pid 53902 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53902' 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53902 00:10:40.500 [2024-07-12 14:59:06.091555] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53902 00:10:40.500 [2024-07-12 14:59:06.109070] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TgV1XdgCpJ 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:40.500 14:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:10:40.500 ************************************ 00:10:40.500 END TEST raid_write_error_test 00:10:40.500 ************************************ 00:10:40.500 00:10:40.500 real 0m6.569s 00:10:40.501 user 0m10.340s 00:10:40.501 sys 0m1.113s 00:10:40.501 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.501 14:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.501 14:59:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:40.501 14:59:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:40.501 14:59:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:40.501 14:59:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:40.501 14:59:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.501 14:59:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.758 ************************************ 00:10:40.758 START TEST raid_state_function_test 00:10:40.758 ************************************ 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54031 00:10:40.758 Process raid pid: 54031 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54031' 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54031 /var/tmp/spdk-raid.sock 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 54031 ']' 
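Unlike the two error tests above, raid_state_function_test drives a bare bdev_svc app (started here as pid 54031) rather than bdevperf, and it exercises the configuring-state handling of the raid module: the concat raid is created before any of its base bdevs exist, verified to sit in the "configuring" state with zero discovered base bdevs, and the malloc base bdevs are then added one at a time (with the raid deleted and re-created along the way) while the discovered count is re-checked. Condensed from the RPCs visible below (rpc.py again abbreviating the full scripts/rpc.py path), the opening create-and-inspect step is roughly:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

Note there is no -s flag here; superblock is false for this test, which is why the Existed_Raid dumps below report "superblock": false and data_offset 0 on the base bdev entries.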
00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.758 14:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.758 [2024-07-12 14:59:06.346239] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:10:40.758 [2024-07-12 14:59:06.346501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:41.349 EAL: TSC is not safe to use in SMP mode 00:10:41.349 EAL: TSC is not invariant 00:10:41.349 [2024-07-12 14:59:06.859344] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.349 [2024-07-12 14:59:06.942127] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:41.349 [2024-07-12 14:59:06.944253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.349 [2024-07-12 14:59:06.945032] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.349 [2024-07-12 14:59:06.945049] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.607 14:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.607 14:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:41.607 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:41.917 [2024-07-12 14:59:07.644342] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.917 [2024-07-12 14:59:07.644395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.917 [2024-07-12 14:59:07.644401] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.917 [2024-07-12 14:59:07.644410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.917 [2024-07-12 14:59:07.644414] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.917 [2024-07-12 14:59:07.644421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.917 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.175 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.175 "name": "Existed_Raid", 00:10:42.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.175 "strip_size_kb": 64, 00:10:42.175 "state": "configuring", 00:10:42.175 "raid_level": "concat", 00:10:42.175 "superblock": false, 00:10:42.175 "num_base_bdevs": 3, 00:10:42.175 "num_base_bdevs_discovered": 0, 00:10:42.175 "num_base_bdevs_operational": 3, 00:10:42.175 "base_bdevs_list": [ 00:10:42.175 { 00:10:42.175 "name": "BaseBdev1", 00:10:42.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.175 "is_configured": false, 00:10:42.176 "data_offset": 0, 00:10:42.176 "data_size": 0 00:10:42.176 }, 00:10:42.176 { 00:10:42.176 "name": "BaseBdev2", 00:10:42.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.176 "is_configured": false, 00:10:42.176 "data_offset": 0, 00:10:42.176 "data_size": 0 00:10:42.176 }, 00:10:42.176 { 00:10:42.176 "name": "BaseBdev3", 00:10:42.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.176 "is_configured": false, 00:10:42.176 "data_offset": 0, 00:10:42.176 "data_size": 0 00:10:42.176 } 00:10:42.176 ] 00:10:42.176 }' 00:10:42.176 14:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.176 14:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.435 14:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:42.692 [2024-07-12 14:59:08.392365] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.692 [2024-07-12 14:59:08.392390] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x39ac89634500 name Existed_Raid, state configuring 00:10:42.692 14:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:42.949 [2024-07-12 14:59:08.668381] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.949 [2024-07-12 14:59:08.668432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.949 [2024-07-12 14:59:08.668437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.949 [2024-07-12 14:59:08.668445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.949 [2024-07-12 
14:59:08.668449] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.949 [2024-07-12 14:59:08.668456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.949 14:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.207 [2024-07-12 14:59:08.905394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.207 BaseBdev1 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:43.207 14:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:43.465 14:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.725 [ 00:10:43.725 { 00:10:43.725 "name": "BaseBdev1", 00:10:43.725 "aliases": [ 00:10:43.725 "4a93c659-405f-11ef-b2a4-e9dca065e82e" 00:10:43.725 ], 00:10:43.725 "product_name": "Malloc disk", 00:10:43.725 "block_size": 512, 00:10:43.725 "num_blocks": 65536, 00:10:43.725 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:43.725 "assigned_rate_limits": { 00:10:43.725 "rw_ios_per_sec": 0, 00:10:43.725 "rw_mbytes_per_sec": 0, 00:10:43.725 "r_mbytes_per_sec": 0, 00:10:43.725 "w_mbytes_per_sec": 0 00:10:43.725 }, 00:10:43.725 "claimed": true, 00:10:43.725 "claim_type": "exclusive_write", 00:10:43.725 "zoned": false, 00:10:43.725 "supported_io_types": { 00:10:43.725 "read": true, 00:10:43.725 "write": true, 00:10:43.725 "unmap": true, 00:10:43.725 "flush": true, 00:10:43.725 "reset": true, 00:10:43.725 "nvme_admin": false, 00:10:43.725 "nvme_io": false, 00:10:43.725 "nvme_io_md": false, 00:10:43.725 "write_zeroes": true, 00:10:43.725 "zcopy": true, 00:10:43.725 "get_zone_info": false, 00:10:43.725 "zone_management": false, 00:10:43.725 "zone_append": false, 00:10:43.725 "compare": false, 00:10:43.725 "compare_and_write": false, 00:10:43.725 "abort": true, 00:10:43.725 "seek_hole": false, 00:10:43.725 "seek_data": false, 00:10:43.725 "copy": true, 00:10:43.725 "nvme_iov_md": false 00:10:43.725 }, 00:10:43.725 "memory_domains": [ 00:10:43.725 { 00:10:43.725 "dma_device_id": "system", 00:10:43.725 "dma_device_type": 1 00:10:43.725 }, 00:10:43.725 { 00:10:43.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.725 "dma_device_type": 2 00:10:43.725 } 00:10:43.725 ], 00:10:43.725 "driver_specific": {} 00:10:43.725 } 00:10:43.725 ] 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
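Throughout this test, each verify_raid_bdev_state call reduces to a single RPC plus a jq filter, with the field-by-field comparison done by the shell helper on the captured JSON. A condensed sketch of that read path, reusing the commands that appear verbatim in the xtrace (rpc.py abbreviated as before), is:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'
  # the helper captures this as raid_bdev_info and compares the expected
  # state, raid level, strip size and base-bdev counts against its arguments

Once BaseBdev1 is registered, the dump reports "num_base_bdevs_discovered": 1 while the state stays "configuring", since BaseBdev2 and BaseBdev3 are still missing.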
00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.725 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.984 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.984 "name": "Existed_Raid", 00:10:43.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.984 "strip_size_kb": 64, 00:10:43.984 "state": "configuring", 00:10:43.984 "raid_level": "concat", 00:10:43.984 "superblock": false, 00:10:43.984 "num_base_bdevs": 3, 00:10:43.984 "num_base_bdevs_discovered": 1, 00:10:43.984 "num_base_bdevs_operational": 3, 00:10:43.984 "base_bdevs_list": [ 00:10:43.984 { 00:10:43.984 "name": "BaseBdev1", 00:10:43.984 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:43.984 "is_configured": true, 00:10:43.984 "data_offset": 0, 00:10:43.984 "data_size": 65536 00:10:43.984 }, 00:10:43.984 { 00:10:43.984 "name": "BaseBdev2", 00:10:43.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.984 "is_configured": false, 00:10:43.984 "data_offset": 0, 00:10:43.984 "data_size": 0 00:10:43.984 }, 00:10:43.984 { 00:10:43.984 "name": "BaseBdev3", 00:10:43.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.984 "is_configured": false, 00:10:43.984 "data_offset": 0, 00:10:43.984 "data_size": 0 00:10:43.984 } 00:10:43.984 ] 00:10:43.984 }' 00:10:43.984 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.984 14:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.242 14:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:44.501 [2024-07-12 14:59:10.228414] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.501 [2024-07-12 14:59:10.228444] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x39ac89634500 name Existed_Raid, state configuring 00:10:44.501 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:44.759 [2024-07-12 14:59:10.468465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:10:44.759 [2024-07-12 14:59:10.469292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.759 [2024-07-12 14:59:10.469330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.760 [2024-07-12 14:59:10.469336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.760 [2024-07-12 14:59:10.469345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.760 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.019 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.019 "name": "Existed_Raid", 00:10:45.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.019 "strip_size_kb": 64, 00:10:45.019 "state": "configuring", 00:10:45.019 "raid_level": "concat", 00:10:45.019 "superblock": false, 00:10:45.019 "num_base_bdevs": 3, 00:10:45.019 "num_base_bdevs_discovered": 1, 00:10:45.019 "num_base_bdevs_operational": 3, 00:10:45.019 "base_bdevs_list": [ 00:10:45.019 { 00:10:45.019 "name": "BaseBdev1", 00:10:45.019 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:45.019 "is_configured": true, 00:10:45.019 "data_offset": 0, 00:10:45.019 "data_size": 65536 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "name": "BaseBdev2", 00:10:45.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.019 "is_configured": false, 00:10:45.019 "data_offset": 0, 00:10:45.019 "data_size": 0 00:10:45.019 }, 00:10:45.019 { 00:10:45.019 "name": "BaseBdev3", 00:10:45.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.019 "is_configured": false, 00:10:45.019 "data_offset": 0, 00:10:45.019 "data_size": 0 00:10:45.019 } 00:10:45.019 ] 00:10:45.019 }' 00:10:45.019 14:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.019 14:59:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.344 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.618 [2024-07-12 14:59:11.364569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.618 BaseBdev2 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:45.618 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:45.877 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.136 [ 00:10:46.136 { 00:10:46.136 "name": "BaseBdev2", 00:10:46.136 "aliases": [ 00:10:46.136 "4c0b25ca-405f-11ef-b2a4-e9dca065e82e" 00:10:46.136 ], 00:10:46.136 "product_name": "Malloc disk", 00:10:46.136 "block_size": 512, 00:10:46.136 "num_blocks": 65536, 00:10:46.136 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:46.136 "assigned_rate_limits": { 00:10:46.136 "rw_ios_per_sec": 0, 00:10:46.136 "rw_mbytes_per_sec": 0, 00:10:46.136 "r_mbytes_per_sec": 0, 00:10:46.136 "w_mbytes_per_sec": 0 00:10:46.136 }, 00:10:46.136 "claimed": true, 00:10:46.136 "claim_type": "exclusive_write", 00:10:46.136 "zoned": false, 00:10:46.136 "supported_io_types": { 00:10:46.136 "read": true, 00:10:46.136 "write": true, 00:10:46.136 "unmap": true, 00:10:46.136 "flush": true, 00:10:46.136 "reset": true, 00:10:46.136 "nvme_admin": false, 00:10:46.136 "nvme_io": false, 00:10:46.136 "nvme_io_md": false, 00:10:46.136 "write_zeroes": true, 00:10:46.136 "zcopy": true, 00:10:46.136 "get_zone_info": false, 00:10:46.136 "zone_management": false, 00:10:46.136 "zone_append": false, 00:10:46.136 "compare": false, 00:10:46.136 "compare_and_write": false, 00:10:46.136 "abort": true, 00:10:46.136 "seek_hole": false, 00:10:46.136 "seek_data": false, 00:10:46.136 "copy": true, 00:10:46.136 "nvme_iov_md": false 00:10:46.136 }, 00:10:46.136 "memory_domains": [ 00:10:46.136 { 00:10:46.136 "dma_device_id": "system", 00:10:46.136 "dma_device_type": 1 00:10:46.136 }, 00:10:46.136 { 00:10:46.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.136 "dma_device_type": 2 00:10:46.136 } 00:10:46.136 ], 00:10:46.136 "driver_specific": {} 00:10:46.136 } 00:10:46.136 ] 00:10:46.136 14:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:46.136 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.137 14:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.396 14:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.396 "name": "Existed_Raid", 00:10:46.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.396 "strip_size_kb": 64, 00:10:46.396 "state": "configuring", 00:10:46.396 "raid_level": "concat", 00:10:46.396 "superblock": false, 00:10:46.396 "num_base_bdevs": 3, 00:10:46.396 "num_base_bdevs_discovered": 2, 00:10:46.396 "num_base_bdevs_operational": 3, 00:10:46.396 "base_bdevs_list": [ 00:10:46.396 { 00:10:46.396 "name": "BaseBdev1", 00:10:46.396 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:46.396 "is_configured": true, 00:10:46.396 "data_offset": 0, 00:10:46.396 "data_size": 65536 00:10:46.396 }, 00:10:46.396 { 00:10:46.396 "name": "BaseBdev2", 00:10:46.396 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:46.396 "is_configured": true, 00:10:46.396 "data_offset": 0, 00:10:46.396 "data_size": 65536 00:10:46.396 }, 00:10:46.396 { 00:10:46.396 "name": "BaseBdev3", 00:10:46.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.396 "is_configured": false, 00:10:46.396 "data_offset": 0, 00:10:46.396 "data_size": 0 00:10:46.396 } 00:10:46.396 ] 00:10:46.396 }' 00:10:46.396 14:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.396 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.654 14:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.913 [2024-07-12 14:59:12.656605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.913 [2024-07-12 14:59:12.656632] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x39ac89634a00 00:10:46.913 [2024-07-12 14:59:12.656636] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:46.913 [2024-07-12 14:59:12.656677] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x39ac89697e20 00:10:46.913 [2024-07-12 14:59:12.656788] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x39ac89634a00 00:10:46.913 [2024-07-12 14:59:12.656793] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x39ac89634a00 00:10:46.913 [2024-07-12 14:59:12.656825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.913 BaseBdev3 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:46.913 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:47.171 14:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.430 [ 00:10:47.430 { 00:10:47.430 "name": "BaseBdev3", 00:10:47.430 "aliases": [ 00:10:47.430 "4cd04c77-405f-11ef-b2a4-e9dca065e82e" 00:10:47.430 ], 00:10:47.430 "product_name": "Malloc disk", 00:10:47.430 "block_size": 512, 00:10:47.430 "num_blocks": 65536, 00:10:47.430 "uuid": "4cd04c77-405f-11ef-b2a4-e9dca065e82e", 00:10:47.430 "assigned_rate_limits": { 00:10:47.430 "rw_ios_per_sec": 0, 00:10:47.430 "rw_mbytes_per_sec": 0, 00:10:47.430 "r_mbytes_per_sec": 0, 00:10:47.430 "w_mbytes_per_sec": 0 00:10:47.430 }, 00:10:47.430 "claimed": true, 00:10:47.430 "claim_type": "exclusive_write", 00:10:47.430 "zoned": false, 00:10:47.430 "supported_io_types": { 00:10:47.430 "read": true, 00:10:47.430 "write": true, 00:10:47.430 "unmap": true, 00:10:47.430 "flush": true, 00:10:47.430 "reset": true, 00:10:47.430 "nvme_admin": false, 00:10:47.430 "nvme_io": false, 00:10:47.430 "nvme_io_md": false, 00:10:47.430 "write_zeroes": true, 00:10:47.430 "zcopy": true, 00:10:47.430 "get_zone_info": false, 00:10:47.430 "zone_management": false, 00:10:47.430 "zone_append": false, 00:10:47.430 "compare": false, 00:10:47.430 "compare_and_write": false, 00:10:47.430 "abort": true, 00:10:47.430 "seek_hole": false, 00:10:47.430 "seek_data": false, 00:10:47.430 "copy": true, 00:10:47.430 "nvme_iov_md": false 00:10:47.430 }, 00:10:47.430 "memory_domains": [ 00:10:47.430 { 00:10:47.430 "dma_device_id": "system", 00:10:47.430 "dma_device_type": 1 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.430 "dma_device_type": 2 00:10:47.430 } 00:10:47.430 ], 00:10:47.430 "driver_specific": {} 00:10:47.430 } 00:10:47.430 ] 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.430 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.688 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.689 "name": "Existed_Raid", 00:10:47.689 "uuid": "4cd052af-405f-11ef-b2a4-e9dca065e82e", 00:10:47.689 "strip_size_kb": 64, 00:10:47.689 "state": "online", 00:10:47.689 "raid_level": "concat", 00:10:47.689 "superblock": false, 00:10:47.689 "num_base_bdevs": 3, 00:10:47.689 "num_base_bdevs_discovered": 3, 00:10:47.689 "num_base_bdevs_operational": 3, 00:10:47.689 "base_bdevs_list": [ 00:10:47.689 { 00:10:47.689 "name": "BaseBdev1", 00:10:47.689 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:47.689 "is_configured": true, 00:10:47.689 "data_offset": 0, 00:10:47.689 "data_size": 65536 00:10:47.689 }, 00:10:47.689 { 00:10:47.689 "name": "BaseBdev2", 00:10:47.689 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:47.689 "is_configured": true, 00:10:47.689 "data_offset": 0, 00:10:47.689 "data_size": 65536 00:10:47.689 }, 00:10:47.689 { 00:10:47.689 "name": "BaseBdev3", 00:10:47.689 "uuid": "4cd04c77-405f-11ef-b2a4-e9dca065e82e", 00:10:47.689 "is_configured": true, 00:10:47.689 "data_offset": 0, 00:10:47.689 "data_size": 65536 00:10:47.689 } 00:10:47.689 ] 00:10:47.689 }' 00:10:47.689 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.689 14:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:47.947 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:48.206 [2024-07-12 14:59:13.968575] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.206 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:48.206 "name": "Existed_Raid", 00:10:48.206 "aliases": [ 00:10:48.206 "4cd052af-405f-11ef-b2a4-e9dca065e82e" 00:10:48.206 ], 00:10:48.206 "product_name": "Raid Volume", 00:10:48.206 "block_size": 512, 00:10:48.206 "num_blocks": 196608, 00:10:48.206 "uuid": "4cd052af-405f-11ef-b2a4-e9dca065e82e", 00:10:48.206 "assigned_rate_limits": { 00:10:48.206 "rw_ios_per_sec": 0, 00:10:48.206 "rw_mbytes_per_sec": 0, 00:10:48.206 "r_mbytes_per_sec": 0, 00:10:48.206 "w_mbytes_per_sec": 0 00:10:48.206 }, 00:10:48.206 "claimed": false, 00:10:48.206 "zoned": false, 00:10:48.206 "supported_io_types": { 00:10:48.206 "read": true, 00:10:48.206 "write": true, 00:10:48.206 "unmap": true, 00:10:48.206 "flush": true, 00:10:48.206 "reset": true, 00:10:48.206 "nvme_admin": false, 00:10:48.206 "nvme_io": false, 00:10:48.206 "nvme_io_md": false, 00:10:48.206 "write_zeroes": true, 00:10:48.206 "zcopy": false, 00:10:48.206 "get_zone_info": false, 00:10:48.206 "zone_management": false, 00:10:48.206 "zone_append": false, 00:10:48.206 "compare": false, 00:10:48.206 "compare_and_write": false, 00:10:48.206 "abort": false, 00:10:48.206 "seek_hole": false, 00:10:48.206 "seek_data": false, 00:10:48.206 "copy": false, 00:10:48.206 "nvme_iov_md": false 00:10:48.206 }, 00:10:48.206 "memory_domains": [ 00:10:48.206 { 00:10:48.206 "dma_device_id": "system", 00:10:48.206 "dma_device_type": 1 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.206 "dma_device_type": 2 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "dma_device_id": "system", 00:10:48.206 "dma_device_type": 1 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.206 "dma_device_type": 2 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "dma_device_id": "system", 00:10:48.206 "dma_device_type": 1 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.206 "dma_device_type": 2 00:10:48.206 } 00:10:48.206 ], 00:10:48.206 "driver_specific": { 00:10:48.206 "raid": { 00:10:48.206 "uuid": "4cd052af-405f-11ef-b2a4-e9dca065e82e", 00:10:48.206 "strip_size_kb": 64, 00:10:48.206 "state": "online", 00:10:48.206 "raid_level": "concat", 00:10:48.206 "superblock": false, 00:10:48.206 "num_base_bdevs": 3, 00:10:48.206 "num_base_bdevs_discovered": 3, 00:10:48.206 "num_base_bdevs_operational": 3, 00:10:48.206 "base_bdevs_list": [ 00:10:48.206 { 00:10:48.206 "name": "BaseBdev1", 00:10:48.206 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:48.206 "is_configured": true, 00:10:48.206 "data_offset": 0, 00:10:48.206 "data_size": 65536 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "name": "BaseBdev2", 00:10:48.206 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:48.206 "is_configured": true, 00:10:48.206 "data_offset": 0, 00:10:48.206 "data_size": 65536 00:10:48.206 }, 00:10:48.206 { 00:10:48.206 "name": "BaseBdev3", 00:10:48.206 "uuid": "4cd04c77-405f-11ef-b2a4-e9dca065e82e", 00:10:48.206 "is_configured": true, 00:10:48.206 "data_offset": 0, 00:10:48.206 "data_size": 65536 00:10:48.206 } 00:10:48.206 ] 00:10:48.206 } 00:10:48.206 } 00:10:48.206 }' 00:10:48.206 14:59:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.206 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:48.206 BaseBdev2 00:10:48.206 BaseBdev3' 00:10:48.206 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:48.206 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:48.206 14:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.465 "name": "BaseBdev1", 00:10:48.465 "aliases": [ 00:10:48.465 "4a93c659-405f-11ef-b2a4-e9dca065e82e" 00:10:48.465 ], 00:10:48.465 "product_name": "Malloc disk", 00:10:48.465 "block_size": 512, 00:10:48.465 "num_blocks": 65536, 00:10:48.465 "uuid": "4a93c659-405f-11ef-b2a4-e9dca065e82e", 00:10:48.465 "assigned_rate_limits": { 00:10:48.465 "rw_ios_per_sec": 0, 00:10:48.465 "rw_mbytes_per_sec": 0, 00:10:48.465 "r_mbytes_per_sec": 0, 00:10:48.465 "w_mbytes_per_sec": 0 00:10:48.465 }, 00:10:48.465 "claimed": true, 00:10:48.465 "claim_type": "exclusive_write", 00:10:48.465 "zoned": false, 00:10:48.465 "supported_io_types": { 00:10:48.465 "read": true, 00:10:48.465 "write": true, 00:10:48.465 "unmap": true, 00:10:48.465 "flush": true, 00:10:48.465 "reset": true, 00:10:48.465 "nvme_admin": false, 00:10:48.465 "nvme_io": false, 00:10:48.465 "nvme_io_md": false, 00:10:48.465 "write_zeroes": true, 00:10:48.465 "zcopy": true, 00:10:48.465 "get_zone_info": false, 00:10:48.465 "zone_management": false, 00:10:48.465 "zone_append": false, 00:10:48.465 "compare": false, 00:10:48.465 "compare_and_write": false, 00:10:48.465 "abort": true, 00:10:48.465 "seek_hole": false, 00:10:48.465 "seek_data": false, 00:10:48.465 "copy": true, 00:10:48.465 "nvme_iov_md": false 00:10:48.465 }, 00:10:48.465 "memory_domains": [ 00:10:48.465 { 00:10:48.465 "dma_device_id": "system", 00:10:48.465 "dma_device_type": 1 00:10:48.465 }, 00:10:48.465 { 00:10:48.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.465 "dma_device_type": 2 00:10:48.465 } 00:10:48.465 ], 00:10:48.465 "driver_specific": {} 00:10:48.465 }' 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:48.465 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.723 "name": "BaseBdev2", 00:10:48.723 "aliases": [ 00:10:48.723 "4c0b25ca-405f-11ef-b2a4-e9dca065e82e" 00:10:48.723 ], 00:10:48.723 "product_name": "Malloc disk", 00:10:48.723 "block_size": 512, 00:10:48.723 "num_blocks": 65536, 00:10:48.723 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:48.723 "assigned_rate_limits": { 00:10:48.723 "rw_ios_per_sec": 0, 00:10:48.723 "rw_mbytes_per_sec": 0, 00:10:48.723 "r_mbytes_per_sec": 0, 00:10:48.723 "w_mbytes_per_sec": 0 00:10:48.723 }, 00:10:48.723 "claimed": true, 00:10:48.723 "claim_type": "exclusive_write", 00:10:48.723 "zoned": false, 00:10:48.723 "supported_io_types": { 00:10:48.723 "read": true, 00:10:48.723 "write": true, 00:10:48.723 "unmap": true, 00:10:48.723 "flush": true, 00:10:48.723 "reset": true, 00:10:48.723 "nvme_admin": false, 00:10:48.723 "nvme_io": false, 00:10:48.723 "nvme_io_md": false, 00:10:48.723 "write_zeroes": true, 00:10:48.723 "zcopy": true, 00:10:48.723 "get_zone_info": false, 00:10:48.723 "zone_management": false, 00:10:48.723 "zone_append": false, 00:10:48.723 "compare": false, 00:10:48.723 "compare_and_write": false, 00:10:48.723 "abort": true, 00:10:48.723 "seek_hole": false, 00:10:48.723 "seek_data": false, 00:10:48.723 "copy": true, 00:10:48.723 "nvme_iov_md": false 00:10:48.723 }, 00:10:48.723 "memory_domains": [ 00:10:48.723 { 00:10:48.723 "dma_device_id": "system", 00:10:48.723 "dma_device_type": 1 00:10:48.723 }, 00:10:48.723 { 00:10:48.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.723 "dma_device_type": 2 00:10:48.723 } 00:10:48.723 ], 00:10:48.723 "driver_specific": {} 00:10:48.723 }' 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.723 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:49.079 "name": "BaseBdev3", 00:10:49.079 "aliases": [ 00:10:49.079 "4cd04c77-405f-11ef-b2a4-e9dca065e82e" 00:10:49.079 ], 00:10:49.079 "product_name": "Malloc disk", 00:10:49.079 "block_size": 512, 00:10:49.079 "num_blocks": 65536, 00:10:49.079 "uuid": "4cd04c77-405f-11ef-b2a4-e9dca065e82e", 00:10:49.079 "assigned_rate_limits": { 00:10:49.079 "rw_ios_per_sec": 0, 00:10:49.079 "rw_mbytes_per_sec": 0, 00:10:49.079 "r_mbytes_per_sec": 0, 00:10:49.079 "w_mbytes_per_sec": 0 00:10:49.079 }, 00:10:49.079 "claimed": true, 00:10:49.079 "claim_type": "exclusive_write", 00:10:49.079 "zoned": false, 00:10:49.079 "supported_io_types": { 00:10:49.079 "read": true, 00:10:49.079 "write": true, 00:10:49.079 "unmap": true, 00:10:49.079 "flush": true, 00:10:49.079 "reset": true, 00:10:49.079 "nvme_admin": false, 00:10:49.079 "nvme_io": false, 00:10:49.079 "nvme_io_md": false, 00:10:49.079 "write_zeroes": true, 00:10:49.079 "zcopy": true, 00:10:49.079 "get_zone_info": false, 00:10:49.079 "zone_management": false, 00:10:49.079 "zone_append": false, 00:10:49.079 "compare": false, 00:10:49.079 "compare_and_write": false, 00:10:49.079 "abort": true, 00:10:49.079 "seek_hole": false, 00:10:49.079 "seek_data": false, 00:10:49.079 "copy": true, 00:10:49.079 "nvme_iov_md": false 00:10:49.079 }, 00:10:49.079 "memory_domains": [ 00:10:49.079 { 00:10:49.079 "dma_device_id": "system", 00:10:49.079 "dma_device_type": 1 00:10:49.079 }, 00:10:49.079 { 00:10:49.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.079 "dma_device_type": 2 00:10:49.079 } 00:10:49.079 ], 00:10:49.079 "driver_specific": {} 00:10:49.079 }' 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.079 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:49.336 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:49.336 14:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:49.336 [2024-07-12 14:59:15.096611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:49.336 [2024-07-12 14:59:15.096636] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.336 [2024-07-12 14:59:15.096651] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.336 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:49.336 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:49.336 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:49.336 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:49.336 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.337 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.594 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.594 "name": "Existed_Raid", 00:10:49.594 "uuid": "4cd052af-405f-11ef-b2a4-e9dca065e82e", 00:10:49.594 "strip_size_kb": 64, 00:10:49.594 "state": "offline", 00:10:49.594 "raid_level": "concat", 00:10:49.594 "superblock": false, 00:10:49.594 "num_base_bdevs": 3, 00:10:49.594 "num_base_bdevs_discovered": 2, 00:10:49.594 "num_base_bdevs_operational": 2, 00:10:49.594 "base_bdevs_list": [ 00:10:49.594 { 00:10:49.594 "name": null, 00:10:49.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.594 "is_configured": false, 00:10:49.594 "data_offset": 0, 00:10:49.594 "data_size": 65536 00:10:49.594 }, 00:10:49.594 { 00:10:49.594 "name": "BaseBdev2", 00:10:49.594 "uuid": "4c0b25ca-405f-11ef-b2a4-e9dca065e82e", 00:10:49.594 "is_configured": true, 00:10:49.594 "data_offset": 0, 00:10:49.594 "data_size": 65536 00:10:49.594 }, 00:10:49.594 { 00:10:49.594 "name": "BaseBdev3", 00:10:49.594 "uuid": "4cd04c77-405f-11ef-b2a4-e9dca065e82e", 00:10:49.594 "is_configured": true, 00:10:49.594 "data_offset": 0, 00:10:49.594 "data_size": 65536 00:10:49.594 } 00:10:49.594 ] 00:10:49.594 }' 00:10:49.594 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:10:49.594 14:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.852 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:49.852 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:49.852 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.852 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:50.111 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:50.111 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.111 14:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:50.370 [2024-07-12 14:59:16.194251] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.628 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:50.628 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:50.628 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:50.628 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.886 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:50.886 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.886 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:51.144 [2024-07-12 14:59:16.727859] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.144 [2024-07-12 14:59:16.727894] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x39ac89634a00 name Existed_Raid, state offline 00:10:51.144 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:51.144 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:51.144 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.144 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:51.403 14:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.403 
BaseBdev2 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:51.403 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:51.969 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.969 [ 00:10:51.969 { 00:10:51.969 "name": "BaseBdev2", 00:10:51.969 "aliases": [ 00:10:51.969 "4f8708d9-405f-11ef-b2a4-e9dca065e82e" 00:10:51.969 ], 00:10:51.969 "product_name": "Malloc disk", 00:10:51.969 "block_size": 512, 00:10:51.969 "num_blocks": 65536, 00:10:51.969 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:51.969 "assigned_rate_limits": { 00:10:51.969 "rw_ios_per_sec": 0, 00:10:51.969 "rw_mbytes_per_sec": 0, 00:10:51.969 "r_mbytes_per_sec": 0, 00:10:51.969 "w_mbytes_per_sec": 0 00:10:51.969 }, 00:10:51.969 "claimed": false, 00:10:51.969 "zoned": false, 00:10:51.969 "supported_io_types": { 00:10:51.969 "read": true, 00:10:51.969 "write": true, 00:10:51.969 "unmap": true, 00:10:51.969 "flush": true, 00:10:51.969 "reset": true, 00:10:51.969 "nvme_admin": false, 00:10:51.969 "nvme_io": false, 00:10:51.969 "nvme_io_md": false, 00:10:51.969 "write_zeroes": true, 00:10:51.969 "zcopy": true, 00:10:51.969 "get_zone_info": false, 00:10:51.969 "zone_management": false, 00:10:51.969 "zone_append": false, 00:10:51.969 "compare": false, 00:10:51.969 "compare_and_write": false, 00:10:51.969 "abort": true, 00:10:51.969 "seek_hole": false, 00:10:51.969 "seek_data": false, 00:10:51.969 "copy": true, 00:10:51.969 "nvme_iov_md": false 00:10:51.969 }, 00:10:51.969 "memory_domains": [ 00:10:51.969 { 00:10:51.969 "dma_device_id": "system", 00:10:51.969 "dma_device_type": 1 00:10:51.969 }, 00:10:51.969 { 00:10:51.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.969 "dma_device_type": 2 00:10:51.969 } 00:10:51.969 ], 00:10:51.969 "driver_specific": {} 00:10:51.969 } 00:10:51.969 ] 00:10:51.969 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:51.970 14:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:51.970 14:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:51.970 14:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.228 BaseBdev3 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:52.228 14:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:52.486 14:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.744 [ 00:10:52.744 { 00:10:52.744 "name": "BaseBdev3", 00:10:52.744 "aliases": [ 00:10:52.744 "4ffc39ec-405f-11ef-b2a4-e9dca065e82e" 00:10:52.744 ], 00:10:52.744 "product_name": "Malloc disk", 00:10:52.744 "block_size": 512, 00:10:52.744 "num_blocks": 65536, 00:10:52.744 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:52.744 "assigned_rate_limits": { 00:10:52.744 "rw_ios_per_sec": 0, 00:10:52.744 "rw_mbytes_per_sec": 0, 00:10:52.744 "r_mbytes_per_sec": 0, 00:10:52.744 "w_mbytes_per_sec": 0 00:10:52.744 }, 00:10:52.744 "claimed": false, 00:10:52.744 "zoned": false, 00:10:52.744 "supported_io_types": { 00:10:52.744 "read": true, 00:10:52.744 "write": true, 00:10:52.744 "unmap": true, 00:10:52.744 "flush": true, 00:10:52.744 "reset": true, 00:10:52.744 "nvme_admin": false, 00:10:52.744 "nvme_io": false, 00:10:52.744 "nvme_io_md": false, 00:10:52.744 "write_zeroes": true, 00:10:52.744 "zcopy": true, 00:10:52.744 "get_zone_info": false, 00:10:52.744 "zone_management": false, 00:10:52.744 "zone_append": false, 00:10:52.744 "compare": false, 00:10:52.744 "compare_and_write": false, 00:10:52.744 "abort": true, 00:10:52.744 "seek_hole": false, 00:10:52.744 "seek_data": false, 00:10:52.744 "copy": true, 00:10:52.744 "nvme_iov_md": false 00:10:52.744 }, 00:10:52.744 "memory_domains": [ 00:10:52.744 { 00:10:52.744 "dma_device_id": "system", 00:10:52.744 "dma_device_type": 1 00:10:52.744 }, 00:10:52.744 { 00:10:52.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.744 "dma_device_type": 2 00:10:52.744 } 00:10:52.744 ], 00:10:52.744 "driver_specific": {} 00:10:52.744 } 00:10:52.744 ] 00:10:52.744 14:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:52.744 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:52.744 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:52.744 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:53.002 [2024-07-12 14:59:18.753555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.002 [2024-07-12 14:59:18.753606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.002 [2024-07-12 14:59:18.753615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.002 [2024-07-12 14:59:18.754155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.002 14:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.260 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.260 "name": "Existed_Raid", 00:10:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.260 "strip_size_kb": 64, 00:10:53.260 "state": "configuring", 00:10:53.260 "raid_level": "concat", 00:10:53.260 "superblock": false, 00:10:53.260 "num_base_bdevs": 3, 00:10:53.260 "num_base_bdevs_discovered": 2, 00:10:53.260 "num_base_bdevs_operational": 3, 00:10:53.260 "base_bdevs_list": [ 00:10:53.260 { 00:10:53.260 "name": "BaseBdev1", 00:10:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.260 "is_configured": false, 00:10:53.260 "data_offset": 0, 00:10:53.260 "data_size": 0 00:10:53.260 }, 00:10:53.260 { 00:10:53.260 "name": "BaseBdev2", 00:10:53.260 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:53.260 "is_configured": true, 00:10:53.260 "data_offset": 0, 00:10:53.260 "data_size": 65536 00:10:53.260 }, 00:10:53.260 { 00:10:53.260 "name": "BaseBdev3", 00:10:53.260 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:53.260 "is_configured": true, 00:10:53.260 "data_offset": 0, 00:10:53.260 "data_size": 65536 00:10:53.260 } 00:10:53.260 ] 00:10:53.260 }' 00:10:53.260 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.260 14:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.518 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:53.776 [2024-07-12 14:59:19.533589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:53.776 
14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.776 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.034 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.034 "name": "Existed_Raid", 00:10:54.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.034 "strip_size_kb": 64, 00:10:54.034 "state": "configuring", 00:10:54.034 "raid_level": "concat", 00:10:54.034 "superblock": false, 00:10:54.034 "num_base_bdevs": 3, 00:10:54.034 "num_base_bdevs_discovered": 1, 00:10:54.034 "num_base_bdevs_operational": 3, 00:10:54.034 "base_bdevs_list": [ 00:10:54.034 { 00:10:54.034 "name": "BaseBdev1", 00:10:54.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.034 "is_configured": false, 00:10:54.034 "data_offset": 0, 00:10:54.034 "data_size": 0 00:10:54.034 }, 00:10:54.034 { 00:10:54.034 "name": null, 00:10:54.034 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:54.034 "is_configured": false, 00:10:54.034 "data_offset": 0, 00:10:54.034 "data_size": 65536 00:10:54.034 }, 00:10:54.034 { 00:10:54.034 "name": "BaseBdev3", 00:10:54.034 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:54.034 "is_configured": true, 00:10:54.034 "data_offset": 0, 00:10:54.034 "data_size": 65536 00:10:54.034 } 00:10:54.034 ] 00:10:54.034 }' 00:10:54.034 14:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.034 14:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.293 14:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.293 14:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.859 [2024-07-12 14:59:20.625766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.859 BaseBdev1 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:54.859 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:55.117 14:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.375 [ 00:10:55.375 { 00:10:55.375 "name": "BaseBdev1", 00:10:55.375 "aliases": [ 00:10:55.375 "51904ba1-405f-11ef-b2a4-e9dca065e82e" 00:10:55.375 ], 00:10:55.375 "product_name": "Malloc disk", 00:10:55.375 "block_size": 512, 00:10:55.375 "num_blocks": 65536, 00:10:55.375 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:55.375 "assigned_rate_limits": { 00:10:55.375 "rw_ios_per_sec": 0, 00:10:55.375 "rw_mbytes_per_sec": 0, 00:10:55.375 "r_mbytes_per_sec": 0, 00:10:55.375 "w_mbytes_per_sec": 0 00:10:55.375 }, 00:10:55.375 "claimed": true, 00:10:55.375 "claim_type": "exclusive_write", 00:10:55.375 "zoned": false, 00:10:55.375 "supported_io_types": { 00:10:55.375 "read": true, 00:10:55.375 "write": true, 00:10:55.375 "unmap": true, 00:10:55.375 "flush": true, 00:10:55.375 "reset": true, 00:10:55.375 "nvme_admin": false, 00:10:55.375 "nvme_io": false, 00:10:55.375 "nvme_io_md": false, 00:10:55.375 "write_zeroes": true, 00:10:55.375 "zcopy": true, 00:10:55.375 "get_zone_info": false, 00:10:55.375 "zone_management": false, 00:10:55.375 "zone_append": false, 00:10:55.375 "compare": false, 00:10:55.375 "compare_and_write": false, 00:10:55.375 "abort": true, 00:10:55.375 "seek_hole": false, 00:10:55.375 "seek_data": false, 00:10:55.375 "copy": true, 00:10:55.375 "nvme_iov_md": false 00:10:55.375 }, 00:10:55.375 "memory_domains": [ 00:10:55.375 { 00:10:55.375 "dma_device_id": "system", 00:10:55.375 "dma_device_type": 1 00:10:55.375 }, 00:10:55.375 { 00:10:55.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.375 "dma_device_type": 2 00:10:55.375 } 00:10:55.375 ], 00:10:55.375 "driver_specific": {} 00:10:55.375 } 00:10:55.375 ] 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.375 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.634 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.634 "name": "Existed_Raid", 00:10:55.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.634 "strip_size_kb": 64, 00:10:55.634 "state": "configuring", 00:10:55.634 "raid_level": "concat", 00:10:55.634 "superblock": false, 00:10:55.634 "num_base_bdevs": 3, 00:10:55.634 "num_base_bdevs_discovered": 2, 00:10:55.634 "num_base_bdevs_operational": 3, 00:10:55.634 "base_bdevs_list": [ 00:10:55.634 { 00:10:55.634 "name": "BaseBdev1", 00:10:55.634 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:55.634 "is_configured": true, 00:10:55.634 "data_offset": 0, 00:10:55.634 "data_size": 65536 00:10:55.634 }, 00:10:55.634 { 00:10:55.634 "name": null, 00:10:55.634 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:55.634 "is_configured": false, 00:10:55.634 "data_offset": 0, 00:10:55.634 "data_size": 65536 00:10:55.634 }, 00:10:55.634 { 00:10:55.634 "name": "BaseBdev3", 00:10:55.634 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:55.634 "is_configured": true, 00:10:55.634 "data_offset": 0, 00:10:55.634 "data_size": 65536 00:10:55.634 } 00:10:55.634 ] 00:10:55.634 }' 00:10:55.634 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.634 14:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.892 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.892 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.177 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:56.177 14:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:56.445 [2024-07-12 14:59:22.145687] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.445 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.704 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.704 "name": "Existed_Raid", 00:10:56.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.704 "strip_size_kb": 64, 00:10:56.704 "state": "configuring", 00:10:56.704 "raid_level": "concat", 00:10:56.704 "superblock": false, 00:10:56.704 "num_base_bdevs": 3, 00:10:56.704 "num_base_bdevs_discovered": 1, 00:10:56.704 "num_base_bdevs_operational": 3, 00:10:56.704 "base_bdevs_list": [ 00:10:56.704 { 00:10:56.704 "name": "BaseBdev1", 00:10:56.704 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:56.704 "is_configured": true, 00:10:56.704 "data_offset": 0, 00:10:56.704 "data_size": 65536 00:10:56.704 }, 00:10:56.704 { 00:10:56.704 "name": null, 00:10:56.704 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:56.704 "is_configured": false, 00:10:56.704 "data_offset": 0, 00:10:56.704 "data_size": 65536 00:10:56.704 }, 00:10:56.704 { 00:10:56.704 "name": null, 00:10:56.704 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:56.704 "is_configured": false, 00:10:56.704 "data_offset": 0, 00:10:56.704 "data_size": 65536 00:10:56.704 } 00:10:56.704 ] 00:10:56.704 }' 00:10:56.704 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.704 14:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.270 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.270 14:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.270 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:57.270 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:57.528 [2024-07-12 14:59:23.289733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.528 14:59:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.528 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.785 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.785 "name": "Existed_Raid", 00:10:57.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.785 "strip_size_kb": 64, 00:10:57.785 "state": "configuring", 00:10:57.785 "raid_level": "concat", 00:10:57.785 "superblock": false, 00:10:57.785 "num_base_bdevs": 3, 00:10:57.785 "num_base_bdevs_discovered": 2, 00:10:57.785 "num_base_bdevs_operational": 3, 00:10:57.785 "base_bdevs_list": [ 00:10:57.785 { 00:10:57.785 "name": "BaseBdev1", 00:10:57.785 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:57.785 "is_configured": true, 00:10:57.785 "data_offset": 0, 00:10:57.785 "data_size": 65536 00:10:57.785 }, 00:10:57.785 { 00:10:57.785 "name": null, 00:10:57.785 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:57.785 "is_configured": false, 00:10:57.785 "data_offset": 0, 00:10:57.785 "data_size": 65536 00:10:57.785 }, 00:10:57.785 { 00:10:57.785 "name": "BaseBdev3", 00:10:57.785 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:57.785 "is_configured": true, 00:10:57.785 "data_offset": 0, 00:10:57.785 "data_size": 65536 00:10:57.785 } 00:10:57.785 ] 00:10:57.785 }' 00:10:57.785 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.785 14:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.044 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.044 14:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:58.611 [2024-07-12 14:59:24.389766] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.611 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.870 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.870 "name": "Existed_Raid", 00:10:58.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.870 "strip_size_kb": 64, 00:10:58.870 "state": "configuring", 00:10:58.870 "raid_level": "concat", 00:10:58.870 "superblock": false, 00:10:58.871 "num_base_bdevs": 3, 00:10:58.871 "num_base_bdevs_discovered": 1, 00:10:58.871 "num_base_bdevs_operational": 3, 00:10:58.871 "base_bdevs_list": [ 00:10:58.871 { 00:10:58.871 "name": null, 00:10:58.871 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:58.871 "is_configured": false, 00:10:58.871 "data_offset": 0, 00:10:58.871 "data_size": 65536 00:10:58.871 }, 00:10:58.871 { 00:10:58.871 "name": null, 00:10:58.871 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:58.871 "is_configured": false, 00:10:58.871 "data_offset": 0, 00:10:58.871 "data_size": 65536 00:10:58.871 }, 00:10:58.871 { 00:10:58.871 "name": "BaseBdev3", 00:10:58.871 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:58.871 "is_configured": true, 00:10:58.871 "data_offset": 0, 00:10:58.871 "data_size": 65536 00:10:58.871 } 00:10:58.871 ] 00:10:58.871 }' 00:10:58.871 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.871 14:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.129 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.129 14:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.388 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:59.389 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:59.647 [2024-07-12 14:59:25.427666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.647 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.906 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.906 "name": "Existed_Raid", 00:10:59.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.906 "strip_size_kb": 64, 00:10:59.906 "state": "configuring", 00:10:59.906 "raid_level": "concat", 00:10:59.906 "superblock": false, 00:10:59.906 "num_base_bdevs": 3, 00:10:59.906 "num_base_bdevs_discovered": 2, 00:10:59.906 "num_base_bdevs_operational": 3, 00:10:59.906 "base_bdevs_list": [ 00:10:59.906 { 00:10:59.906 "name": null, 00:10:59.906 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:10:59.906 "is_configured": false, 00:10:59.906 "data_offset": 0, 00:10:59.906 "data_size": 65536 00:10:59.906 }, 00:10:59.906 { 00:10:59.906 "name": "BaseBdev2", 00:10:59.906 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:10:59.906 "is_configured": true, 00:10:59.906 "data_offset": 0, 00:10:59.906 "data_size": 65536 00:10:59.906 }, 00:10:59.906 { 00:10:59.906 "name": "BaseBdev3", 00:10:59.906 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:10:59.906 "is_configured": true, 00:10:59.907 "data_offset": 0, 00:10:59.907 "data_size": 65536 00:10:59.907 } 00:10:59.907 ] 00:10:59.907 }' 00:10:59.907 14:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.907 14:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.474 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.474 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.732 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:00.732 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.732 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.000 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 51904ba1-405f-11ef-b2a4-e9dca065e82e 00:11:01.259 [2024-07-12 14:59:26.895853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.259 [2024-07-12 14:59:26.895877] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x39ac89634a00 00:11:01.259 [2024-07-12 14:59:26.895882] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:01.259 [2024-07-12 14:59:26.895904] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x39ac89697e20 00:11:01.259 [2024-07-12 
14:59:26.895971] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x39ac89634a00 00:11:01.259 [2024-07-12 14:59:26.895976] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x39ac89634a00 00:11:01.259 [2024-07-12 14:59:26.896008] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.259 NewBaseBdev 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:01.259 14:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:01.518 14:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.777 [ 00:11:01.777 { 00:11:01.777 "name": "NewBaseBdev", 00:11:01.777 "aliases": [ 00:11:01.777 "51904ba1-405f-11ef-b2a4-e9dca065e82e" 00:11:01.777 ], 00:11:01.777 "product_name": "Malloc disk", 00:11:01.777 "block_size": 512, 00:11:01.777 "num_blocks": 65536, 00:11:01.777 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:11:01.777 "assigned_rate_limits": { 00:11:01.777 "rw_ios_per_sec": 0, 00:11:01.777 "rw_mbytes_per_sec": 0, 00:11:01.777 "r_mbytes_per_sec": 0, 00:11:01.777 "w_mbytes_per_sec": 0 00:11:01.777 }, 00:11:01.777 "claimed": true, 00:11:01.777 "claim_type": "exclusive_write", 00:11:01.777 "zoned": false, 00:11:01.777 "supported_io_types": { 00:11:01.777 "read": true, 00:11:01.777 "write": true, 00:11:01.777 "unmap": true, 00:11:01.777 "flush": true, 00:11:01.777 "reset": true, 00:11:01.777 "nvme_admin": false, 00:11:01.777 "nvme_io": false, 00:11:01.777 "nvme_io_md": false, 00:11:01.777 "write_zeroes": true, 00:11:01.777 "zcopy": true, 00:11:01.777 "get_zone_info": false, 00:11:01.777 "zone_management": false, 00:11:01.777 "zone_append": false, 00:11:01.777 "compare": false, 00:11:01.777 "compare_and_write": false, 00:11:01.777 "abort": true, 00:11:01.777 "seek_hole": false, 00:11:01.777 "seek_data": false, 00:11:01.777 "copy": true, 00:11:01.777 "nvme_iov_md": false 00:11:01.777 }, 00:11:01.777 "memory_domains": [ 00:11:01.777 { 00:11:01.777 "dma_device_id": "system", 00:11:01.777 "dma_device_type": 1 00:11:01.777 }, 00:11:01.777 { 00:11:01.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.777 "dma_device_type": 2 00:11:01.777 } 00:11:01.777 ], 00:11:01.777 "driver_specific": {} 00:11:01.777 } 00:11:01.777 ] 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.777 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.035 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:02.035 "name": "Existed_Raid", 00:11:02.035 "uuid": "554d0ea0-405f-11ef-b2a4-e9dca065e82e", 00:11:02.035 "strip_size_kb": 64, 00:11:02.035 "state": "online", 00:11:02.035 "raid_level": "concat", 00:11:02.035 "superblock": false, 00:11:02.035 "num_base_bdevs": 3, 00:11:02.035 "num_base_bdevs_discovered": 3, 00:11:02.035 "num_base_bdevs_operational": 3, 00:11:02.035 "base_bdevs_list": [ 00:11:02.035 { 00:11:02.035 "name": "NewBaseBdev", 00:11:02.035 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:11:02.035 "is_configured": true, 00:11:02.035 "data_offset": 0, 00:11:02.035 "data_size": 65536 00:11:02.035 }, 00:11:02.035 { 00:11:02.035 "name": "BaseBdev2", 00:11:02.035 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:11:02.035 "is_configured": true, 00:11:02.035 "data_offset": 0, 00:11:02.036 "data_size": 65536 00:11:02.036 }, 00:11:02.036 { 00:11:02.036 "name": "BaseBdev3", 00:11:02.036 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:11:02.036 "is_configured": true, 00:11:02.036 "data_offset": 0, 00:11:02.036 "data_size": 65536 00:11:02.036 } 00:11:02.036 ] 00:11:02.036 }' 00:11:02.036 14:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:02.036 14:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:02.295 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:02.553 [2024-07-12 14:59:28.299799] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.553 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:02.554 "name": "Existed_Raid", 00:11:02.554 "aliases": [ 00:11:02.554 "554d0ea0-405f-11ef-b2a4-e9dca065e82e" 00:11:02.554 ], 00:11:02.554 "product_name": "Raid Volume", 00:11:02.554 "block_size": 512, 00:11:02.554 "num_blocks": 196608, 00:11:02.554 "uuid": "554d0ea0-405f-11ef-b2a4-e9dca065e82e", 00:11:02.554 "assigned_rate_limits": { 00:11:02.554 "rw_ios_per_sec": 0, 00:11:02.554 "rw_mbytes_per_sec": 0, 00:11:02.554 "r_mbytes_per_sec": 0, 00:11:02.554 "w_mbytes_per_sec": 0 00:11:02.554 }, 00:11:02.554 "claimed": false, 00:11:02.554 "zoned": false, 00:11:02.554 "supported_io_types": { 00:11:02.554 "read": true, 00:11:02.554 "write": true, 00:11:02.554 "unmap": true, 00:11:02.554 "flush": true, 00:11:02.554 "reset": true, 00:11:02.554 "nvme_admin": false, 00:11:02.554 "nvme_io": false, 00:11:02.554 "nvme_io_md": false, 00:11:02.554 "write_zeroes": true, 00:11:02.554 "zcopy": false, 00:11:02.554 "get_zone_info": false, 00:11:02.554 "zone_management": false, 00:11:02.554 "zone_append": false, 00:11:02.554 "compare": false, 00:11:02.554 "compare_and_write": false, 00:11:02.554 "abort": false, 00:11:02.554 "seek_hole": false, 00:11:02.554 "seek_data": false, 00:11:02.554 "copy": false, 00:11:02.554 "nvme_iov_md": false 00:11:02.554 }, 00:11:02.554 "memory_domains": [ 00:11:02.554 { 00:11:02.554 "dma_device_id": "system", 00:11:02.554 "dma_device_type": 1 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.554 "dma_device_type": 2 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "dma_device_id": "system", 00:11:02.554 "dma_device_type": 1 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.554 "dma_device_type": 2 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "dma_device_id": "system", 00:11:02.554 "dma_device_type": 1 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.554 "dma_device_type": 2 00:11:02.554 } 00:11:02.554 ], 00:11:02.554 "driver_specific": { 00:11:02.554 "raid": { 00:11:02.554 "uuid": "554d0ea0-405f-11ef-b2a4-e9dca065e82e", 00:11:02.554 "strip_size_kb": 64, 00:11:02.554 "state": "online", 00:11:02.554 "raid_level": "concat", 00:11:02.554 "superblock": false, 00:11:02.554 "num_base_bdevs": 3, 00:11:02.554 "num_base_bdevs_discovered": 3, 00:11:02.554 "num_base_bdevs_operational": 3, 00:11:02.554 "base_bdevs_list": [ 00:11:02.554 { 00:11:02.554 "name": "NewBaseBdev", 00:11:02.554 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:11:02.554 "is_configured": true, 00:11:02.554 "data_offset": 0, 00:11:02.554 "data_size": 65536 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "name": "BaseBdev2", 00:11:02.554 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:11:02.554 "is_configured": true, 00:11:02.554 "data_offset": 0, 00:11:02.554 "data_size": 65536 00:11:02.554 }, 00:11:02.554 { 00:11:02.554 "name": "BaseBdev3", 00:11:02.554 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:11:02.554 "is_configured": true, 00:11:02.554 "data_offset": 0, 00:11:02.554 "data_size": 65536 00:11:02.554 } 00:11:02.554 ] 00:11:02.554 } 00:11:02.554 } 00:11:02.554 }' 00:11:02.554 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.554 14:59:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:02.554 BaseBdev2 00:11:02.554 BaseBdev3' 00:11:02.554 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:02.554 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:02.554 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:02.821 "name": "NewBaseBdev", 00:11:02.821 "aliases": [ 00:11:02.821 "51904ba1-405f-11ef-b2a4-e9dca065e82e" 00:11:02.821 ], 00:11:02.821 "product_name": "Malloc disk", 00:11:02.821 "block_size": 512, 00:11:02.821 "num_blocks": 65536, 00:11:02.821 "uuid": "51904ba1-405f-11ef-b2a4-e9dca065e82e", 00:11:02.821 "assigned_rate_limits": { 00:11:02.821 "rw_ios_per_sec": 0, 00:11:02.821 "rw_mbytes_per_sec": 0, 00:11:02.821 "r_mbytes_per_sec": 0, 00:11:02.821 "w_mbytes_per_sec": 0 00:11:02.821 }, 00:11:02.821 "claimed": true, 00:11:02.821 "claim_type": "exclusive_write", 00:11:02.821 "zoned": false, 00:11:02.821 "supported_io_types": { 00:11:02.821 "read": true, 00:11:02.821 "write": true, 00:11:02.821 "unmap": true, 00:11:02.821 "flush": true, 00:11:02.821 "reset": true, 00:11:02.821 "nvme_admin": false, 00:11:02.821 "nvme_io": false, 00:11:02.821 "nvme_io_md": false, 00:11:02.821 "write_zeroes": true, 00:11:02.821 "zcopy": true, 00:11:02.821 "get_zone_info": false, 00:11:02.821 "zone_management": false, 00:11:02.821 "zone_append": false, 00:11:02.821 "compare": false, 00:11:02.821 "compare_and_write": false, 00:11:02.821 "abort": true, 00:11:02.821 "seek_hole": false, 00:11:02.821 "seek_data": false, 00:11:02.821 "copy": true, 00:11:02.821 "nvme_iov_md": false 00:11:02.821 }, 00:11:02.821 "memory_domains": [ 00:11:02.821 { 00:11:02.821 "dma_device_id": "system", 00:11:02.821 "dma_device_type": 1 00:11:02.821 }, 00:11:02.821 { 00:11:02.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.821 "dma_device_type": 2 00:11:02.821 } 00:11:02.821 ], 00:11:02.821 "driver_specific": {} 00:11:02.821 }' 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.821 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:03.080 "name": "BaseBdev2", 00:11:03.080 "aliases": [ 00:11:03.080 "4f8708d9-405f-11ef-b2a4-e9dca065e82e" 00:11:03.080 ], 00:11:03.080 "product_name": "Malloc disk", 00:11:03.080 "block_size": 512, 00:11:03.080 "num_blocks": 65536, 00:11:03.080 "uuid": "4f8708d9-405f-11ef-b2a4-e9dca065e82e", 00:11:03.080 "assigned_rate_limits": { 00:11:03.080 "rw_ios_per_sec": 0, 00:11:03.080 "rw_mbytes_per_sec": 0, 00:11:03.080 "r_mbytes_per_sec": 0, 00:11:03.080 "w_mbytes_per_sec": 0 00:11:03.080 }, 00:11:03.080 "claimed": true, 00:11:03.080 "claim_type": "exclusive_write", 00:11:03.080 "zoned": false, 00:11:03.080 "supported_io_types": { 00:11:03.080 "read": true, 00:11:03.080 "write": true, 00:11:03.080 "unmap": true, 00:11:03.080 "flush": true, 00:11:03.080 "reset": true, 00:11:03.080 "nvme_admin": false, 00:11:03.080 "nvme_io": false, 00:11:03.080 "nvme_io_md": false, 00:11:03.080 "write_zeroes": true, 00:11:03.080 "zcopy": true, 00:11:03.080 "get_zone_info": false, 00:11:03.080 "zone_management": false, 00:11:03.080 "zone_append": false, 00:11:03.080 "compare": false, 00:11:03.080 "compare_and_write": false, 00:11:03.080 "abort": true, 00:11:03.080 "seek_hole": false, 00:11:03.080 "seek_data": false, 00:11:03.080 "copy": true, 00:11:03.080 "nvme_iov_md": false 00:11:03.080 }, 00:11:03.080 "memory_domains": [ 00:11:03.080 { 00:11:03.080 "dma_device_id": "system", 00:11:03.080 "dma_device_type": 1 00:11:03.080 }, 00:11:03.080 { 00:11:03.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.080 "dma_device_type": 2 00:11:03.080 } 00:11:03.080 ], 00:11:03.080 "driver_specific": {} 00:11:03.080 }' 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.080 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:11:03.339 14:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:03.597 "name": "BaseBdev3", 00:11:03.597 "aliases": [ 00:11:03.597 "4ffc39ec-405f-11ef-b2a4-e9dca065e82e" 00:11:03.597 ], 00:11:03.597 "product_name": "Malloc disk", 00:11:03.597 "block_size": 512, 00:11:03.597 "num_blocks": 65536, 00:11:03.597 "uuid": "4ffc39ec-405f-11ef-b2a4-e9dca065e82e", 00:11:03.597 "assigned_rate_limits": { 00:11:03.597 "rw_ios_per_sec": 0, 00:11:03.597 "rw_mbytes_per_sec": 0, 00:11:03.597 "r_mbytes_per_sec": 0, 00:11:03.597 "w_mbytes_per_sec": 0 00:11:03.597 }, 00:11:03.597 "claimed": true, 00:11:03.597 "claim_type": "exclusive_write", 00:11:03.597 "zoned": false, 00:11:03.597 "supported_io_types": { 00:11:03.597 "read": true, 00:11:03.597 "write": true, 00:11:03.597 "unmap": true, 00:11:03.597 "flush": true, 00:11:03.597 "reset": true, 00:11:03.597 "nvme_admin": false, 00:11:03.597 "nvme_io": false, 00:11:03.597 "nvme_io_md": false, 00:11:03.597 "write_zeroes": true, 00:11:03.597 "zcopy": true, 00:11:03.597 "get_zone_info": false, 00:11:03.597 "zone_management": false, 00:11:03.597 "zone_append": false, 00:11:03.597 "compare": false, 00:11:03.597 "compare_and_write": false, 00:11:03.597 "abort": true, 00:11:03.597 "seek_hole": false, 00:11:03.597 "seek_data": false, 00:11:03.597 "copy": true, 00:11:03.597 "nvme_iov_md": false 00:11:03.597 }, 00:11:03.597 "memory_domains": [ 00:11:03.597 { 00:11:03.597 "dma_device_id": "system", 00:11:03.597 "dma_device_type": 1 00:11:03.597 }, 00:11:03.597 { 00:11:03.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.597 "dma_device_type": 2 00:11:03.597 } 00:11:03.597 ], 00:11:03.597 "driver_specific": {} 00:11:03.597 }' 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:03.597 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:03.856 [2024-07-12 14:59:29.459822] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.856 [2024-07-12 14:59:29.459847] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.856 [2024-07-12 14:59:29.459868] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.856 [2024-07-12 14:59:29.459881] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.856 [2024-07-12 14:59:29.459885] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x39ac89634a00 name Existed_Raid, state offline 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54031 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 54031 ']' 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 54031 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 54031 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:03.856 killing process with pid 54031 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54031' 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 54031 00:11:03.856 [2024-07-12 14:59:29.487268] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 54031 00:11:03.856 [2024-07-12 14:59:29.504119] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:03.856 00:11:03.856 real 0m23.342s 00:11:03.856 user 0m42.779s 00:11:03.856 sys 0m3.068s 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.856 ************************************ 00:11:03.856 14:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.856 END TEST raid_state_function_test 00:11:03.856 ************************************ 00:11:04.115 14:59:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:04.115 14:59:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:04.115 14:59:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:04.115 14:59:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.115 14:59:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.115 ************************************ 00:11:04.115 START TEST raid_state_function_test_sb 00:11:04.115 ************************************ 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:04.116 14:59:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54756 00:11:04.116 Process raid pid: 54756 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54756' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54756 /var/tmp/spdk-raid.sock 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54756 ']' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.116 14:59:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.116 [2024-07-12 14:59:29.736382] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:11:04.116 [2024-07-12 14:59:29.736557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:04.684 EAL: TSC is not safe to use in SMP mode 00:11:04.684 EAL: TSC is not invariant 00:11:04.684 [2024-07-12 14:59:30.280197] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.684 [2024-07-12 14:59:30.363028] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:04.684 [2024-07-12 14:59:30.365106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.684 [2024-07-12 14:59:30.365860] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.684 [2024-07-12 14:59:30.365875] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.251 14:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.251 14:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:05.251 14:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:05.251 [2024-07-12 14:59:31.053535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.251 [2024-07-12 14:59:31.053606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.251 [2024-07-12 14:59:31.053612] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.251 [2024-07-12 14:59:31.053637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.251 [2024-07-12 14:59:31.053641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.251 [2024-07-12 14:59:31.053649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:05.251 
14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.251 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.818 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.818 "name": "Existed_Raid", 00:11:05.818 "uuid": "57c77675-405f-11ef-b2a4-e9dca065e82e", 00:11:05.818 "strip_size_kb": 64, 00:11:05.818 "state": "configuring", 00:11:05.818 "raid_level": "concat", 00:11:05.818 "superblock": true, 00:11:05.818 "num_base_bdevs": 3, 00:11:05.818 "num_base_bdevs_discovered": 0, 00:11:05.818 "num_base_bdevs_operational": 3, 00:11:05.818 "base_bdevs_list": [ 00:11:05.818 { 00:11:05.818 "name": "BaseBdev1", 00:11:05.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.818 "is_configured": false, 00:11:05.818 "data_offset": 0, 00:11:05.818 "data_size": 0 00:11:05.818 }, 00:11:05.818 { 00:11:05.818 "name": "BaseBdev2", 00:11:05.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.818 "is_configured": false, 00:11:05.818 "data_offset": 0, 00:11:05.818 "data_size": 0 00:11:05.818 }, 00:11:05.818 { 00:11:05.818 "name": "BaseBdev3", 00:11:05.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.818 "is_configured": false, 00:11:05.818 "data_offset": 0, 00:11:05.818 "data_size": 0 00:11:05.818 } 00:11:05.818 ] 00:11:05.818 }' 00:11:05.818 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.818 14:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.076 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:06.335 [2024-07-12 14:59:31.937561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.335 [2024-07-12 14:59:31.937589] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d2b2234500 name Existed_Raid, state configuring 00:11:06.335 14:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:06.594 [2024-07-12 14:59:32.173604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.594 [2024-07-12 14:59:32.173670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.594 [2024-07-12 14:59:32.173676] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.594 [2024-07-12 14:59:32.173685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.594 [2024-07-12 14:59:32.173689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.594 
[2024-07-12 14:59:32.173696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.594 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.594 [2024-07-12 14:59:32.410615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.594 BaseBdev1 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:06.853 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.112 [ 00:11:07.112 { 00:11:07.112 "name": "BaseBdev1", 00:11:07.112 "aliases": [ 00:11:07.112 "58966261-405f-11ef-b2a4-e9dca065e82e" 00:11:07.112 ], 00:11:07.112 "product_name": "Malloc disk", 00:11:07.112 "block_size": 512, 00:11:07.112 "num_blocks": 65536, 00:11:07.112 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:07.112 "assigned_rate_limits": { 00:11:07.112 "rw_ios_per_sec": 0, 00:11:07.112 "rw_mbytes_per_sec": 0, 00:11:07.112 "r_mbytes_per_sec": 0, 00:11:07.112 "w_mbytes_per_sec": 0 00:11:07.112 }, 00:11:07.112 "claimed": true, 00:11:07.112 "claim_type": "exclusive_write", 00:11:07.112 "zoned": false, 00:11:07.112 "supported_io_types": { 00:11:07.112 "read": true, 00:11:07.112 "write": true, 00:11:07.112 "unmap": true, 00:11:07.112 "flush": true, 00:11:07.112 "reset": true, 00:11:07.112 "nvme_admin": false, 00:11:07.112 "nvme_io": false, 00:11:07.112 "nvme_io_md": false, 00:11:07.112 "write_zeroes": true, 00:11:07.112 "zcopy": true, 00:11:07.112 "get_zone_info": false, 00:11:07.112 "zone_management": false, 00:11:07.112 "zone_append": false, 00:11:07.112 "compare": false, 00:11:07.112 "compare_and_write": false, 00:11:07.112 "abort": true, 00:11:07.112 "seek_hole": false, 00:11:07.112 "seek_data": false, 00:11:07.112 "copy": true, 00:11:07.112 "nvme_iov_md": false 00:11:07.112 }, 00:11:07.112 "memory_domains": [ 00:11:07.112 { 00:11:07.112 "dma_device_id": "system", 00:11:07.112 "dma_device_type": 1 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.112 "dma_device_type": 2 00:11:07.112 } 00:11:07.112 ], 00:11:07.112 "driver_specific": {} 00:11:07.112 } 00:11:07.112 ] 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.112 14:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.371 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.371 "name": "Existed_Raid", 00:11:07.371 "uuid": "58725f33-405f-11ef-b2a4-e9dca065e82e", 00:11:07.371 "strip_size_kb": 64, 00:11:07.371 "state": "configuring", 00:11:07.371 "raid_level": "concat", 00:11:07.371 "superblock": true, 00:11:07.371 "num_base_bdevs": 3, 00:11:07.371 "num_base_bdevs_discovered": 1, 00:11:07.371 "num_base_bdevs_operational": 3, 00:11:07.371 "base_bdevs_list": [ 00:11:07.371 { 00:11:07.371 "name": "BaseBdev1", 00:11:07.371 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:07.371 "is_configured": true, 00:11:07.371 "data_offset": 2048, 00:11:07.371 "data_size": 63488 00:11:07.371 }, 00:11:07.371 { 00:11:07.371 "name": "BaseBdev2", 00:11:07.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.371 "is_configured": false, 00:11:07.371 "data_offset": 0, 00:11:07.371 "data_size": 0 00:11:07.371 }, 00:11:07.371 { 00:11:07.371 "name": "BaseBdev3", 00:11:07.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.371 "is_configured": false, 00:11:07.371 "data_offset": 0, 00:11:07.371 "data_size": 0 00:11:07.371 } 00:11:07.371 ] 00:11:07.371 }' 00:11:07.371 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.371 14:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.937 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:07.937 [2024-07-12 14:59:33.709672] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.937 [2024-07-12 14:59:33.709708] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d2b2234500 name Existed_Raid, state configuring 00:11:07.937 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:08.193 [2024-07-12 14:59:33.933695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.193 [2024-07-12 
14:59:33.934511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.193 [2024-07-12 14:59:33.934549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.193 [2024-07-12 14:59:33.934555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.193 [2024-07-12 14:59:33.934563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.193 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.194 14:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.451 14:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.451 "name": "Existed_Raid", 00:11:08.451 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:08.451 "strip_size_kb": 64, 00:11:08.451 "state": "configuring", 00:11:08.451 "raid_level": "concat", 00:11:08.451 "superblock": true, 00:11:08.451 "num_base_bdevs": 3, 00:11:08.451 "num_base_bdevs_discovered": 1, 00:11:08.451 "num_base_bdevs_operational": 3, 00:11:08.451 "base_bdevs_list": [ 00:11:08.451 { 00:11:08.451 "name": "BaseBdev1", 00:11:08.451 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:08.451 "is_configured": true, 00:11:08.451 "data_offset": 2048, 00:11:08.451 "data_size": 63488 00:11:08.451 }, 00:11:08.451 { 00:11:08.451 "name": "BaseBdev2", 00:11:08.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.451 "is_configured": false, 00:11:08.451 "data_offset": 0, 00:11:08.451 "data_size": 0 00:11:08.451 }, 00:11:08.451 { 00:11:08.451 "name": "BaseBdev3", 00:11:08.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.451 "is_configured": false, 00:11:08.451 "data_offset": 0, 00:11:08.451 "data_size": 0 00:11:08.451 } 00:11:08.451 ] 00:11:08.451 }' 00:11:08.451 14:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.451 
14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.708 14:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.991 [2024-07-12 14:59:34.793863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.991 BaseBdev2 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:08.991 14:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:09.556 14:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.556 [ 00:11:09.556 { 00:11:09.556 "name": "BaseBdev2", 00:11:09.556 "aliases": [ 00:11:09.556 "5a022c49-405f-11ef-b2a4-e9dca065e82e" 00:11:09.556 ], 00:11:09.556 "product_name": "Malloc disk", 00:11:09.556 "block_size": 512, 00:11:09.556 "num_blocks": 65536, 00:11:09.556 "uuid": "5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:09.556 "assigned_rate_limits": { 00:11:09.556 "rw_ios_per_sec": 0, 00:11:09.556 "rw_mbytes_per_sec": 0, 00:11:09.556 "r_mbytes_per_sec": 0, 00:11:09.556 "w_mbytes_per_sec": 0 00:11:09.556 }, 00:11:09.556 "claimed": true, 00:11:09.556 "claim_type": "exclusive_write", 00:11:09.556 "zoned": false, 00:11:09.556 "supported_io_types": { 00:11:09.556 "read": true, 00:11:09.556 "write": true, 00:11:09.556 "unmap": true, 00:11:09.556 "flush": true, 00:11:09.556 "reset": true, 00:11:09.556 "nvme_admin": false, 00:11:09.556 "nvme_io": false, 00:11:09.556 "nvme_io_md": false, 00:11:09.556 "write_zeroes": true, 00:11:09.556 "zcopy": true, 00:11:09.556 "get_zone_info": false, 00:11:09.556 "zone_management": false, 00:11:09.556 "zone_append": false, 00:11:09.556 "compare": false, 00:11:09.556 "compare_and_write": false, 00:11:09.556 "abort": true, 00:11:09.556 "seek_hole": false, 00:11:09.556 "seek_data": false, 00:11:09.556 "copy": true, 00:11:09.556 "nvme_iov_md": false 00:11:09.556 }, 00:11:09.556 "memory_domains": [ 00:11:09.556 { 00:11:09.556 "dma_device_id": "system", 00:11:09.556 "dma_device_type": 1 00:11:09.556 }, 00:11:09.556 { 00:11:09.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.556 "dma_device_type": 2 00:11:09.556 } 00:11:09.556 ], 00:11:09.556 "driver_specific": {} 00:11:09.556 } 00:11:09.556 ] 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:09.557 14:59:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.557 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.814 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.814 "name": "Existed_Raid", 00:11:09.814 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:09.814 "strip_size_kb": 64, 00:11:09.814 "state": "configuring", 00:11:09.814 "raid_level": "concat", 00:11:09.814 "superblock": true, 00:11:09.814 "num_base_bdevs": 3, 00:11:09.814 "num_base_bdevs_discovered": 2, 00:11:09.814 "num_base_bdevs_operational": 3, 00:11:09.814 "base_bdevs_list": [ 00:11:09.814 { 00:11:09.814 "name": "BaseBdev1", 00:11:09.815 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:09.815 "is_configured": true, 00:11:09.815 "data_offset": 2048, 00:11:09.815 "data_size": 63488 00:11:09.815 }, 00:11:09.815 { 00:11:09.815 "name": "BaseBdev2", 00:11:09.815 "uuid": "5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:09.815 "is_configured": true, 00:11:09.815 "data_offset": 2048, 00:11:09.815 "data_size": 63488 00:11:09.815 }, 00:11:09.815 { 00:11:09.815 "name": "BaseBdev3", 00:11:09.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.815 "is_configured": false, 00:11:09.815 "data_offset": 0, 00:11:09.815 "data_size": 0 00:11:09.815 } 00:11:09.815 ] 00:11:09.815 }' 00:11:09.815 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.815 14:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.072 14:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:10.330 [2024-07-12 14:59:36.093921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.330 [2024-07-12 14:59:36.093988] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18d2b2234a00 00:11:10.330 [2024-07-12 14:59:36.093995] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:10.330 [2024-07-12 14:59:36.094017] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d2b2297e20 00:11:10.330 [2024-07-12 14:59:36.094080] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18d2b2234a00 00:11:10.330 [2024-07-12 14:59:36.094085] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18d2b2234a00 00:11:10.330 [2024-07-12 14:59:36.094106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.330 BaseBdev3 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:10.330 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:10.588 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:11.155 [ 00:11:11.155 { 00:11:11.155 "name": "BaseBdev3", 00:11:11.155 "aliases": [ 00:11:11.155 "5ac88bf4-405f-11ef-b2a4-e9dca065e82e" 00:11:11.155 ], 00:11:11.155 "product_name": "Malloc disk", 00:11:11.155 "block_size": 512, 00:11:11.155 "num_blocks": 65536, 00:11:11.155 "uuid": "5ac88bf4-405f-11ef-b2a4-e9dca065e82e", 00:11:11.155 "assigned_rate_limits": { 00:11:11.155 "rw_ios_per_sec": 0, 00:11:11.155 "rw_mbytes_per_sec": 0, 00:11:11.155 "r_mbytes_per_sec": 0, 00:11:11.155 "w_mbytes_per_sec": 0 00:11:11.155 }, 00:11:11.155 "claimed": true, 00:11:11.155 "claim_type": "exclusive_write", 00:11:11.155 "zoned": false, 00:11:11.155 "supported_io_types": { 00:11:11.155 "read": true, 00:11:11.155 "write": true, 00:11:11.155 "unmap": true, 00:11:11.155 "flush": true, 00:11:11.155 "reset": true, 00:11:11.155 "nvme_admin": false, 00:11:11.155 "nvme_io": false, 00:11:11.155 "nvme_io_md": false, 00:11:11.155 "write_zeroes": true, 00:11:11.155 "zcopy": true, 00:11:11.155 "get_zone_info": false, 00:11:11.155 "zone_management": false, 00:11:11.155 "zone_append": false, 00:11:11.155 "compare": false, 00:11:11.155 "compare_and_write": false, 00:11:11.155 "abort": true, 00:11:11.155 "seek_hole": false, 00:11:11.155 "seek_data": false, 00:11:11.155 "copy": true, 00:11:11.155 "nvme_iov_md": false 00:11:11.155 }, 00:11:11.155 "memory_domains": [ 00:11:11.155 { 00:11:11.155 "dma_device_id": "system", 00:11:11.155 "dma_device_type": 1 00:11:11.155 }, 00:11:11.155 { 00:11:11.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.155 "dma_device_type": 2 00:11:11.155 } 00:11:11.155 ], 00:11:11.155 "driver_specific": {} 00:11:11.155 } 00:11:11.155 ] 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.155 "name": "Existed_Raid", 00:11:11.155 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:11.155 "strip_size_kb": 64, 00:11:11.155 "state": "online", 00:11:11.155 "raid_level": "concat", 00:11:11.155 "superblock": true, 00:11:11.155 "num_base_bdevs": 3, 00:11:11.155 "num_base_bdevs_discovered": 3, 00:11:11.155 "num_base_bdevs_operational": 3, 00:11:11.155 "base_bdevs_list": [ 00:11:11.155 { 00:11:11.155 "name": "BaseBdev1", 00:11:11.155 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:11.155 "is_configured": true, 00:11:11.155 "data_offset": 2048, 00:11:11.155 "data_size": 63488 00:11:11.155 }, 00:11:11.155 { 00:11:11.155 "name": "BaseBdev2", 00:11:11.155 "uuid": "5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:11.155 "is_configured": true, 00:11:11.155 "data_offset": 2048, 00:11:11.155 "data_size": 63488 00:11:11.155 }, 00:11:11.155 { 00:11:11.155 "name": "BaseBdev3", 00:11:11.155 "uuid": "5ac88bf4-405f-11ef-b2a4-e9dca065e82e", 00:11:11.155 "is_configured": true, 00:11:11.155 "data_offset": 2048, 00:11:11.155 "data_size": 63488 00:11:11.155 } 00:11:11.155 ] 00:11:11.155 }' 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.155 14:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:11.721 14:59:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:11.721 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:11.979 [2024-07-12 14:59:37.621755] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:11.979 "name": "Existed_Raid", 00:11:11.979 "aliases": [ 00:11:11.979 "597ef093-405f-11ef-b2a4-e9dca065e82e" 00:11:11.979 ], 00:11:11.979 "product_name": "Raid Volume", 00:11:11.979 "block_size": 512, 00:11:11.979 "num_blocks": 190464, 00:11:11.979 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:11.979 "assigned_rate_limits": { 00:11:11.979 "rw_ios_per_sec": 0, 00:11:11.979 "rw_mbytes_per_sec": 0, 00:11:11.979 "r_mbytes_per_sec": 0, 00:11:11.979 "w_mbytes_per_sec": 0 00:11:11.979 }, 00:11:11.979 "claimed": false, 00:11:11.979 "zoned": false, 00:11:11.979 "supported_io_types": { 00:11:11.979 "read": true, 00:11:11.979 "write": true, 00:11:11.979 "unmap": true, 00:11:11.979 "flush": true, 00:11:11.979 "reset": true, 00:11:11.979 "nvme_admin": false, 00:11:11.979 "nvme_io": false, 00:11:11.979 "nvme_io_md": false, 00:11:11.979 "write_zeroes": true, 00:11:11.979 "zcopy": false, 00:11:11.979 "get_zone_info": false, 00:11:11.979 "zone_management": false, 00:11:11.979 "zone_append": false, 00:11:11.979 "compare": false, 00:11:11.979 "compare_and_write": false, 00:11:11.979 "abort": false, 00:11:11.979 "seek_hole": false, 00:11:11.979 "seek_data": false, 00:11:11.979 "copy": false, 00:11:11.979 "nvme_iov_md": false 00:11:11.979 }, 00:11:11.979 "memory_domains": [ 00:11:11.979 { 00:11:11.979 "dma_device_id": "system", 00:11:11.979 "dma_device_type": 1 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.979 "dma_device_type": 2 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "dma_device_id": "system", 00:11:11.979 "dma_device_type": 1 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.979 "dma_device_type": 2 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "dma_device_id": "system", 00:11:11.979 "dma_device_type": 1 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.979 "dma_device_type": 2 00:11:11.979 } 00:11:11.979 ], 00:11:11.979 "driver_specific": { 00:11:11.979 "raid": { 00:11:11.979 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:11.979 "strip_size_kb": 64, 00:11:11.979 "state": "online", 00:11:11.979 "raid_level": "concat", 00:11:11.979 "superblock": true, 00:11:11.979 "num_base_bdevs": 3, 00:11:11.979 "num_base_bdevs_discovered": 3, 00:11:11.979 "num_base_bdevs_operational": 3, 00:11:11.979 "base_bdevs_list": [ 00:11:11.979 { 00:11:11.979 "name": "BaseBdev1", 00:11:11.979 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:11.979 "is_configured": true, 00:11:11.979 "data_offset": 2048, 00:11:11.979 "data_size": 63488 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "name": "BaseBdev2", 00:11:11.979 "uuid": "5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:11.979 "is_configured": true, 00:11:11.979 "data_offset": 2048, 00:11:11.979 "data_size": 63488 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "name": "BaseBdev3", 00:11:11.979 "uuid": 
"5ac88bf4-405f-11ef-b2a4-e9dca065e82e", 00:11:11.979 "is_configured": true, 00:11:11.979 "data_offset": 2048, 00:11:11.979 "data_size": 63488 00:11:11.979 } 00:11:11.979 ] 00:11:11.979 } 00:11:11.979 } 00:11:11.979 }' 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:11.979 BaseBdev2 00:11:11.979 BaseBdev3' 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:11.979 14:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:12.237 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.237 "name": "BaseBdev1", 00:11:12.237 "aliases": [ 00:11:12.237 "58966261-405f-11ef-b2a4-e9dca065e82e" 00:11:12.237 ], 00:11:12.237 "product_name": "Malloc disk", 00:11:12.237 "block_size": 512, 00:11:12.237 "num_blocks": 65536, 00:11:12.237 "uuid": "58966261-405f-11ef-b2a4-e9dca065e82e", 00:11:12.237 "assigned_rate_limits": { 00:11:12.237 "rw_ios_per_sec": 0, 00:11:12.237 "rw_mbytes_per_sec": 0, 00:11:12.237 "r_mbytes_per_sec": 0, 00:11:12.237 "w_mbytes_per_sec": 0 00:11:12.237 }, 00:11:12.237 "claimed": true, 00:11:12.237 "claim_type": "exclusive_write", 00:11:12.237 "zoned": false, 00:11:12.237 "supported_io_types": { 00:11:12.237 "read": true, 00:11:12.237 "write": true, 00:11:12.237 "unmap": true, 00:11:12.237 "flush": true, 00:11:12.237 "reset": true, 00:11:12.237 "nvme_admin": false, 00:11:12.237 "nvme_io": false, 00:11:12.237 "nvme_io_md": false, 00:11:12.237 "write_zeroes": true, 00:11:12.237 "zcopy": true, 00:11:12.237 "get_zone_info": false, 00:11:12.237 "zone_management": false, 00:11:12.237 "zone_append": false, 00:11:12.237 "compare": false, 00:11:12.237 "compare_and_write": false, 00:11:12.237 "abort": true, 00:11:12.237 "seek_hole": false, 00:11:12.237 "seek_data": false, 00:11:12.237 "copy": true, 00:11:12.237 "nvme_iov_md": false 00:11:12.237 }, 00:11:12.237 "memory_domains": [ 00:11:12.237 { 00:11:12.237 "dma_device_id": "system", 00:11:12.237 "dma_device_type": 1 00:11:12.237 }, 00:11:12.237 { 00:11:12.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.238 "dma_device_type": 2 00:11:12.238 } 00:11:12.238 ], 00:11:12.238 "driver_specific": {} 00:11:12.238 }' 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.238 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.497 
14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.497 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.755 "name": "BaseBdev2", 00:11:12.755 "aliases": [ 00:11:12.755 "5a022c49-405f-11ef-b2a4-e9dca065e82e" 00:11:12.755 ], 00:11:12.755 "product_name": "Malloc disk", 00:11:12.755 "block_size": 512, 00:11:12.755 "num_blocks": 65536, 00:11:12.755 "uuid": "5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:12.755 "assigned_rate_limits": { 00:11:12.755 "rw_ios_per_sec": 0, 00:11:12.755 "rw_mbytes_per_sec": 0, 00:11:12.755 "r_mbytes_per_sec": 0, 00:11:12.755 "w_mbytes_per_sec": 0 00:11:12.755 }, 00:11:12.755 "claimed": true, 00:11:12.755 "claim_type": "exclusive_write", 00:11:12.755 "zoned": false, 00:11:12.755 "supported_io_types": { 00:11:12.755 "read": true, 00:11:12.755 "write": true, 00:11:12.755 "unmap": true, 00:11:12.755 "flush": true, 00:11:12.755 "reset": true, 00:11:12.755 "nvme_admin": false, 00:11:12.755 "nvme_io": false, 00:11:12.755 "nvme_io_md": false, 00:11:12.755 "write_zeroes": true, 00:11:12.755 "zcopy": true, 00:11:12.755 "get_zone_info": false, 00:11:12.755 "zone_management": false, 00:11:12.755 "zone_append": false, 00:11:12.755 "compare": false, 00:11:12.755 "compare_and_write": false, 00:11:12.755 "abort": true, 00:11:12.755 "seek_hole": false, 00:11:12.755 "seek_data": false, 00:11:12.755 "copy": true, 00:11:12.755 "nvme_iov_md": false 00:11:12.755 }, 00:11:12.755 "memory_domains": [ 00:11:12.755 { 00:11:12.755 "dma_device_id": "system", 00:11:12.755 "dma_device_type": 1 00:11:12.755 }, 00:11:12.755 { 00:11:12.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.755 "dma_device_type": 2 00:11:12.755 } 00:11:12.755 ], 00:11:12.755 "driver_specific": {} 00:11:12.755 }' 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.755 14:59:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:12.755 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.013 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.013 "name": "BaseBdev3", 00:11:13.013 "aliases": [ 00:11:13.013 "5ac88bf4-405f-11ef-b2a4-e9dca065e82e" 00:11:13.013 ], 00:11:13.013 "product_name": "Malloc disk", 00:11:13.013 "block_size": 512, 00:11:13.013 "num_blocks": 65536, 00:11:13.013 "uuid": "5ac88bf4-405f-11ef-b2a4-e9dca065e82e", 00:11:13.013 "assigned_rate_limits": { 00:11:13.013 "rw_ios_per_sec": 0, 00:11:13.013 "rw_mbytes_per_sec": 0, 00:11:13.013 "r_mbytes_per_sec": 0, 00:11:13.013 "w_mbytes_per_sec": 0 00:11:13.013 }, 00:11:13.013 "claimed": true, 00:11:13.013 "claim_type": "exclusive_write", 00:11:13.013 "zoned": false, 00:11:13.013 "supported_io_types": { 00:11:13.014 "read": true, 00:11:13.014 "write": true, 00:11:13.014 "unmap": true, 00:11:13.014 "flush": true, 00:11:13.014 "reset": true, 00:11:13.014 "nvme_admin": false, 00:11:13.014 "nvme_io": false, 00:11:13.014 "nvme_io_md": false, 00:11:13.014 "write_zeroes": true, 00:11:13.014 "zcopy": true, 00:11:13.014 "get_zone_info": false, 00:11:13.014 "zone_management": false, 00:11:13.014 "zone_append": false, 00:11:13.014 "compare": false, 00:11:13.014 "compare_and_write": false, 00:11:13.014 "abort": true, 00:11:13.014 "seek_hole": false, 00:11:13.014 "seek_data": false, 00:11:13.014 "copy": true, 00:11:13.014 "nvme_iov_md": false 00:11:13.014 }, 00:11:13.014 "memory_domains": [ 00:11:13.014 { 00:11:13.014 "dma_device_id": "system", 00:11:13.014 "dma_device_type": 1 00:11:13.014 }, 00:11:13.014 { 00:11:13.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.014 "dma_device_type": 2 00:11:13.014 } 00:11:13.014 ], 00:11:13.014 "driver_specific": {} 00:11:13.014 }' 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.014 14:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:13.272 [2024-07-12 14:59:39.073513] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.272 [2024-07-12 14:59:39.073546] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.272 [2024-07-12 14:59:39.073573] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.272 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.530 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.530 "name": "Existed_Raid", 00:11:13.530 "uuid": "597ef093-405f-11ef-b2a4-e9dca065e82e", 00:11:13.530 "strip_size_kb": 64, 00:11:13.530 "state": "offline", 00:11:13.530 "raid_level": "concat", 00:11:13.530 "superblock": true, 00:11:13.530 "num_base_bdevs": 3, 00:11:13.530 "num_base_bdevs_discovered": 2, 00:11:13.530 "num_base_bdevs_operational": 2, 00:11:13.530 "base_bdevs_list": [ 00:11:13.530 { 00:11:13.530 "name": null, 00:11:13.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.530 "is_configured": false, 00:11:13.530 "data_offset": 2048, 00:11:13.530 "data_size": 63488 00:11:13.530 }, 00:11:13.530 { 00:11:13.530 "name": "BaseBdev2", 00:11:13.530 "uuid": 
"5a022c49-405f-11ef-b2a4-e9dca065e82e", 00:11:13.530 "is_configured": true, 00:11:13.530 "data_offset": 2048, 00:11:13.530 "data_size": 63488 00:11:13.530 }, 00:11:13.530 { 00:11:13.530 "name": "BaseBdev3", 00:11:13.530 "uuid": "5ac88bf4-405f-11ef-b2a4-e9dca065e82e", 00:11:13.530 "is_configured": true, 00:11:13.530 "data_offset": 2048, 00:11:13.530 "data_size": 63488 00:11:13.530 } 00:11:13.530 ] 00:11:13.530 }' 00:11:13.530 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.530 14:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:14.097 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:14.097 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:14.097 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.355 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:14.355 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.355 14:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:14.613 [2024-07-12 14:59:40.251331] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.613 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:14.613 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:14.613 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:14.613 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.872 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:14.872 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.872 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:15.131 [2024-07-12 14:59:40.733129] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.131 [2024-07-12 14:59:40.733166] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d2b2234a00 name Existed_Raid, state offline 00:11:15.131 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:15.131 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:15.131 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.131 14:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.389 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:15.389 14:59:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:15.389 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:15.389 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:15.389 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:15.389 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.647 BaseBdev2 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:15.647 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:15.906 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.197 [ 00:11:16.197 { 00:11:16.197 "name": "BaseBdev2", 00:11:16.197 "aliases": [ 00:11:16.197 "5ddd44ac-405f-11ef-b2a4-e9dca065e82e" 00:11:16.197 ], 00:11:16.197 "product_name": "Malloc disk", 00:11:16.197 "block_size": 512, 00:11:16.198 "num_blocks": 65536, 00:11:16.198 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:16.198 "assigned_rate_limits": { 00:11:16.198 "rw_ios_per_sec": 0, 00:11:16.198 "rw_mbytes_per_sec": 0, 00:11:16.198 "r_mbytes_per_sec": 0, 00:11:16.198 "w_mbytes_per_sec": 0 00:11:16.198 }, 00:11:16.198 "claimed": false, 00:11:16.198 "zoned": false, 00:11:16.198 "supported_io_types": { 00:11:16.198 "read": true, 00:11:16.198 "write": true, 00:11:16.198 "unmap": true, 00:11:16.198 "flush": true, 00:11:16.198 "reset": true, 00:11:16.198 "nvme_admin": false, 00:11:16.198 "nvme_io": false, 00:11:16.198 "nvme_io_md": false, 00:11:16.198 "write_zeroes": true, 00:11:16.198 "zcopy": true, 00:11:16.198 "get_zone_info": false, 00:11:16.198 "zone_management": false, 00:11:16.198 "zone_append": false, 00:11:16.198 "compare": false, 00:11:16.198 "compare_and_write": false, 00:11:16.198 "abort": true, 00:11:16.198 "seek_hole": false, 00:11:16.198 "seek_data": false, 00:11:16.198 "copy": true, 00:11:16.198 "nvme_iov_md": false 00:11:16.198 }, 00:11:16.198 "memory_domains": [ 00:11:16.198 { 00:11:16.198 "dma_device_id": "system", 00:11:16.198 "dma_device_type": 1 00:11:16.198 }, 00:11:16.198 { 00:11:16.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.198 "dma_device_type": 2 00:11:16.198 } 00:11:16.198 ], 00:11:16.198 "driver_specific": {} 00:11:16.198 } 00:11:16.198 ] 00:11:16.198 14:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:16.198 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:16.198 14:59:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:16.198 14:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.471 BaseBdev3 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:16.471 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:16.730 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.987 [ 00:11:16.987 { 00:11:16.987 "name": "BaseBdev3", 00:11:16.987 "aliases": [ 00:11:16.987 "5e59c226-405f-11ef-b2a4-e9dca065e82e" 00:11:16.987 ], 00:11:16.987 "product_name": "Malloc disk", 00:11:16.987 "block_size": 512, 00:11:16.987 "num_blocks": 65536, 00:11:16.987 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:16.987 "assigned_rate_limits": { 00:11:16.987 "rw_ios_per_sec": 0, 00:11:16.987 "rw_mbytes_per_sec": 0, 00:11:16.987 "r_mbytes_per_sec": 0, 00:11:16.987 "w_mbytes_per_sec": 0 00:11:16.987 }, 00:11:16.987 "claimed": false, 00:11:16.987 "zoned": false, 00:11:16.987 "supported_io_types": { 00:11:16.987 "read": true, 00:11:16.987 "write": true, 00:11:16.987 "unmap": true, 00:11:16.987 "flush": true, 00:11:16.987 "reset": true, 00:11:16.987 "nvme_admin": false, 00:11:16.987 "nvme_io": false, 00:11:16.987 "nvme_io_md": false, 00:11:16.987 "write_zeroes": true, 00:11:16.987 "zcopy": true, 00:11:16.987 "get_zone_info": false, 00:11:16.987 "zone_management": false, 00:11:16.987 "zone_append": false, 00:11:16.987 "compare": false, 00:11:16.987 "compare_and_write": false, 00:11:16.987 "abort": true, 00:11:16.987 "seek_hole": false, 00:11:16.987 "seek_data": false, 00:11:16.987 "copy": true, 00:11:16.987 "nvme_iov_md": false 00:11:16.987 }, 00:11:16.987 "memory_domains": [ 00:11:16.987 { 00:11:16.987 "dma_device_id": "system", 00:11:16.987 "dma_device_type": 1 00:11:16.987 }, 00:11:16.987 { 00:11:16.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.987 "dma_device_type": 2 00:11:16.987 } 00:11:16.987 ], 00:11:16.987 "driver_specific": {} 00:11:16.987 } 00:11:16.987 ] 00:11:16.987 14:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:16.987 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:16.987 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:16.987 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:17.245 [2024-07-12 14:59:42.850488] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.245 [2024-07-12 14:59:42.850540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.245 [2024-07-12 14:59:42.850550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.245 [2024-07-12 14:59:42.851103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.245 14:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.503 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.503 "name": "Existed_Raid", 00:11:17.503 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:17.503 "strip_size_kb": 64, 00:11:17.503 "state": "configuring", 00:11:17.503 "raid_level": "concat", 00:11:17.503 "superblock": true, 00:11:17.503 "num_base_bdevs": 3, 00:11:17.503 "num_base_bdevs_discovered": 2, 00:11:17.503 "num_base_bdevs_operational": 3, 00:11:17.503 "base_bdevs_list": [ 00:11:17.503 { 00:11:17.503 "name": "BaseBdev1", 00:11:17.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.503 "is_configured": false, 00:11:17.503 "data_offset": 0, 00:11:17.503 "data_size": 0 00:11:17.503 }, 00:11:17.503 { 00:11:17.503 "name": "BaseBdev2", 00:11:17.503 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:17.503 "is_configured": true, 00:11:17.503 "data_offset": 2048, 00:11:17.503 "data_size": 63488 00:11:17.503 }, 00:11:17.503 { 00:11:17.503 "name": "BaseBdev3", 00:11:17.503 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:17.503 "is_configured": true, 00:11:17.503 "data_offset": 2048, 00:11:17.503 "data_size": 63488 00:11:17.503 } 00:11:17.503 ] 00:11:17.503 }' 00:11:17.503 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.503 14:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.761 14:59:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:18.019 [2024-07-12 14:59:43.666342] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.019 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.278 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.278 "name": "Existed_Raid", 00:11:18.278 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:18.278 "strip_size_kb": 64, 00:11:18.278 "state": "configuring", 00:11:18.278 "raid_level": "concat", 00:11:18.278 "superblock": true, 00:11:18.278 "num_base_bdevs": 3, 00:11:18.278 "num_base_bdevs_discovered": 1, 00:11:18.278 "num_base_bdevs_operational": 3, 00:11:18.278 "base_bdevs_list": [ 00:11:18.278 { 00:11:18.278 "name": "BaseBdev1", 00:11:18.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.278 "is_configured": false, 00:11:18.278 "data_offset": 0, 00:11:18.278 "data_size": 0 00:11:18.278 }, 00:11:18.278 { 00:11:18.278 "name": null, 00:11:18.278 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:18.278 "is_configured": false, 00:11:18.278 "data_offset": 2048, 00:11:18.278 "data_size": 63488 00:11:18.278 }, 00:11:18.278 { 00:11:18.278 "name": "BaseBdev3", 00:11:18.278 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:18.278 "is_configured": true, 00:11:18.278 "data_offset": 2048, 00:11:18.278 "data_size": 63488 00:11:18.278 } 00:11:18.278 ] 00:11:18.278 }' 00:11:18.278 14:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.278 14:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.536 14:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.536 14:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.794 14:59:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:18.794 14:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.053 [2024-07-12 14:59:44.838334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.053 BaseBdev1 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:19.053 14:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:19.312 14:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.571 [ 00:11:19.571 { 00:11:19.571 "name": "BaseBdev1", 00:11:19.571 "aliases": [ 00:11:19.571 "5ffed6ea-405f-11ef-b2a4-e9dca065e82e" 00:11:19.571 ], 00:11:19.571 "product_name": "Malloc disk", 00:11:19.571 "block_size": 512, 00:11:19.571 "num_blocks": 65536, 00:11:19.571 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:19.571 "assigned_rate_limits": { 00:11:19.571 "rw_ios_per_sec": 0, 00:11:19.571 "rw_mbytes_per_sec": 0, 00:11:19.571 "r_mbytes_per_sec": 0, 00:11:19.571 "w_mbytes_per_sec": 0 00:11:19.571 }, 00:11:19.571 "claimed": true, 00:11:19.571 "claim_type": "exclusive_write", 00:11:19.571 "zoned": false, 00:11:19.571 "supported_io_types": { 00:11:19.571 "read": true, 00:11:19.571 "write": true, 00:11:19.571 "unmap": true, 00:11:19.571 "flush": true, 00:11:19.571 "reset": true, 00:11:19.571 "nvme_admin": false, 00:11:19.571 "nvme_io": false, 00:11:19.571 "nvme_io_md": false, 00:11:19.571 "write_zeroes": true, 00:11:19.571 "zcopy": true, 00:11:19.571 "get_zone_info": false, 00:11:19.571 "zone_management": false, 00:11:19.571 "zone_append": false, 00:11:19.571 "compare": false, 00:11:19.571 "compare_and_write": false, 00:11:19.571 "abort": true, 00:11:19.571 "seek_hole": false, 00:11:19.571 "seek_data": false, 00:11:19.571 "copy": true, 00:11:19.571 "nvme_iov_md": false 00:11:19.571 }, 00:11:19.571 "memory_domains": [ 00:11:19.571 { 00:11:19.571 "dma_device_id": "system", 00:11:19.571 "dma_device_type": 1 00:11:19.571 }, 00:11:19.571 { 00:11:19.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.571 "dma_device_type": 2 00:11:19.571 } 00:11:19.571 ], 00:11:19.571 "driver_specific": {} 00:11:19.571 } 00:11:19.571 ] 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:19.571 14:59:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.571 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.138 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.138 "name": "Existed_Raid", 00:11:20.138 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:20.138 "strip_size_kb": 64, 00:11:20.138 "state": "configuring", 00:11:20.138 "raid_level": "concat", 00:11:20.138 "superblock": true, 00:11:20.138 "num_base_bdevs": 3, 00:11:20.138 "num_base_bdevs_discovered": 2, 00:11:20.138 "num_base_bdevs_operational": 3, 00:11:20.138 "base_bdevs_list": [ 00:11:20.138 { 00:11:20.138 "name": "BaseBdev1", 00:11:20.138 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:20.138 "is_configured": true, 00:11:20.138 "data_offset": 2048, 00:11:20.138 "data_size": 63488 00:11:20.138 }, 00:11:20.138 { 00:11:20.138 "name": null, 00:11:20.138 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:20.138 "is_configured": false, 00:11:20.138 "data_offset": 2048, 00:11:20.138 "data_size": 63488 00:11:20.139 }, 00:11:20.139 { 00:11:20.139 "name": "BaseBdev3", 00:11:20.139 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:20.139 "is_configured": true, 00:11:20.139 "data_offset": 2048, 00:11:20.139 "data_size": 63488 00:11:20.139 } 00:11:20.139 ] 00:11:20.139 }' 00:11:20.139 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.139 14:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.396 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.396 14:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.654 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:20.654 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:20.911 [2024-07-12 14:59:46.653913] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:20.911 14:59:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.911 14:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.477 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.477 "name": "Existed_Raid", 00:11:21.477 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:21.477 "strip_size_kb": 64, 00:11:21.477 "state": "configuring", 00:11:21.477 "raid_level": "concat", 00:11:21.477 "superblock": true, 00:11:21.477 "num_base_bdevs": 3, 00:11:21.477 "num_base_bdevs_discovered": 1, 00:11:21.477 "num_base_bdevs_operational": 3, 00:11:21.477 "base_bdevs_list": [ 00:11:21.477 { 00:11:21.477 "name": "BaseBdev1", 00:11:21.477 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:21.477 "is_configured": true, 00:11:21.477 "data_offset": 2048, 00:11:21.477 "data_size": 63488 00:11:21.477 }, 00:11:21.477 { 00:11:21.477 "name": null, 00:11:21.477 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:21.477 "is_configured": false, 00:11:21.477 "data_offset": 2048, 00:11:21.477 "data_size": 63488 00:11:21.477 }, 00:11:21.477 { 00:11:21.477 "name": null, 00:11:21.477 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:21.477 "is_configured": false, 00:11:21.477 "data_offset": 2048, 00:11:21.477 "data_size": 63488 00:11:21.477 } 00:11:21.477 ] 00:11:21.477 }' 00:11:21.477 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.477 14:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.735 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.735 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.993 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:21.993 14:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.250 [2024-07-12 14:59:47.981764] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.250 14:59:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.250 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.815 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.815 "name": "Existed_Raid", 00:11:22.815 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:22.815 "strip_size_kb": 64, 00:11:22.815 "state": "configuring", 00:11:22.815 "raid_level": "concat", 00:11:22.815 "superblock": true, 00:11:22.815 "num_base_bdevs": 3, 00:11:22.815 "num_base_bdevs_discovered": 2, 00:11:22.815 "num_base_bdevs_operational": 3, 00:11:22.815 "base_bdevs_list": [ 00:11:22.815 { 00:11:22.815 "name": "BaseBdev1", 00:11:22.815 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:22.815 "is_configured": true, 00:11:22.815 "data_offset": 2048, 00:11:22.815 "data_size": 63488 00:11:22.815 }, 00:11:22.815 { 00:11:22.815 "name": null, 00:11:22.815 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:22.815 "is_configured": false, 00:11:22.815 "data_offset": 2048, 00:11:22.815 "data_size": 63488 00:11:22.815 }, 00:11:22.815 { 00:11:22.815 "name": "BaseBdev3", 00:11:22.815 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:22.815 "is_configured": true, 00:11:22.815 "data_offset": 2048, 00:11:22.815 "data_size": 63488 00:11:22.815 } 00:11:22.815 ] 00:11:22.815 }' 00:11:22.815 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.815 14:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.071 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.071 14:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.328 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:23.328 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:23.585 
[2024-07-12 14:59:49.333585] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.585 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.148 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.148 "name": "Existed_Raid", 00:11:24.148 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:24.148 "strip_size_kb": 64, 00:11:24.148 "state": "configuring", 00:11:24.148 "raid_level": "concat", 00:11:24.148 "superblock": true, 00:11:24.148 "num_base_bdevs": 3, 00:11:24.148 "num_base_bdevs_discovered": 1, 00:11:24.148 "num_base_bdevs_operational": 3, 00:11:24.148 "base_bdevs_list": [ 00:11:24.148 { 00:11:24.148 "name": null, 00:11:24.148 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:24.148 "is_configured": false, 00:11:24.148 "data_offset": 2048, 00:11:24.148 "data_size": 63488 00:11:24.148 }, 00:11:24.148 { 00:11:24.148 "name": null, 00:11:24.148 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:24.148 "is_configured": false, 00:11:24.148 "data_offset": 2048, 00:11:24.148 "data_size": 63488 00:11:24.148 }, 00:11:24.148 { 00:11:24.148 "name": "BaseBdev3", 00:11:24.148 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:24.148 "is_configured": true, 00:11:24.148 "data_offset": 2048, 00:11:24.148 "data_size": 63488 00:11:24.148 } 00:11:24.148 ] 00:11:24.148 }' 00:11:24.148 14:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.148 14:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.405 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.405 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.663 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:24.664 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:24.922 [2024-07-12 14:59:50.619293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.922 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.181 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.181 "name": "Existed_Raid", 00:11:25.181 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:25.181 "strip_size_kb": 64, 00:11:25.181 "state": "configuring", 00:11:25.181 "raid_level": "concat", 00:11:25.181 "superblock": true, 00:11:25.181 "num_base_bdevs": 3, 00:11:25.181 "num_base_bdevs_discovered": 2, 00:11:25.181 "num_base_bdevs_operational": 3, 00:11:25.181 "base_bdevs_list": [ 00:11:25.181 { 00:11:25.181 "name": null, 00:11:25.181 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:25.181 "is_configured": false, 00:11:25.181 "data_offset": 2048, 00:11:25.181 "data_size": 63488 00:11:25.181 }, 00:11:25.181 { 00:11:25.181 "name": "BaseBdev2", 00:11:25.181 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:25.181 "is_configured": true, 00:11:25.181 "data_offset": 2048, 00:11:25.181 "data_size": 63488 00:11:25.181 }, 00:11:25.181 { 00:11:25.181 "name": "BaseBdev3", 00:11:25.181 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:25.181 "is_configured": true, 00:11:25.181 "data_offset": 2048, 00:11:25.181 "data_size": 63488 00:11:25.181 } 00:11:25.181 ] 00:11:25.181 }' 00:11:25.181 14:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.181 14:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.505 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.505 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.762 14:59:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:25.762 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.762 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:26.019 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5ffed6ea-405f-11ef-b2a4-e9dca065e82e 00:11:26.275 [2024-07-12 14:59:51.951266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:26.275 [2024-07-12 14:59:51.951331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18d2b2234a00 00:11:26.275 [2024-07-12 14:59:51.951336] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:26.275 [2024-07-12 14:59:51.951357] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d2b2297e20 00:11:26.275 [2024-07-12 14:59:51.951405] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18d2b2234a00 00:11:26.275 [2024-07-12 14:59:51.951426] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18d2b2234a00 00:11:26.275 [2024-07-12 14:59:51.951451] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.275 NewBaseBdev 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:26.275 14:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:26.533 14:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:26.791 [ 00:11:26.791 { 00:11:26.791 "name": "NewBaseBdev", 00:11:26.791 "aliases": [ 00:11:26.791 "5ffed6ea-405f-11ef-b2a4-e9dca065e82e" 00:11:26.791 ], 00:11:26.791 "product_name": "Malloc disk", 00:11:26.791 "block_size": 512, 00:11:26.791 "num_blocks": 65536, 00:11:26.791 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:26.791 "assigned_rate_limits": { 00:11:26.791 "rw_ios_per_sec": 0, 00:11:26.791 "rw_mbytes_per_sec": 0, 00:11:26.791 "r_mbytes_per_sec": 0, 00:11:26.791 "w_mbytes_per_sec": 0 00:11:26.791 }, 00:11:26.791 "claimed": true, 00:11:26.791 "claim_type": "exclusive_write", 00:11:26.791 "zoned": false, 00:11:26.791 "supported_io_types": { 00:11:26.791 "read": true, 00:11:26.791 "write": true, 00:11:26.791 "unmap": true, 00:11:26.791 "flush": true, 00:11:26.791 "reset": true, 00:11:26.791 "nvme_admin": false, 00:11:26.791 "nvme_io": false, 00:11:26.791 "nvme_io_md": false, 00:11:26.791 
"write_zeroes": true, 00:11:26.791 "zcopy": true, 00:11:26.791 "get_zone_info": false, 00:11:26.791 "zone_management": false, 00:11:26.791 "zone_append": false, 00:11:26.791 "compare": false, 00:11:26.791 "compare_and_write": false, 00:11:26.791 "abort": true, 00:11:26.791 "seek_hole": false, 00:11:26.791 "seek_data": false, 00:11:26.791 "copy": true, 00:11:26.791 "nvme_iov_md": false 00:11:26.791 }, 00:11:26.791 "memory_domains": [ 00:11:26.791 { 00:11:26.791 "dma_device_id": "system", 00:11:26.791 "dma_device_type": 1 00:11:26.791 }, 00:11:26.791 { 00:11:26.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.791 "dma_device_type": 2 00:11:26.791 } 00:11:26.791 ], 00:11:26.791 "driver_specific": {} 00:11:26.791 } 00:11:26.791 ] 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.791 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.050 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.050 "name": "Existed_Raid", 00:11:27.050 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:27.050 "strip_size_kb": 64, 00:11:27.050 "state": "online", 00:11:27.050 "raid_level": "concat", 00:11:27.050 "superblock": true, 00:11:27.050 "num_base_bdevs": 3, 00:11:27.050 "num_base_bdevs_discovered": 3, 00:11:27.050 "num_base_bdevs_operational": 3, 00:11:27.050 "base_bdevs_list": [ 00:11:27.050 { 00:11:27.050 "name": "NewBaseBdev", 00:11:27.050 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:27.050 "is_configured": true, 00:11:27.050 "data_offset": 2048, 00:11:27.050 "data_size": 63488 00:11:27.050 }, 00:11:27.050 { 00:11:27.050 "name": "BaseBdev2", 00:11:27.050 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:27.050 "is_configured": true, 00:11:27.050 "data_offset": 2048, 00:11:27.050 "data_size": 63488 00:11:27.050 }, 00:11:27.050 { 00:11:27.050 "name": "BaseBdev3", 00:11:27.050 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:27.050 "is_configured": true, 00:11:27.050 "data_offset": 2048, 00:11:27.050 "data_size": 63488 00:11:27.050 } 00:11:27.050 ] 
00:11:27.050 }' 00:11:27.050 14:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.050 14:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:27.308 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:27.566 [2024-07-12 14:59:53.283029] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.566 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:27.566 "name": "Existed_Raid", 00:11:27.566 "aliases": [ 00:11:27.566 "5ecf88ed-405f-11ef-b2a4-e9dca065e82e" 00:11:27.566 ], 00:11:27.566 "product_name": "Raid Volume", 00:11:27.566 "block_size": 512, 00:11:27.566 "num_blocks": 190464, 00:11:27.566 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:27.566 "assigned_rate_limits": { 00:11:27.566 "rw_ios_per_sec": 0, 00:11:27.566 "rw_mbytes_per_sec": 0, 00:11:27.566 "r_mbytes_per_sec": 0, 00:11:27.566 "w_mbytes_per_sec": 0 00:11:27.566 }, 00:11:27.566 "claimed": false, 00:11:27.566 "zoned": false, 00:11:27.566 "supported_io_types": { 00:11:27.566 "read": true, 00:11:27.566 "write": true, 00:11:27.566 "unmap": true, 00:11:27.566 "flush": true, 00:11:27.566 "reset": true, 00:11:27.566 "nvme_admin": false, 00:11:27.566 "nvme_io": false, 00:11:27.566 "nvme_io_md": false, 00:11:27.566 "write_zeroes": true, 00:11:27.566 "zcopy": false, 00:11:27.566 "get_zone_info": false, 00:11:27.566 "zone_management": false, 00:11:27.566 "zone_append": false, 00:11:27.566 "compare": false, 00:11:27.566 "compare_and_write": false, 00:11:27.566 "abort": false, 00:11:27.566 "seek_hole": false, 00:11:27.566 "seek_data": false, 00:11:27.566 "copy": false, 00:11:27.566 "nvme_iov_md": false 00:11:27.566 }, 00:11:27.566 "memory_domains": [ 00:11:27.566 { 00:11:27.566 "dma_device_id": "system", 00:11:27.566 "dma_device_type": 1 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.566 "dma_device_type": 2 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "dma_device_id": "system", 00:11:27.566 "dma_device_type": 1 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.566 "dma_device_type": 2 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "dma_device_id": "system", 00:11:27.566 "dma_device_type": 1 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.566 "dma_device_type": 2 00:11:27.566 } 00:11:27.566 ], 00:11:27.566 "driver_specific": { 00:11:27.566 "raid": { 00:11:27.566 "uuid": "5ecf88ed-405f-11ef-b2a4-e9dca065e82e", 00:11:27.566 
"strip_size_kb": 64, 00:11:27.566 "state": "online", 00:11:27.566 "raid_level": "concat", 00:11:27.566 "superblock": true, 00:11:27.566 "num_base_bdevs": 3, 00:11:27.566 "num_base_bdevs_discovered": 3, 00:11:27.566 "num_base_bdevs_operational": 3, 00:11:27.566 "base_bdevs_list": [ 00:11:27.566 { 00:11:27.566 "name": "NewBaseBdev", 00:11:27.566 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:27.566 "is_configured": true, 00:11:27.566 "data_offset": 2048, 00:11:27.566 "data_size": 63488 00:11:27.566 }, 00:11:27.566 { 00:11:27.566 "name": "BaseBdev2", 00:11:27.566 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:27.566 "is_configured": true, 00:11:27.566 "data_offset": 2048, 00:11:27.567 "data_size": 63488 00:11:27.567 }, 00:11:27.567 { 00:11:27.567 "name": "BaseBdev3", 00:11:27.567 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:27.567 "is_configured": true, 00:11:27.567 "data_offset": 2048, 00:11:27.567 "data_size": 63488 00:11:27.567 } 00:11:27.567 ] 00:11:27.567 } 00:11:27.567 } 00:11:27.567 }' 00:11:27.567 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.567 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:27.567 BaseBdev2 00:11:27.567 BaseBdev3' 00:11:27.567 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:27.567 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:27.567 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:27.825 "name": "NewBaseBdev", 00:11:27.825 "aliases": [ 00:11:27.825 "5ffed6ea-405f-11ef-b2a4-e9dca065e82e" 00:11:27.825 ], 00:11:27.825 "product_name": "Malloc disk", 00:11:27.825 "block_size": 512, 00:11:27.825 "num_blocks": 65536, 00:11:27.825 "uuid": "5ffed6ea-405f-11ef-b2a4-e9dca065e82e", 00:11:27.825 "assigned_rate_limits": { 00:11:27.825 "rw_ios_per_sec": 0, 00:11:27.825 "rw_mbytes_per_sec": 0, 00:11:27.825 "r_mbytes_per_sec": 0, 00:11:27.825 "w_mbytes_per_sec": 0 00:11:27.825 }, 00:11:27.825 "claimed": true, 00:11:27.825 "claim_type": "exclusive_write", 00:11:27.825 "zoned": false, 00:11:27.825 "supported_io_types": { 00:11:27.825 "read": true, 00:11:27.825 "write": true, 00:11:27.825 "unmap": true, 00:11:27.825 "flush": true, 00:11:27.825 "reset": true, 00:11:27.825 "nvme_admin": false, 00:11:27.825 "nvme_io": false, 00:11:27.825 "nvme_io_md": false, 00:11:27.825 "write_zeroes": true, 00:11:27.825 "zcopy": true, 00:11:27.825 "get_zone_info": false, 00:11:27.825 "zone_management": false, 00:11:27.825 "zone_append": false, 00:11:27.825 "compare": false, 00:11:27.825 "compare_and_write": false, 00:11:27.825 "abort": true, 00:11:27.825 "seek_hole": false, 00:11:27.825 "seek_data": false, 00:11:27.825 "copy": true, 00:11:27.825 "nvme_iov_md": false 00:11:27.825 }, 00:11:27.825 "memory_domains": [ 00:11:27.825 { 00:11:27.825 "dma_device_id": "system", 00:11:27.825 "dma_device_type": 1 00:11:27.825 }, 00:11:27.825 { 00:11:27.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.825 "dma_device_type": 2 00:11:27.825 } 00:11:27.825 ], 00:11:27.825 "driver_specific": {} 00:11:27.825 }' 00:11:27.825 14:59:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:27.825 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.083 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.083 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.083 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.083 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:28.083 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.341 "name": "BaseBdev2", 00:11:28.341 "aliases": [ 00:11:28.341 "5ddd44ac-405f-11ef-b2a4-e9dca065e82e" 00:11:28.341 ], 00:11:28.341 "product_name": "Malloc disk", 00:11:28.341 "block_size": 512, 00:11:28.341 "num_blocks": 65536, 00:11:28.341 "uuid": "5ddd44ac-405f-11ef-b2a4-e9dca065e82e", 00:11:28.341 "assigned_rate_limits": { 00:11:28.341 "rw_ios_per_sec": 0, 00:11:28.341 "rw_mbytes_per_sec": 0, 00:11:28.341 "r_mbytes_per_sec": 0, 00:11:28.341 "w_mbytes_per_sec": 0 00:11:28.341 }, 00:11:28.341 "claimed": true, 00:11:28.341 "claim_type": "exclusive_write", 00:11:28.341 "zoned": false, 00:11:28.341 "supported_io_types": { 00:11:28.341 "read": true, 00:11:28.341 "write": true, 00:11:28.341 "unmap": true, 00:11:28.341 "flush": true, 00:11:28.341 "reset": true, 00:11:28.341 "nvme_admin": false, 00:11:28.341 "nvme_io": false, 00:11:28.341 "nvme_io_md": false, 00:11:28.341 "write_zeroes": true, 00:11:28.341 "zcopy": true, 00:11:28.341 "get_zone_info": false, 00:11:28.341 "zone_management": false, 00:11:28.341 "zone_append": false, 00:11:28.341 "compare": false, 00:11:28.341 "compare_and_write": false, 00:11:28.341 "abort": true, 00:11:28.341 "seek_hole": false, 00:11:28.341 "seek_data": false, 00:11:28.341 "copy": true, 00:11:28.341 "nvme_iov_md": false 00:11:28.341 }, 00:11:28.341 "memory_domains": [ 00:11:28.341 { 00:11:28.341 "dma_device_id": "system", 00:11:28.341 "dma_device_type": 1 00:11:28.341 }, 00:11:28.341 { 00:11:28.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.341 "dma_device_type": 2 00:11:28.341 } 00:11:28.341 ], 00:11:28.341 "driver_specific": {} 00:11:28.341 }' 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.341 14:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:28.341 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.600 "name": "BaseBdev3", 00:11:28.600 "aliases": [ 00:11:28.600 "5e59c226-405f-11ef-b2a4-e9dca065e82e" 00:11:28.600 ], 00:11:28.600 "product_name": "Malloc disk", 00:11:28.600 "block_size": 512, 00:11:28.600 "num_blocks": 65536, 00:11:28.600 "uuid": "5e59c226-405f-11ef-b2a4-e9dca065e82e", 00:11:28.600 "assigned_rate_limits": { 00:11:28.600 "rw_ios_per_sec": 0, 00:11:28.600 "rw_mbytes_per_sec": 0, 00:11:28.600 "r_mbytes_per_sec": 0, 00:11:28.600 "w_mbytes_per_sec": 0 00:11:28.600 }, 00:11:28.600 "claimed": true, 00:11:28.600 "claim_type": "exclusive_write", 00:11:28.600 "zoned": false, 00:11:28.600 "supported_io_types": { 00:11:28.600 "read": true, 00:11:28.600 "write": true, 00:11:28.600 "unmap": true, 00:11:28.600 "flush": true, 00:11:28.600 "reset": true, 00:11:28.600 "nvme_admin": false, 00:11:28.600 "nvme_io": false, 00:11:28.600 "nvme_io_md": false, 00:11:28.600 "write_zeroes": true, 00:11:28.600 "zcopy": true, 00:11:28.600 "get_zone_info": false, 00:11:28.600 "zone_management": false, 00:11:28.600 "zone_append": false, 00:11:28.600 "compare": false, 00:11:28.600 "compare_and_write": false, 00:11:28.600 "abort": true, 00:11:28.600 "seek_hole": false, 00:11:28.600 "seek_data": false, 00:11:28.600 "copy": true, 00:11:28.600 "nvme_iov_md": false 00:11:28.600 }, 00:11:28.600 "memory_domains": [ 00:11:28.600 { 00:11:28.600 "dma_device_id": "system", 00:11:28.600 "dma_device_type": 1 00:11:28.600 }, 00:11:28.600 { 00:11:28.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.600 "dma_device_type": 2 00:11:28.600 } 00:11:28.600 ], 00:11:28.600 "driver_specific": {} 00:11:28.600 }' 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
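A minimal sketch of the per-bdev property check being exercised in the trace above, assuming the test's bdev_svc app is still listening on /var/tmp/spdk-raid.sock and a base bdev named BaseBdev2 exists (both names are taken from the surrounding log); the expected values (block_size 512, null metadata/DIF fields for a malloc-backed bdev) mirror the assertions at bdev_raid.sh@205-208 and are not guaranteed for other bdev types:

    # Hypothetical manual reproduction of verify_raid_bdev_properties' per-bdev checks.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    # Fetch the single-bdev JSON object, as the test does with bdev_get_bdevs -b <name> | jq '.[]'.
    info=$("$RPC" -s "$SOCK" bdev_get_bdevs -b BaseBdev2 | jq '.[]')

    # Malloc-backed base bdevs are expected to report a 512-byte block size and
    # no metadata or DIF configuration (null fields).
    [[ $(jq .block_size    <<< "$info") == 512  ]] || echo "unexpected block_size"
    [[ $(jq .md_size       <<< "$info") == null ]] || echo "unexpected md_size"
    [[ $(jq .md_interleave <<< "$info") == null ]] || echo "unexpected md_interleave"
    [[ $(jq .dif_type      <<< "$info") == null ]] || echo "unexpected dif_type"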
00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.600 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:28.859 [2024-07-12 14:59:54.646843] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.859 [2024-07-12 14:59:54.646871] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.859 [2024-07-12 14:59:54.646894] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.859 [2024-07-12 14:59:54.646908] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.859 [2024-07-12 14:59:54.646913] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d2b2234a00 name Existed_Raid, state offline 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54756 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54756 ']' 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54756 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54756 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54756' 00:11:28.859 killing process with pid 54756 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54756 00:11:28.859 [2024-07-12 14:59:54.674538] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.859 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54756 00:11:29.118 [2024-07-12 14:59:54.691403] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.118 14:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:11:29.118 00:11:29.118 real 0m25.137s 00:11:29.118 user 0m46.154s 00:11:29.118 sys 0m3.263s 00:11:29.118 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.118 14:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.118 ************************************ 00:11:29.118 END TEST raid_state_function_test_sb 00:11:29.118 ************************************ 00:11:29.118 14:59:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:29.118 14:59:54 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:29.118 14:59:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:29.118 14:59:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.118 14:59:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.118 ************************************ 00:11:29.118 START TEST raid_superblock_test 00:11:29.118 ************************************ 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55488 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55488 /var/tmp/spdk-raid.sock 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55488 ']' 00:11:29.118 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:29.119 14:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:29.119 14:59:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:29.119 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:29.119 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.119 14:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.119 [2024-07-12 14:59:54.917746] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:11:29.119 [2024-07-12 14:59:54.918037] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:29.684 EAL: TSC is not safe to use in SMP mode 00:11:29.684 EAL: TSC is not invariant 00:11:29.684 [2024-07-12 14:59:55.480539] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.942 [2024-07-12 14:59:55.593746] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:29.942 [2024-07-12 14:59:55.596591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.942 [2024-07-12 14:59:55.597865] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.942 [2024-07-12 14:59:55.597889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:30.199 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:30.458 malloc1 00:11:30.458 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:30.717 [2024-07-12 14:59:56.517942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:30.717 [2024-07-12 14:59:56.518002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.717 [2024-07-12 14:59:56.518015] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af434780 00:11:30.717 [2024-07-12 14:59:56.518023] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.717 [2024-07-12 14:59:56.518921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.717 [2024-07-12 14:59:56.518948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:30.717 pt1 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:30.717 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:30.988 malloc2 00:11:30.988 14:59:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.247 [2024-07-12 14:59:57.057888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.247 [2024-07-12 14:59:57.057945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.247 [2024-07-12 14:59:57.057958] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af434c80 00:11:31.247 [2024-07-12 14:59:57.057967] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.247 [2024-07-12 14:59:57.058625] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.247 [2024-07-12 14:59:57.058652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.247 pt2 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.505 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:31.764 malloc3 00:11:31.764 14:59:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:32.022 [2024-07-12 14:59:57.657830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:32.022 [2024-07-12 14:59:57.657900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.022 [2024-07-12 14:59:57.657914] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af435180 00:11:32.022 [2024-07-12 14:59:57.657922] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.022 [2024-07-12 14:59:57.658582] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.022 [2024-07-12 14:59:57.658609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:32.022 pt3 00:11:32.022 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:32.022 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:32.022 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:32.281 [2024-07-12 14:59:57.945809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.281 [2024-07-12 14:59:57.946401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.281 [2024-07-12 14:59:57.946423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.281 [2024-07-12 14:59:57.946476] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x642af435400 00:11:32.281 [2024-07-12 14:59:57.946482] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:32.281 [2024-07-12 14:59:57.946515] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x642af497e20 00:11:32.281 [2024-07-12 14:59:57.946591] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x642af435400 00:11:32.281 [2024-07-12 14:59:57.946596] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x642af435400 00:11:32.281 [2024-07-12 14:59:57.946622] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.281 14:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.539 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.539 "name": "raid_bdev1", 00:11:32.539 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:32.539 "strip_size_kb": 64, 00:11:32.539 "state": "online", 00:11:32.539 "raid_level": "concat", 00:11:32.539 "superblock": true, 00:11:32.539 "num_base_bdevs": 3, 00:11:32.539 "num_base_bdevs_discovered": 3, 00:11:32.539 "num_base_bdevs_operational": 3, 00:11:32.539 "base_bdevs_list": [ 00:11:32.539 { 00:11:32.539 "name": "pt1", 00:11:32.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.539 "is_configured": true, 00:11:32.539 "data_offset": 2048, 00:11:32.539 "data_size": 63488 00:11:32.539 }, 00:11:32.539 { 00:11:32.539 "name": "pt2", 00:11:32.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.539 "is_configured": true, 00:11:32.539 "data_offset": 2048, 00:11:32.539 "data_size": 63488 00:11:32.539 }, 00:11:32.539 { 00:11:32.539 "name": "pt3", 00:11:32.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.540 "is_configured": true, 00:11:32.540 "data_offset": 2048, 00:11:32.540 "data_size": 63488 00:11:32.540 } 00:11:32.540 ] 00:11:32.540 }' 00:11:32.540 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.540 14:59:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:32.798 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:33.057 [2024-07-12 14:59:58.845763] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:33.057 "name": "raid_bdev1", 00:11:33.057 "aliases": [ 00:11:33.057 "67cee5d7-405f-11ef-b2a4-e9dca065e82e" 00:11:33.057 ], 00:11:33.057 "product_name": "Raid Volume", 00:11:33.057 "block_size": 512, 00:11:33.057 "num_blocks": 190464, 00:11:33.057 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:33.057 "assigned_rate_limits": { 00:11:33.057 "rw_ios_per_sec": 0, 00:11:33.057 "rw_mbytes_per_sec": 0, 00:11:33.057 "r_mbytes_per_sec": 0, 00:11:33.057 "w_mbytes_per_sec": 0 00:11:33.057 }, 00:11:33.057 "claimed": false, 00:11:33.057 "zoned": false, 00:11:33.057 "supported_io_types": { 00:11:33.057 "read": true, 00:11:33.057 "write": true, 00:11:33.057 "unmap": true, 
00:11:33.057 "flush": true, 00:11:33.057 "reset": true, 00:11:33.057 "nvme_admin": false, 00:11:33.057 "nvme_io": false, 00:11:33.057 "nvme_io_md": false, 00:11:33.057 "write_zeroes": true, 00:11:33.057 "zcopy": false, 00:11:33.057 "get_zone_info": false, 00:11:33.057 "zone_management": false, 00:11:33.057 "zone_append": false, 00:11:33.057 "compare": false, 00:11:33.057 "compare_and_write": false, 00:11:33.057 "abort": false, 00:11:33.057 "seek_hole": false, 00:11:33.057 "seek_data": false, 00:11:33.057 "copy": false, 00:11:33.057 "nvme_iov_md": false 00:11:33.057 }, 00:11:33.057 "memory_domains": [ 00:11:33.057 { 00:11:33.057 "dma_device_id": "system", 00:11:33.057 "dma_device_type": 1 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.057 "dma_device_type": 2 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "dma_device_id": "system", 00:11:33.057 "dma_device_type": 1 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.057 "dma_device_type": 2 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "dma_device_id": "system", 00:11:33.057 "dma_device_type": 1 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.057 "dma_device_type": 2 00:11:33.057 } 00:11:33.057 ], 00:11:33.057 "driver_specific": { 00:11:33.057 "raid": { 00:11:33.057 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:33.057 "strip_size_kb": 64, 00:11:33.057 "state": "online", 00:11:33.057 "raid_level": "concat", 00:11:33.057 "superblock": true, 00:11:33.057 "num_base_bdevs": 3, 00:11:33.057 "num_base_bdevs_discovered": 3, 00:11:33.057 "num_base_bdevs_operational": 3, 00:11:33.057 "base_bdevs_list": [ 00:11:33.057 { 00:11:33.057 "name": "pt1", 00:11:33.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.057 "is_configured": true, 00:11:33.057 "data_offset": 2048, 00:11:33.057 "data_size": 63488 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "name": "pt2", 00:11:33.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.057 "is_configured": true, 00:11:33.057 "data_offset": 2048, 00:11:33.057 "data_size": 63488 00:11:33.057 }, 00:11:33.057 { 00:11:33.057 "name": "pt3", 00:11:33.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.057 "is_configured": true, 00:11:33.057 "data_offset": 2048, 00:11:33.057 "data_size": 63488 00:11:33.057 } 00:11:33.057 ] 00:11:33.057 } 00:11:33.057 } 00:11:33.057 }' 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:33.057 pt2 00:11:33.057 pt3' 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:33.057 14:59:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:33.625 "name": "pt1", 00:11:33.625 "aliases": [ 00:11:33.625 "00000000-0000-0000-0000-000000000001" 00:11:33.625 ], 00:11:33.625 "product_name": "passthru", 00:11:33.625 "block_size": 512, 00:11:33.625 "num_blocks": 65536, 00:11:33.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.625 "assigned_rate_limits": { 
00:11:33.625 "rw_ios_per_sec": 0, 00:11:33.625 "rw_mbytes_per_sec": 0, 00:11:33.625 "r_mbytes_per_sec": 0, 00:11:33.625 "w_mbytes_per_sec": 0 00:11:33.625 }, 00:11:33.625 "claimed": true, 00:11:33.625 "claim_type": "exclusive_write", 00:11:33.625 "zoned": false, 00:11:33.625 "supported_io_types": { 00:11:33.625 "read": true, 00:11:33.625 "write": true, 00:11:33.625 "unmap": true, 00:11:33.625 "flush": true, 00:11:33.625 "reset": true, 00:11:33.625 "nvme_admin": false, 00:11:33.625 "nvme_io": false, 00:11:33.625 "nvme_io_md": false, 00:11:33.625 "write_zeroes": true, 00:11:33.625 "zcopy": true, 00:11:33.625 "get_zone_info": false, 00:11:33.625 "zone_management": false, 00:11:33.625 "zone_append": false, 00:11:33.625 "compare": false, 00:11:33.625 "compare_and_write": false, 00:11:33.625 "abort": true, 00:11:33.625 "seek_hole": false, 00:11:33.625 "seek_data": false, 00:11:33.625 "copy": true, 00:11:33.625 "nvme_iov_md": false 00:11:33.625 }, 00:11:33.625 "memory_domains": [ 00:11:33.625 { 00:11:33.625 "dma_device_id": "system", 00:11:33.625 "dma_device_type": 1 00:11:33.625 }, 00:11:33.625 { 00:11:33.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.625 "dma_device_type": 2 00:11:33.625 } 00:11:33.625 ], 00:11:33.625 "driver_specific": { 00:11:33.625 "passthru": { 00:11:33.625 "name": "pt1", 00:11:33.625 "base_bdev_name": "malloc1" 00:11:33.625 } 00:11:33.625 } 00:11:33.625 }' 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:33.625 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:33.885 "name": "pt2", 00:11:33.885 "aliases": [ 00:11:33.885 "00000000-0000-0000-0000-000000000002" 00:11:33.885 ], 00:11:33.885 "product_name": "passthru", 00:11:33.885 "block_size": 512, 00:11:33.885 "num_blocks": 65536, 00:11:33.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.885 "assigned_rate_limits": { 00:11:33.885 "rw_ios_per_sec": 0, 00:11:33.885 "rw_mbytes_per_sec": 0, 00:11:33.885 "r_mbytes_per_sec": 0, 00:11:33.885 "w_mbytes_per_sec": 0 00:11:33.885 
}, 00:11:33.885 "claimed": true, 00:11:33.885 "claim_type": "exclusive_write", 00:11:33.885 "zoned": false, 00:11:33.885 "supported_io_types": { 00:11:33.885 "read": true, 00:11:33.885 "write": true, 00:11:33.885 "unmap": true, 00:11:33.885 "flush": true, 00:11:33.885 "reset": true, 00:11:33.885 "nvme_admin": false, 00:11:33.885 "nvme_io": false, 00:11:33.885 "nvme_io_md": false, 00:11:33.885 "write_zeroes": true, 00:11:33.885 "zcopy": true, 00:11:33.885 "get_zone_info": false, 00:11:33.885 "zone_management": false, 00:11:33.885 "zone_append": false, 00:11:33.885 "compare": false, 00:11:33.885 "compare_and_write": false, 00:11:33.885 "abort": true, 00:11:33.885 "seek_hole": false, 00:11:33.885 "seek_data": false, 00:11:33.885 "copy": true, 00:11:33.885 "nvme_iov_md": false 00:11:33.885 }, 00:11:33.885 "memory_domains": [ 00:11:33.885 { 00:11:33.885 "dma_device_id": "system", 00:11:33.885 "dma_device_type": 1 00:11:33.885 }, 00:11:33.885 { 00:11:33.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.885 "dma_device_type": 2 00:11:33.885 } 00:11:33.885 ], 00:11:33.885 "driver_specific": { 00:11:33.885 "passthru": { 00:11:33.885 "name": "pt2", 00:11:33.885 "base_bdev_name": "malloc2" 00:11:33.885 } 00:11:33.885 } 00:11:33.885 }' 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:33.885 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.145 "name": "pt3", 00:11:34.145 "aliases": [ 00:11:34.145 "00000000-0000-0000-0000-000000000003" 00:11:34.145 ], 00:11:34.145 "product_name": "passthru", 00:11:34.145 "block_size": 512, 00:11:34.145 "num_blocks": 65536, 00:11:34.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.145 "assigned_rate_limits": { 00:11:34.145 "rw_ios_per_sec": 0, 00:11:34.145 "rw_mbytes_per_sec": 0, 00:11:34.145 "r_mbytes_per_sec": 0, 00:11:34.145 "w_mbytes_per_sec": 0 00:11:34.145 }, 00:11:34.145 "claimed": true, 00:11:34.145 "claim_type": "exclusive_write", 00:11:34.145 "zoned": false, 00:11:34.145 "supported_io_types": { 
00:11:34.145 "read": true, 00:11:34.145 "write": true, 00:11:34.145 "unmap": true, 00:11:34.145 "flush": true, 00:11:34.145 "reset": true, 00:11:34.145 "nvme_admin": false, 00:11:34.145 "nvme_io": false, 00:11:34.145 "nvme_io_md": false, 00:11:34.145 "write_zeroes": true, 00:11:34.145 "zcopy": true, 00:11:34.145 "get_zone_info": false, 00:11:34.145 "zone_management": false, 00:11:34.145 "zone_append": false, 00:11:34.145 "compare": false, 00:11:34.145 "compare_and_write": false, 00:11:34.145 "abort": true, 00:11:34.145 "seek_hole": false, 00:11:34.145 "seek_data": false, 00:11:34.145 "copy": true, 00:11:34.145 "nvme_iov_md": false 00:11:34.145 }, 00:11:34.145 "memory_domains": [ 00:11:34.145 { 00:11:34.145 "dma_device_id": "system", 00:11:34.145 "dma_device_type": 1 00:11:34.145 }, 00:11:34.145 { 00:11:34.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.145 "dma_device_type": 2 00:11:34.145 } 00:11:34.145 ], 00:11:34.145 "driver_specific": { 00:11:34.145 "passthru": { 00:11:34.145 "name": "pt3", 00:11:34.145 "base_bdev_name": "malloc3" 00:11:34.145 } 00:11:34.145 } 00:11:34.145 }' 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:34.145 14:59:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:34.404 [2024-07-12 15:00:00.161660] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.404 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=67cee5d7-405f-11ef-b2a4-e9dca065e82e 00:11:34.404 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 67cee5d7-405f-11ef-b2a4-e9dca065e82e ']' 00:11:34.404 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:34.663 [2024-07-12 15:00:00.425581] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.663 [2024-07-12 15:00:00.425606] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.663 [2024-07-12 15:00:00.425629] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.663 [2024-07-12 15:00:00.425643] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.663 [2024-07-12 15:00:00.425648] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x642af435400 name raid_bdev1, state offline 00:11:34.663 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.663 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:34.922 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:34.922 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:34.922 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.922 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:35.178 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:35.178 15:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:35.435 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:35.435 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:35.693 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:35.693 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
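For readers following the xtrace above: bdev_raid.sh@456 wraps the second bdev_raid_create call in the NOT helper, so this step only passes if the RPC fails (the malloc bdevs still carry the superblock written for the earlier raid_bdev1). A minimal sketch of the pattern the trace is exercising (not the real autotest_common.sh implementation, whose argument validation and signal handling are only partly visible in this trace) might look like:

    NOT() {
        local es=0
        "$@" || es=$?              # run the wrapped command, keep its exit status
        if (( es > 128 )); then    # crashes/signals still count as real failures
            return "$es"
        fi
        (( es != 0 ))              # succeed only when the command itself failed
    }

    # As in the trace: re-creating raid_bdev1 from the claimed malloc bdevs
    # is expected to fail (the RPC below returns -17, "File exists").
    NOT ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1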
00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:35.950 15:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:36.206 [2024-07-12 15:00:01.985565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:36.206 [2024-07-12 15:00:01.986141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:36.206 [2024-07-12 15:00:01.986153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:36.206 [2024-07-12 15:00:01.986168] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:36.206 [2024-07-12 15:00:01.986207] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:36.206 [2024-07-12 15:00:01.986219] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:36.206 [2024-07-12 15:00:01.986228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.206 [2024-07-12 15:00:01.986232] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x642af435180 name raid_bdev1, state configuring 00:11:36.206 request: 00:11:36.206 { 00:11:36.206 "name": "raid_bdev1", 00:11:36.206 "raid_level": "concat", 00:11:36.206 "base_bdevs": [ 00:11:36.206 "malloc1", 00:11:36.206 "malloc2", 00:11:36.206 "malloc3" 00:11:36.206 ], 00:11:36.206 "strip_size_kb": 64, 00:11:36.206 "superblock": false, 00:11:36.206 "method": "bdev_raid_create", 00:11:36.206 "req_id": 1 00:11:36.206 } 00:11:36.206 Got JSON-RPC error response 00:11:36.206 response: 00:11:36.206 { 00:11:36.206 "code": -17, 00:11:36.206 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:36.206 } 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.206 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:36.461 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:36.461 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:36.461 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.799 [2024-07-12 15:00:02.449582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.799 [2024-07-12 15:00:02.449653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.799 [2024-07-12 15:00:02.449666] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x642af434c80 00:11:36.799 [2024-07-12 15:00:02.449674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.799 [2024-07-12 15:00:02.450315] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.799 [2024-07-12 15:00:02.450343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.799 [2024-07-12 15:00:02.450368] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.799 [2024-07-12 15:00:02.450381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.799 pt1 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.799 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.056 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.056 "name": "raid_bdev1", 00:11:37.056 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:37.056 "strip_size_kb": 64, 00:11:37.056 "state": "configuring", 00:11:37.056 "raid_level": "concat", 00:11:37.056 "superblock": true, 00:11:37.056 "num_base_bdevs": 3, 00:11:37.056 "num_base_bdevs_discovered": 1, 00:11:37.056 "num_base_bdevs_operational": 3, 00:11:37.056 "base_bdevs_list": [ 00:11:37.056 { 00:11:37.056 "name": "pt1", 00:11:37.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.056 "is_configured": true, 00:11:37.057 "data_offset": 2048, 00:11:37.057 "data_size": 63488 00:11:37.057 }, 00:11:37.057 { 00:11:37.057 "name": null, 00:11:37.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.057 "is_configured": false, 00:11:37.057 "data_offset": 2048, 00:11:37.057 "data_size": 63488 00:11:37.057 }, 00:11:37.057 { 00:11:37.057 "name": null, 00:11:37.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.057 "is_configured": false, 00:11:37.057 "data_offset": 2048, 00:11:37.057 "data_size": 63488 00:11:37.057 } 00:11:37.057 ] 00:11:37.057 }' 00:11:37.057 15:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.057 15:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.314 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
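Everything in this block is driven over the JSON-RPC socket: pt1 has just been re-registered on top of malloc1, the raid superblock found on it lets raid_bdev1 reassemble in the "configuring" state, and the jq filter above pulls its entry out of bdev_raid_get_bdevs. A condensed, illustrative version of that sequence (paths and UUIDs copied from this run, not a standalone script) would be:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Re-register the passthru bdevs over the malloc bdevs; each superblock
    # points back at raid_bdev1, so the raid reassembles as members reappear.
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

    # "configuring" while members are missing, "online" once all three are back.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'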
00:11:37.314 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.571 [2024-07-12 15:00:03.365577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.571 [2024-07-12 15:00:03.365640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.571 [2024-07-12 15:00:03.365653] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af435680 00:11:37.571 [2024-07-12 15:00:03.365661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.571 [2024-07-12 15:00:03.365776] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.571 [2024-07-12 15:00:03.365788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.571 [2024-07-12 15:00:03.365812] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.571 [2024-07-12 15:00:03.365821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.571 pt2 00:11:37.571 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:38.136 [2024-07-12 15:00:03.665572] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.136 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.392 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.392 "name": "raid_bdev1", 00:11:38.392 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:38.392 "strip_size_kb": 64, 00:11:38.392 "state": "configuring", 00:11:38.392 "raid_level": "concat", 00:11:38.392 "superblock": true, 00:11:38.392 "num_base_bdevs": 3, 00:11:38.392 "num_base_bdevs_discovered": 1, 00:11:38.392 "num_base_bdevs_operational": 3, 00:11:38.392 "base_bdevs_list": [ 00:11:38.392 { 00:11:38.393 "name": "pt1", 00:11:38.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.393 "is_configured": 
true, 00:11:38.393 "data_offset": 2048, 00:11:38.393 "data_size": 63488 00:11:38.393 }, 00:11:38.393 { 00:11:38.393 "name": null, 00:11:38.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.393 "is_configured": false, 00:11:38.393 "data_offset": 2048, 00:11:38.393 "data_size": 63488 00:11:38.393 }, 00:11:38.393 { 00:11:38.393 "name": null, 00:11:38.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.393 "is_configured": false, 00:11:38.393 "data_offset": 2048, 00:11:38.393 "data_size": 63488 00:11:38.393 } 00:11:38.393 ] 00:11:38.393 }' 00:11:38.393 15:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.393 15:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.650 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:38.650 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:38.650 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.911 [2024-07-12 15:00:04.537565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.911 [2024-07-12 15:00:04.537624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.911 [2024-07-12 15:00:04.537637] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af435680 00:11:38.911 [2024-07-12 15:00:04.537645] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.911 [2024-07-12 15:00:04.537761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.911 [2024-07-12 15:00:04.537773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.911 [2024-07-12 15:00:04.537796] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.911 [2024-07-12 15:00:04.537805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.911 pt2 00:11:38.911 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:38.911 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:38.911 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.169 [2024-07-12 15:00:04.765585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.169 [2024-07-12 15:00:04.765633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.169 [2024-07-12 15:00:04.765661] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x642af435400 00:11:39.169 [2024-07-12 15:00:04.765669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.169 [2024-07-12 15:00:04.765783] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.169 [2024-07-12 15:00:04.765794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.169 [2024-07-12 15:00:04.765815] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:39.169 [2024-07-12 15:00:04.765823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:11:39.169 [2024-07-12 15:00:04.765850] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x642af434780 00:11:39.170 [2024-07-12 15:00:04.765855] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:39.170 [2024-07-12 15:00:04.765892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x642af497e20 00:11:39.170 [2024-07-12 15:00:04.765947] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x642af434780 00:11:39.170 [2024-07-12 15:00:04.765952] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x642af434780 00:11:39.170 [2024-07-12 15:00:04.765973] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.170 pt3 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.170 15:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.427 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.427 "name": "raid_bdev1", 00:11:39.427 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:39.427 "strip_size_kb": 64, 00:11:39.427 "state": "online", 00:11:39.427 "raid_level": "concat", 00:11:39.427 "superblock": true, 00:11:39.427 "num_base_bdevs": 3, 00:11:39.427 "num_base_bdevs_discovered": 3, 00:11:39.427 "num_base_bdevs_operational": 3, 00:11:39.427 "base_bdevs_list": [ 00:11:39.427 { 00:11:39.427 "name": "pt1", 00:11:39.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.427 "is_configured": true, 00:11:39.427 "data_offset": 2048, 00:11:39.427 "data_size": 63488 00:11:39.427 }, 00:11:39.427 { 00:11:39.427 "name": "pt2", 00:11:39.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.427 "is_configured": true, 00:11:39.427 "data_offset": 2048, 00:11:39.427 "data_size": 63488 00:11:39.427 }, 00:11:39.427 { 00:11:39.427 "name": "pt3", 00:11:39.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.427 "is_configured": true, 00:11:39.427 "data_offset": 2048, 00:11:39.427 
"data_size": 63488 00:11:39.427 } 00:11:39.427 ] 00:11:39.427 }' 00:11:39.427 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.427 15:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:39.684 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:39.942 [2024-07-12 15:00:05.609609] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.942 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:39.942 "name": "raid_bdev1", 00:11:39.942 "aliases": [ 00:11:39.942 "67cee5d7-405f-11ef-b2a4-e9dca065e82e" 00:11:39.942 ], 00:11:39.942 "product_name": "Raid Volume", 00:11:39.942 "block_size": 512, 00:11:39.942 "num_blocks": 190464, 00:11:39.942 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:39.942 "assigned_rate_limits": { 00:11:39.942 "rw_ios_per_sec": 0, 00:11:39.942 "rw_mbytes_per_sec": 0, 00:11:39.942 "r_mbytes_per_sec": 0, 00:11:39.942 "w_mbytes_per_sec": 0 00:11:39.942 }, 00:11:39.942 "claimed": false, 00:11:39.942 "zoned": false, 00:11:39.942 "supported_io_types": { 00:11:39.942 "read": true, 00:11:39.942 "write": true, 00:11:39.942 "unmap": true, 00:11:39.942 "flush": true, 00:11:39.942 "reset": true, 00:11:39.942 "nvme_admin": false, 00:11:39.942 "nvme_io": false, 00:11:39.942 "nvme_io_md": false, 00:11:39.942 "write_zeroes": true, 00:11:39.942 "zcopy": false, 00:11:39.942 "get_zone_info": false, 00:11:39.942 "zone_management": false, 00:11:39.942 "zone_append": false, 00:11:39.942 "compare": false, 00:11:39.942 "compare_and_write": false, 00:11:39.942 "abort": false, 00:11:39.942 "seek_hole": false, 00:11:39.942 "seek_data": false, 00:11:39.942 "copy": false, 00:11:39.943 "nvme_iov_md": false 00:11:39.943 }, 00:11:39.943 "memory_domains": [ 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "system", 00:11:39.943 "dma_device_type": 1 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.943 "dma_device_type": 2 00:11:39.943 } 00:11:39.943 ], 00:11:39.943 "driver_specific": { 00:11:39.943 "raid": { 00:11:39.943 "uuid": "67cee5d7-405f-11ef-b2a4-e9dca065e82e", 00:11:39.943 "strip_size_kb": 64, 00:11:39.943 "state": 
"online", 00:11:39.943 "raid_level": "concat", 00:11:39.943 "superblock": true, 00:11:39.943 "num_base_bdevs": 3, 00:11:39.943 "num_base_bdevs_discovered": 3, 00:11:39.943 "num_base_bdevs_operational": 3, 00:11:39.943 "base_bdevs_list": [ 00:11:39.943 { 00:11:39.943 "name": "pt1", 00:11:39.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "name": "pt2", 00:11:39.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 }, 00:11:39.943 { 00:11:39.943 "name": "pt3", 00:11:39.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.943 "is_configured": true, 00:11:39.943 "data_offset": 2048, 00:11:39.943 "data_size": 63488 00:11:39.943 } 00:11:39.943 ] 00:11:39.943 } 00:11:39.943 } 00:11:39.943 }' 00:11:39.943 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.943 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:39.943 pt2 00:11:39.943 pt3' 00:11:39.943 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:39.943 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:39.943 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.243 "name": "pt1", 00:11:40.243 "aliases": [ 00:11:40.243 "00000000-0000-0000-0000-000000000001" 00:11:40.243 ], 00:11:40.243 "product_name": "passthru", 00:11:40.243 "block_size": 512, 00:11:40.243 "num_blocks": 65536, 00:11:40.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.243 "assigned_rate_limits": { 00:11:40.243 "rw_ios_per_sec": 0, 00:11:40.243 "rw_mbytes_per_sec": 0, 00:11:40.243 "r_mbytes_per_sec": 0, 00:11:40.243 "w_mbytes_per_sec": 0 00:11:40.243 }, 00:11:40.243 "claimed": true, 00:11:40.243 "claim_type": "exclusive_write", 00:11:40.243 "zoned": false, 00:11:40.243 "supported_io_types": { 00:11:40.243 "read": true, 00:11:40.243 "write": true, 00:11:40.243 "unmap": true, 00:11:40.243 "flush": true, 00:11:40.243 "reset": true, 00:11:40.243 "nvme_admin": false, 00:11:40.243 "nvme_io": false, 00:11:40.243 "nvme_io_md": false, 00:11:40.243 "write_zeroes": true, 00:11:40.243 "zcopy": true, 00:11:40.243 "get_zone_info": false, 00:11:40.243 "zone_management": false, 00:11:40.243 "zone_append": false, 00:11:40.243 "compare": false, 00:11:40.243 "compare_and_write": false, 00:11:40.243 "abort": true, 00:11:40.243 "seek_hole": false, 00:11:40.243 "seek_data": false, 00:11:40.243 "copy": true, 00:11:40.243 "nvme_iov_md": false 00:11:40.243 }, 00:11:40.243 "memory_domains": [ 00:11:40.243 { 00:11:40.243 "dma_device_id": "system", 00:11:40.243 "dma_device_type": 1 00:11:40.243 }, 00:11:40.243 { 00:11:40.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.243 "dma_device_type": 2 00:11:40.243 } 00:11:40.243 ], 00:11:40.243 "driver_specific": { 00:11:40.243 "passthru": { 00:11:40.243 "name": "pt1", 00:11:40.243 "base_bdev_name": "malloc1" 00:11:40.243 } 00:11:40.243 } 00:11:40.243 }' 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:40.243 15:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.506 "name": "pt2", 00:11:40.506 "aliases": [ 00:11:40.506 "00000000-0000-0000-0000-000000000002" 00:11:40.506 ], 00:11:40.506 "product_name": "passthru", 00:11:40.506 "block_size": 512, 00:11:40.506 "num_blocks": 65536, 00:11:40.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.506 "assigned_rate_limits": { 00:11:40.506 "rw_ios_per_sec": 0, 00:11:40.506 "rw_mbytes_per_sec": 0, 00:11:40.506 "r_mbytes_per_sec": 0, 00:11:40.506 "w_mbytes_per_sec": 0 00:11:40.506 }, 00:11:40.506 "claimed": true, 00:11:40.506 "claim_type": "exclusive_write", 00:11:40.506 "zoned": false, 00:11:40.506 "supported_io_types": { 00:11:40.506 "read": true, 00:11:40.506 "write": true, 00:11:40.506 "unmap": true, 00:11:40.506 "flush": true, 00:11:40.506 "reset": true, 00:11:40.506 "nvme_admin": false, 00:11:40.506 "nvme_io": false, 00:11:40.506 "nvme_io_md": false, 00:11:40.506 "write_zeroes": true, 00:11:40.506 "zcopy": true, 00:11:40.506 "get_zone_info": false, 00:11:40.506 "zone_management": false, 00:11:40.506 "zone_append": false, 00:11:40.506 "compare": false, 00:11:40.506 "compare_and_write": false, 00:11:40.506 "abort": true, 00:11:40.506 "seek_hole": false, 00:11:40.506 "seek_data": false, 00:11:40.506 "copy": true, 00:11:40.506 "nvme_iov_md": false 00:11:40.506 }, 00:11:40.506 "memory_domains": [ 00:11:40.506 { 00:11:40.506 "dma_device_id": "system", 00:11:40.506 "dma_device_type": 1 00:11:40.506 }, 00:11:40.506 { 00:11:40.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.506 "dma_device_type": 2 00:11:40.506 } 00:11:40.506 ], 00:11:40.506 "driver_specific": { 00:11:40.506 "passthru": { 00:11:40.506 "name": "pt2", 00:11:40.506 "base_bdev_name": "malloc2" 00:11:40.506 } 00:11:40.506 } 00:11:40.506 }' 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.506 
15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:40.506 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.765 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.765 "name": "pt3", 00:11:40.765 "aliases": [ 00:11:40.765 "00000000-0000-0000-0000-000000000003" 00:11:40.765 ], 00:11:40.765 "product_name": "passthru", 00:11:40.765 "block_size": 512, 00:11:40.765 "num_blocks": 65536, 00:11:40.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.765 "assigned_rate_limits": { 00:11:40.765 "rw_ios_per_sec": 0, 00:11:40.765 "rw_mbytes_per_sec": 0, 00:11:40.765 "r_mbytes_per_sec": 0, 00:11:40.765 "w_mbytes_per_sec": 0 00:11:40.765 }, 00:11:40.765 "claimed": true, 00:11:40.765 "claim_type": "exclusive_write", 00:11:40.765 "zoned": false, 00:11:40.765 "supported_io_types": { 00:11:40.765 "read": true, 00:11:40.765 "write": true, 00:11:40.765 "unmap": true, 00:11:40.765 "flush": true, 00:11:40.765 "reset": true, 00:11:40.765 "nvme_admin": false, 00:11:40.765 "nvme_io": false, 00:11:40.765 "nvme_io_md": false, 00:11:40.765 "write_zeroes": true, 00:11:40.765 "zcopy": true, 00:11:40.765 "get_zone_info": false, 00:11:40.765 "zone_management": false, 00:11:40.765 "zone_append": false, 00:11:40.765 "compare": false, 00:11:40.765 "compare_and_write": false, 00:11:40.765 "abort": true, 00:11:40.765 "seek_hole": false, 00:11:40.765 "seek_data": false, 00:11:40.765 "copy": true, 00:11:40.765 "nvme_iov_md": false 00:11:40.765 }, 00:11:40.765 "memory_domains": [ 00:11:40.765 { 00:11:40.765 "dma_device_id": "system", 00:11:40.765 "dma_device_type": 1 00:11:40.765 }, 00:11:40.765 { 00:11:40.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.765 "dma_device_type": 2 00:11:40.765 } 00:11:40.765 ], 00:11:40.765 "driver_specific": { 00:11:40.765 "passthru": { 00:11:40.765 "name": "pt3", 00:11:40.765 "base_bdev_name": "malloc3" 00:11:40.765 } 00:11:40.765 } 00:11:40.765 }' 00:11:40.765 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.765 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.765 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.765 15:00:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:41.024 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:41.282 [2024-07-12 15:00:06.857600] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 67cee5d7-405f-11ef-b2a4-e9dca065e82e '!=' 67cee5d7-405f-11ef-b2a4-e9dca065e82e ']' 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55488 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55488 ']' 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55488 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55488 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:41.282 killing process with pid 55488 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55488' 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55488 00:11:41.282 [2024-07-12 15:00:06.889509] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.282 [2024-07-12 15:00:06.889544] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.282 [2024-07-12 15:00:06.889558] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.282 [2024-07-12 15:00:06.889562] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x642af434780 name raid_bdev1, state offline 00:11:41.282 15:00:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55488 00:11:41.282 [2024-07-12 15:00:06.907342] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.282 15:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:41.282 00:11:41.282 real 0m12.178s 00:11:41.282 user 0m21.764s 00:11:41.282 sys 0m1.833s 00:11:41.282 15:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.282 15:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.282 ************************************ 00:11:41.282 END TEST raid_superblock_test 00:11:41.282 ************************************ 00:11:41.540 15:00:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:41.540 15:00:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:41.540 15:00:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:41.540 15:00:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.540 15:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.540 ************************************ 00:11:41.540 START TEST raid_read_error_test 00:11:41.540 ************************************ 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:41.540 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ajyhOaSpcE 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55843 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55843 /var/tmp/spdk-raid.sock 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55843 ']' 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:41.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.541 15:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.541 [2024-07-12 15:00:07.146174] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:11:41.541 [2024-07-12 15:00:07.146377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:42.108 EAL: TSC is not safe to use in SMP mode 00:11:42.108 EAL: TSC is not invariant 00:11:42.108 [2024-07-12 15:00:07.671402] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.108 [2024-07-12 15:00:07.758730] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
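From here the read-error test builds its stack over the same RPC socket: each raid member is a passthru bdev sitting on an error-injection bdev (EE_*) that wraps a malloc bdev, so individual I/O types can be made to fail underneath bdevperf. Condensed from the RPC calls traced below (a sketch of the sequence only, not a standalone reproduction):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    for i in 1 2 3; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"       # 32 MiB, 512 B blocks
        rpc bdev_error_create "BaseBdev${i}_malloc"                  # exposes EE_BaseBdev<i>_malloc
        rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # concat raid, 64 KiB strip, with an on-disk superblock (-s).
    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

    # Start failing reads on the first member while bdevperf runs its workload.
    rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure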
00:11:42.108 [2024-07-12 15:00:07.760842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.108 [2024-07-12 15:00:07.761631] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.108 [2024-07-12 15:00:07.761645] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.366 15:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.366 15:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:42.366 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:42.366 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.624 BaseBdev1_malloc 00:11:42.624 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:42.882 true 00:11:42.882 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:43.140 [2024-07-12 15:00:08.893769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:43.140 [2024-07-12 15:00:08.893838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.140 [2024-07-12 15:00:08.893866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3cff3834780 00:11:43.140 [2024-07-12 15:00:08.893876] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.140 [2024-07-12 15:00:08.894540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.140 [2024-07-12 15:00:08.894565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:43.140 BaseBdev1 00:11:43.140 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:43.140 15:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:43.398 BaseBdev2_malloc 00:11:43.398 15:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:43.964 true 00:11:43.964 15:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:44.222 [2024-07-12 15:00:09.809751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:44.222 [2024-07-12 15:00:09.809835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.222 [2024-07-12 15:00:09.809872] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3cff3834c80 00:11:44.222 [2024-07-12 15:00:09.809885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.222 [2024-07-12 15:00:09.810619] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.222 [2024-07-12 15:00:09.810656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:11:44.222 BaseBdev2 00:11:44.222 15:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:44.222 15:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:44.478 BaseBdev3_malloc 00:11:44.478 15:00:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:45.043 true 00:11:45.044 15:00:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.302 [2024-07-12 15:00:10.925705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.302 [2024-07-12 15:00:10.925781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.302 [2024-07-12 15:00:10.925818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3cff3835180 00:11:45.302 [2024-07-12 15:00:10.925831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.302 [2024-07-12 15:00:10.926530] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.302 [2024-07-12 15:00:10.926561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.302 BaseBdev3 00:11:45.302 15:00:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:45.559 [2024-07-12 15:00:11.257712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.559 [2024-07-12 15:00:11.258350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.559 [2024-07-12 15:00:11.258384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.559 [2024-07-12 15:00:11.258457] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3cff3835400 00:11:45.559 [2024-07-12 15:00:11.258466] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:45.559 [2024-07-12 15:00:11.258515] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3cff38a0e20 00:11:45.559 [2024-07-12 15:00:11.258601] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3cff3835400 00:11:45.559 [2024-07-12 15:00:11.258608] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3cff3835400 00:11:45.559 [2024-07-12 15:00:11.258646] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.559 15:00:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.559 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.816 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.816 "name": "raid_bdev1", 00:11:45.816 "uuid": "6fbe21d3-405f-11ef-b2a4-e9dca065e82e", 00:11:45.816 "strip_size_kb": 64, 00:11:45.816 "state": "online", 00:11:45.816 "raid_level": "concat", 00:11:45.816 "superblock": true, 00:11:45.816 "num_base_bdevs": 3, 00:11:45.816 "num_base_bdevs_discovered": 3, 00:11:45.816 "num_base_bdevs_operational": 3, 00:11:45.816 "base_bdevs_list": [ 00:11:45.816 { 00:11:45.816 "name": "BaseBdev1", 00:11:45.816 "uuid": "638a9f28-b836-8259-a6fa-abee7aa54560", 00:11:45.816 "is_configured": true, 00:11:45.816 "data_offset": 2048, 00:11:45.816 "data_size": 63488 00:11:45.816 }, 00:11:45.816 { 00:11:45.816 "name": "BaseBdev2", 00:11:45.816 "uuid": "f6b015c4-6025-a25e-9dff-f6c31814bece", 00:11:45.816 "is_configured": true, 00:11:45.816 "data_offset": 2048, 00:11:45.816 "data_size": 63488 00:11:45.816 }, 00:11:45.816 { 00:11:45.816 "name": "BaseBdev3", 00:11:45.816 "uuid": "bd2a6d54-646a-2750-964a-30a5a8e91935", 00:11:45.816 "is_configured": true, 00:11:45.817 "data_offset": 2048, 00:11:45.817 "data_size": 63488 00:11:45.817 } 00:11:45.817 ] 00:11:45.817 }' 00:11:45.817 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.817 15:00:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.381 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:46.381 15:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:46.381 [2024-07-12 15:00:12.137859] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3cff38a0ec0 00:11:47.329 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:47.589 15:00:13 
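The verify_raid_bdev_state helper traced above simply dumps every raid bdev over RPC and filters for the one under test, then checks the fields against the expected values (state online, level concat, 64 KiB strips, three base bdevs discovered and operational). Once the baseline state is confirmed and bdevperf is running, the test arms read failures on the error bdev underneath BaseBdev1. A condensed sketch of both steps:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure    # start failing reads on the BaseBdev1 path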
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.589 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.846 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.846 "name": "raid_bdev1", 00:11:47.846 "uuid": "6fbe21d3-405f-11ef-b2a4-e9dca065e82e", 00:11:47.846 "strip_size_kb": 64, 00:11:47.846 "state": "online", 00:11:47.846 "raid_level": "concat", 00:11:47.846 "superblock": true, 00:11:47.846 "num_base_bdevs": 3, 00:11:47.846 "num_base_bdevs_discovered": 3, 00:11:47.846 "num_base_bdevs_operational": 3, 00:11:47.846 "base_bdevs_list": [ 00:11:47.846 { 00:11:47.846 "name": "BaseBdev1", 00:11:47.846 "uuid": "638a9f28-b836-8259-a6fa-abee7aa54560", 00:11:47.846 "is_configured": true, 00:11:47.846 "data_offset": 2048, 00:11:47.846 "data_size": 63488 00:11:47.846 }, 00:11:47.846 { 00:11:47.846 "name": "BaseBdev2", 00:11:47.846 "uuid": "f6b015c4-6025-a25e-9dff-f6c31814bece", 00:11:47.846 "is_configured": true, 00:11:47.846 "data_offset": 2048, 00:11:47.846 "data_size": 63488 00:11:47.846 }, 00:11:47.846 { 00:11:47.846 "name": "BaseBdev3", 00:11:47.846 "uuid": "bd2a6d54-646a-2750-964a-30a5a8e91935", 00:11:47.846 "is_configured": true, 00:11:47.846 "data_offset": 2048, 00:11:47.846 "data_size": 63488 00:11:47.846 } 00:11:47.846 ] 00:11:47.846 }' 00:11:47.846 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.846 15:00:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.104 15:00:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:48.361 [2024-07-12 15:00:14.043290] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.361 [2024-07-12 15:00:14.043320] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.361 [2024-07-12 15:00:14.043665] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.361 [2024-07-12 15:00:14.043675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.361 [2024-07-12 15:00:14.043683] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.361 [2024-07-12 15:00:14.043688] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3cff3835400 name raid_bdev1, state offline 00:11:48.361 0 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55843 00:11:48.362 15:00:14 
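Tearing the volume down again is a single RPC call, and the delete debug lines above show the sequence the test depends on: the raid moves from online to offline, its base bdev count drops to zero and the raid context is freed.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1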
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55843 ']' 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55843 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55843 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:48.362 killing process with pid 55843 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55843' 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55843 00:11:48.362 [2024-07-12 15:00:14.071672] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.362 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55843 00:11:48.362 [2024-07-12 15:00:14.088643] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ajyhOaSpcE 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:11:48.620 00:11:48.620 real 0m7.138s 00:11:48.620 user 0m11.517s 00:11:48.620 sys 0m1.111s 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.620 15:00:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 ************************************ 00:11:48.620 END TEST raid_read_error_test 00:11:48.620 ************************************ 00:11:48.620 15:00:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:48.620 15:00:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:48.620 15:00:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:48.620 15:00:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.620 15:00:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 ************************************ 00:11:48.620 START TEST raid_write_error_test 00:11:48.620 ************************************ 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:48.620 15:00:14 
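The pass criterion for the read test sits in the @843..@847 lines above: the bdevperf log file (/raidtest/tmp.ajyhOaSpcE in this run) is filtered down to the raid_bdev1 result row and its failures-per-second column is compared against 0.00. Since has_redundancy returns 1 for concat, the injected read errors are expected to be visible to the raid, so a non-zero value such as the 0.53 seen here is what makes the test pass. The extraction is essentially:

  fail_per_s=$(grep -v Job /raidtest/tmp.ajyhOaSpcE | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]      # a non-redundant level must surface the injected failures

The write-error variant that starts next follows the same structure, only injecting write failures instead of read failures.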
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uAl5hoRHD0 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55974 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55974 /var/tmp/spdk-raid.sock 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55974 ']' 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
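As in the read case, the I/O generator is bdevperf started in wait mode: -z keeps it idle until it has been configured over the RPC socket, and a helper script later triggers the actual workload. Stripped of the surrounding bookkeeping, the pattern used here is:

  # start bdevperf idle on the raid socket (flags as used by this test), then drive it over RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  # ... build the error/passthru/raid stack over the same socket ...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

The failure statistics are later read back from the log file created just above (/raidtest/tmp.uAl5hoRHD0).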
00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.620 15:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 [2024-07-12 15:00:14.325362] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:11:48.620 [2024-07-12 15:00:14.325558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:49.187 EAL: TSC is not safe to use in SMP mode 00:11:49.187 EAL: TSC is not invariant 00:11:49.187 [2024-07-12 15:00:14.853555] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.187 [2024-07-12 15:00:14.952004] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:49.187 [2024-07-12 15:00:14.954118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.187 [2024-07-12 15:00:14.954866] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.187 [2024-07-12 15:00:14.954882] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.754 15:00:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:49.754 15:00:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:49.754 15:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:49.754 15:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.012 BaseBdev1_malloc 00:11:50.012 15:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:50.270 true 00:11:50.270 15:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.527 [2024-07-12 15:00:16.162592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.527 [2024-07-12 15:00:16.162659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.527 [2024-07-12 15:00:16.162687] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144a63034780 00:11:50.527 [2024-07-12 15:00:16.162707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.527 [2024-07-12 15:00:16.163382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.527 [2024-07-12 15:00:16.163403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.527 BaseBdev1 00:11:50.527 15:00:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:50.527 15:00:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.786 BaseBdev2_malloc 00:11:50.786 15:00:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:51.045 true 00:11:51.045 15:00:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.303 [2024-07-12 15:00:17.026565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.303 [2024-07-12 15:00:17.026629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.303 [2024-07-12 15:00:17.026655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144a63034c80 00:11:51.303 [2024-07-12 15:00:17.026665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.303 [2024-07-12 15:00:17.027319] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.303 [2024-07-12 15:00:17.027345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.303 BaseBdev2 00:11:51.303 15:00:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:51.303 15:00:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.568 BaseBdev3_malloc 00:11:51.568 15:00:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:51.826 true 00:11:51.826 15:00:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.085 [2024-07-12 15:00:17.798550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.085 [2024-07-12 15:00:17.798612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.085 [2024-07-12 15:00:17.798640] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144a63035180 00:11:52.085 [2024-07-12 15:00:17.798650] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.085 [2024-07-12 15:00:17.799302] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.085 [2024-07-12 15:00:17.799329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:52.085 BaseBdev3 00:11:52.085 15:00:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:52.343 [2024-07-12 15:00:18.074560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.343 [2024-07-12 15:00:18.075153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.343 [2024-07-12 15:00:18.075181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.343 [2024-07-12 15:00:18.075240] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x144a63035400 00:11:52.343 [2024-07-12 15:00:18.075246] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:52.343 [2024-07-12 15:00:18.075285] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144a630a0e20 00:11:52.343 [2024-07-12 15:00:18.075361] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x144a63035400 00:11:52.343 [2024-07-12 15:00:18.075365] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x144a63035400 00:11:52.343 [2024-07-12 15:00:18.075393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.343 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.601 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.601 "name": "raid_bdev1", 00:11:52.601 "uuid": "73ce4d28-405f-11ef-b2a4-e9dca065e82e", 00:11:52.601 "strip_size_kb": 64, 00:11:52.601 "state": "online", 00:11:52.601 "raid_level": "concat", 00:11:52.601 "superblock": true, 00:11:52.601 "num_base_bdevs": 3, 00:11:52.601 "num_base_bdevs_discovered": 3, 00:11:52.601 "num_base_bdevs_operational": 3, 00:11:52.601 "base_bdevs_list": [ 00:11:52.601 { 00:11:52.601 "name": "BaseBdev1", 00:11:52.601 "uuid": "c20c9f74-d032-c051-8649-1726b1ecaec7", 00:11:52.601 "is_configured": true, 00:11:52.601 "data_offset": 2048, 00:11:52.601 "data_size": 63488 00:11:52.601 }, 00:11:52.601 { 00:11:52.601 "name": "BaseBdev2", 00:11:52.601 "uuid": "451a3389-0234-e75b-b87e-e60a5dea906c", 00:11:52.601 "is_configured": true, 00:11:52.601 "data_offset": 2048, 00:11:52.601 "data_size": 63488 00:11:52.601 }, 00:11:52.601 { 00:11:52.601 "name": "BaseBdev3", 00:11:52.601 "uuid": "e0f5e6c9-3b59-ee50-ae8b-5bd64cee60c2", 00:11:52.601 "is_configured": true, 00:11:52.601 "data_offset": 2048, 00:11:52.601 "data_size": 63488 00:11:52.601 } 00:11:52.601 ] 00:11:52.601 }' 00:11:52.601 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.601 15:00:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.166 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:53.166 15:00:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:11:53.166 [2024-07-12 15:00:18.822705] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144a630a0ec0 00:11:54.099 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.357 15:00:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.614 15:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.614 "name": "raid_bdev1", 00:11:54.614 "uuid": "73ce4d28-405f-11ef-b2a4-e9dca065e82e", 00:11:54.615 "strip_size_kb": 64, 00:11:54.615 "state": "online", 00:11:54.615 "raid_level": "concat", 00:11:54.615 "superblock": true, 00:11:54.615 "num_base_bdevs": 3, 00:11:54.615 "num_base_bdevs_discovered": 3, 00:11:54.615 "num_base_bdevs_operational": 3, 00:11:54.615 "base_bdevs_list": [ 00:11:54.615 { 00:11:54.615 "name": "BaseBdev1", 00:11:54.615 "uuid": "c20c9f74-d032-c051-8649-1726b1ecaec7", 00:11:54.615 "is_configured": true, 00:11:54.615 "data_offset": 2048, 00:11:54.615 "data_size": 63488 00:11:54.615 }, 00:11:54.615 { 00:11:54.615 "name": "BaseBdev2", 00:11:54.615 "uuid": "451a3389-0234-e75b-b87e-e60a5dea906c", 00:11:54.615 "is_configured": true, 00:11:54.615 "data_offset": 2048, 00:11:54.615 "data_size": 63488 00:11:54.615 }, 00:11:54.615 { 00:11:54.615 "name": "BaseBdev3", 00:11:54.615 "uuid": "e0f5e6c9-3b59-ee50-ae8b-5bd64cee60c2", 00:11:54.615 "is_configured": true, 00:11:54.615 "data_offset": 2048, 00:11:54.615 "data_size": 63488 00:11:54.615 } 00:11:54.615 ] 00:11:54.615 }' 00:11:54.615 15:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.615 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.871 
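The only functional difference from the read test is the direction of the injected fault:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_error_inject_error EE_BaseBdev1_malloc write failure    # fail writes on the BaseBdev1 path

The state check above still expects the concat volume to stay online with all three base bdevs; the effect of the failed writes only shows up afterwards in bdevperf's counters (the fail_per_s=0.51 further down).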
15:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:55.127 [2024-07-12 15:00:20.788203] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.127 [2024-07-12 15:00:20.788233] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.127 [2024-07-12 15:00:20.788574] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.127 [2024-07-12 15:00:20.788585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.127 [2024-07-12 15:00:20.788593] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.127 [2024-07-12 15:00:20.788598] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144a63035400 name raid_bdev1, state offline 00:11:55.127 0 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55974 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55974 ']' 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55974 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55974 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:55.127 killing process with pid 55974 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55974' 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55974 00:11:55.127 [2024-07-12 15:00:20.822272] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.127 15:00:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55974 00:11:55.127 [2024-07-12 15:00:20.839525] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uAl5hoRHD0 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:11:55.385 00:11:55.385 real 0m6.711s 00:11:55.385 user 0m10.659s 00:11:55.385 sys 0m1.093s 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:11:55.385 15:00:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.385 ************************************ 00:11:55.385 END TEST raid_write_error_test 00:11:55.385 ************************************ 00:11:55.385 15:00:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:55.385 15:00:21 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:55.385 15:00:21 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:55.385 15:00:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:55.385 15:00:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.385 15:00:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.385 ************************************ 00:11:55.385 START TEST raid_state_function_test 00:11:55.385 ************************************ 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56107 00:11:55.385 Process raid pid: 56107 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56107' 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56107 /var/tmp/spdk-raid.sock 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 56107 ']' 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.385 15:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.385 [2024-07-12 15:00:21.082397] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:11:55.385 [2024-07-12 15:00:21.082668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:55.950 EAL: TSC is not safe to use in SMP mode 00:11:55.950 EAL: TSC is not invariant 00:11:55.950 [2024-07-12 15:00:21.642290] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.950 [2024-07-12 15:00:21.741916] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
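raid_state_function_test generates no I/O, so instead of bdevperf it launches the minimal bdev_svc application purely as an RPC target and then walks Existed_Raid through its state transitions. Reduced to its essentials, the launch traced above is:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

The first step against it, visible below, creates a raid1 volume whose base bdevs do not exist yet; raid1 takes no strip size and this run uses no superblock, so the call is simply bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid, and the volume is expected to stay in the "configuring" state until all three members appear.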
00:11:55.950 [2024-07-12 15:00:21.744712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.950 [2024-07-12 15:00:21.745706] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.950 [2024-07-12 15:00:21.745728] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.514 15:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.514 15:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:56.514 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:56.771 [2024-07-12 15:00:22.512021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.771 [2024-07-12 15:00:22.512078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.771 [2024-07-12 15:00:22.512084] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.771 [2024-07-12 15:00:22.512093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.771 [2024-07-12 15:00:22.512096] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.771 [2024-07-12 15:00:22.512104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.771 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.029 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.029 "name": "Existed_Raid", 00:11:57.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.029 "strip_size_kb": 0, 00:11:57.029 "state": "configuring", 00:11:57.029 "raid_level": "raid1", 00:11:57.029 "superblock": false, 00:11:57.029 "num_base_bdevs": 3, 00:11:57.029 "num_base_bdevs_discovered": 0, 00:11:57.029 "num_base_bdevs_operational": 3, 00:11:57.029 "base_bdevs_list": [ 00:11:57.029 
{ 00:11:57.029 "name": "BaseBdev1", 00:11:57.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.029 "is_configured": false, 00:11:57.029 "data_offset": 0, 00:11:57.029 "data_size": 0 00:11:57.029 }, 00:11:57.029 { 00:11:57.029 "name": "BaseBdev2", 00:11:57.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.029 "is_configured": false, 00:11:57.029 "data_offset": 0, 00:11:57.029 "data_size": 0 00:11:57.029 }, 00:11:57.029 { 00:11:57.029 "name": "BaseBdev3", 00:11:57.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.029 "is_configured": false, 00:11:57.029 "data_offset": 0, 00:11:57.029 "data_size": 0 00:11:57.029 } 00:11:57.029 ] 00:11:57.029 }' 00:11:57.029 15:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.029 15:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.594 15:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:57.594 [2024-07-12 15:00:23.384000] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.594 [2024-07-12 15:00:23.384027] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99ba2434500 name Existed_Raid, state configuring 00:11:57.594 15:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:57.851 [2024-07-12 15:00:23.611992] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.851 [2024-07-12 15:00:23.612040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.851 [2024-07-12 15:00:23.612046] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.851 [2024-07-12 15:00:23.612055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.851 [2024-07-12 15:00:23.612058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.851 [2024-07-12 15:00:23.612066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.851 15:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.109 [2024-07-12 15:00:23.841025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.109 BaseBdev1 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.109 15:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:58.366 15:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.979 [ 00:11:58.979 { 00:11:58.979 "name": "BaseBdev1", 00:11:58.979 "aliases": [ 00:11:58.979 "773e0ae8-405f-11ef-b2a4-e9dca065e82e" 00:11:58.979 ], 00:11:58.979 "product_name": "Malloc disk", 00:11:58.979 "block_size": 512, 00:11:58.979 "num_blocks": 65536, 00:11:58.979 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:11:58.979 "assigned_rate_limits": { 00:11:58.979 "rw_ios_per_sec": 0, 00:11:58.979 "rw_mbytes_per_sec": 0, 00:11:58.979 "r_mbytes_per_sec": 0, 00:11:58.979 "w_mbytes_per_sec": 0 00:11:58.979 }, 00:11:58.979 "claimed": true, 00:11:58.979 "claim_type": "exclusive_write", 00:11:58.979 "zoned": false, 00:11:58.979 "supported_io_types": { 00:11:58.979 "read": true, 00:11:58.979 "write": true, 00:11:58.979 "unmap": true, 00:11:58.979 "flush": true, 00:11:58.979 "reset": true, 00:11:58.979 "nvme_admin": false, 00:11:58.979 "nvme_io": false, 00:11:58.979 "nvme_io_md": false, 00:11:58.979 "write_zeroes": true, 00:11:58.979 "zcopy": true, 00:11:58.979 "get_zone_info": false, 00:11:58.979 "zone_management": false, 00:11:58.979 "zone_append": false, 00:11:58.979 "compare": false, 00:11:58.979 "compare_and_write": false, 00:11:58.979 "abort": true, 00:11:58.979 "seek_hole": false, 00:11:58.979 "seek_data": false, 00:11:58.979 "copy": true, 00:11:58.979 "nvme_iov_md": false 00:11:58.979 }, 00:11:58.979 "memory_domains": [ 00:11:58.979 { 00:11:58.979 "dma_device_id": "system", 00:11:58.979 "dma_device_type": 1 00:11:58.979 }, 00:11:58.979 { 00:11:58.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.979 "dma_device_type": 2 00:11:58.979 } 00:11:58.979 ], 00:11:58.979 "driver_specific": {} 00:11:58.979 } 00:11:58.979 ] 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:58.979 "name": "Existed_Raid", 00:11:58.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.979 "strip_size_kb": 0, 00:11:58.979 "state": "configuring", 00:11:58.979 "raid_level": "raid1", 00:11:58.979 "superblock": false, 00:11:58.979 "num_base_bdevs": 3, 00:11:58.979 "num_base_bdevs_discovered": 1, 00:11:58.979 "num_base_bdevs_operational": 3, 00:11:58.979 "base_bdevs_list": [ 00:11:58.979 { 00:11:58.979 "name": "BaseBdev1", 00:11:58.979 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:11:58.979 "is_configured": true, 00:11:58.979 "data_offset": 0, 00:11:58.979 "data_size": 65536 00:11:58.979 }, 00:11:58.979 { 00:11:58.979 "name": "BaseBdev2", 00:11:58.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.979 "is_configured": false, 00:11:58.979 "data_offset": 0, 00:11:58.979 "data_size": 0 00:11:58.979 }, 00:11:58.979 { 00:11:58.979 "name": "BaseBdev3", 00:11:58.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.979 "is_configured": false, 00:11:58.979 "data_offset": 0, 00:11:58.979 "data_size": 0 00:11:58.979 } 00:11:58.979 ] 00:11:58.979 }' 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.979 15:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.543 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:59.543 [2024-07-12 15:00:25.292015] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.543 [2024-07-12 15:00:25.292051] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99ba2434500 name Existed_Raid, state configuring 00:11:59.543 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:59.814 [2024-07-12 15:00:25.524054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.814 [2024-07-12 15:00:25.524884] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.814 [2024-07-12 15:00:25.524928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.814 [2024-07-12 15:00:25.524933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.814 [2024-07-12 15:00:25.524942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- 
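Two details in the JSON above are worth noting. First, no superblock was requested this time, so BaseBdev1 contributes its full capacity (data_offset 0, data_size 65536) rather than the 2048-block offset seen in the error tests. Second, the discovered count only tracks members that actually exist: deleting and re-creating Existed_Raid while a single member is present immediately re-claims that member and leaves the array "configuring" with one base bdev discovered. The next step amounts to:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b BaseBdev2        # the next member appears and is claimed
  $rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'    # now 2

Once BaseBdev3 is created as well, the array has all three members and finishes configuring, which is what the raid_bdev_configure_cont messages near the end of this trace correspond to; note that the resulting raid1 capacity equals a single member (blockcnt 65536), unlike the summed capacity of the concat volumes earlier.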
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.814 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.079 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.080 "name": "Existed_Raid", 00:12:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.080 "strip_size_kb": 0, 00:12:00.080 "state": "configuring", 00:12:00.080 "raid_level": "raid1", 00:12:00.080 "superblock": false, 00:12:00.080 "num_base_bdevs": 3, 00:12:00.080 "num_base_bdevs_discovered": 1, 00:12:00.080 "num_base_bdevs_operational": 3, 00:12:00.080 "base_bdevs_list": [ 00:12:00.080 { 00:12:00.080 "name": "BaseBdev1", 00:12:00.080 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:12:00.080 "is_configured": true, 00:12:00.080 "data_offset": 0, 00:12:00.080 "data_size": 65536 00:12:00.080 }, 00:12:00.080 { 00:12:00.080 "name": "BaseBdev2", 00:12:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.080 "is_configured": false, 00:12:00.080 "data_offset": 0, 00:12:00.080 "data_size": 0 00:12:00.080 }, 00:12:00.080 { 00:12:00.080 "name": "BaseBdev3", 00:12:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.080 "is_configured": false, 00:12:00.080 "data_offset": 0, 00:12:00.080 "data_size": 0 00:12:00.080 } 00:12:00.080 ] 00:12:00.080 }' 00:12:00.080 15:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.080 15:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.645 15:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.645 [2024-07-12 15:00:26.460168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.645 BaseBdev2 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.902 15:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.467 [ 00:12:01.467 { 00:12:01.467 "name": "BaseBdev2", 00:12:01.467 "aliases": [ 00:12:01.467 "78cdd273-405f-11ef-b2a4-e9dca065e82e" 00:12:01.467 ], 00:12:01.467 "product_name": "Malloc disk", 00:12:01.467 "block_size": 512, 00:12:01.467 "num_blocks": 65536, 00:12:01.467 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:01.467 "assigned_rate_limits": { 00:12:01.467 "rw_ios_per_sec": 0, 00:12:01.467 "rw_mbytes_per_sec": 0, 00:12:01.467 "r_mbytes_per_sec": 0, 00:12:01.467 "w_mbytes_per_sec": 0 00:12:01.467 }, 00:12:01.467 "claimed": true, 00:12:01.467 "claim_type": "exclusive_write", 00:12:01.467 "zoned": false, 00:12:01.467 "supported_io_types": { 00:12:01.467 "read": true, 00:12:01.467 "write": true, 00:12:01.467 "unmap": true, 00:12:01.467 "flush": true, 00:12:01.467 "reset": true, 00:12:01.467 "nvme_admin": false, 00:12:01.467 "nvme_io": false, 00:12:01.467 "nvme_io_md": false, 00:12:01.467 "write_zeroes": true, 00:12:01.467 "zcopy": true, 00:12:01.467 "get_zone_info": false, 00:12:01.467 "zone_management": false, 00:12:01.467 "zone_append": false, 00:12:01.468 "compare": false, 00:12:01.468 "compare_and_write": false, 00:12:01.468 "abort": true, 00:12:01.468 "seek_hole": false, 00:12:01.468 "seek_data": false, 00:12:01.468 "copy": true, 00:12:01.468 "nvme_iov_md": false 00:12:01.468 }, 00:12:01.468 "memory_domains": [ 00:12:01.468 { 00:12:01.468 "dma_device_id": "system", 00:12:01.468 "dma_device_type": 1 00:12:01.468 }, 00:12:01.468 { 00:12:01.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.468 "dma_device_type": 2 00:12:01.468 } 00:12:01.468 ], 00:12:01.468 "driver_specific": {} 00:12:01.468 } 00:12:01.468 ] 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.468 15:00:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.468 "name": "Existed_Raid", 00:12:01.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.468 "strip_size_kb": 0, 00:12:01.468 "state": "configuring", 00:12:01.468 "raid_level": "raid1", 00:12:01.468 "superblock": false, 00:12:01.468 "num_base_bdevs": 3, 00:12:01.468 "num_base_bdevs_discovered": 2, 00:12:01.468 "num_base_bdevs_operational": 3, 00:12:01.468 "base_bdevs_list": [ 00:12:01.468 { 00:12:01.468 "name": "BaseBdev1", 00:12:01.468 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:12:01.468 "is_configured": true, 00:12:01.468 "data_offset": 0, 00:12:01.468 "data_size": 65536 00:12:01.468 }, 00:12:01.468 { 00:12:01.468 "name": "BaseBdev2", 00:12:01.468 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:01.468 "is_configured": true, 00:12:01.468 "data_offset": 0, 00:12:01.468 "data_size": 65536 00:12:01.468 }, 00:12:01.468 { 00:12:01.468 "name": "BaseBdev3", 00:12:01.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.468 "is_configured": false, 00:12:01.468 "data_offset": 0, 00:12:01.468 "data_size": 0 00:12:01.468 } 00:12:01.468 ] 00:12:01.468 }' 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.468 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.034 [2024-07-12 15:00:27.820154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.034 [2024-07-12 15:00:27.820185] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x99ba2434a00 00:12:02.034 [2024-07-12 15:00:27.820190] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.034 [2024-07-12 15:00:27.820217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99ba2497e20 00:12:02.034 [2024-07-12 15:00:27.820311] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x99ba2434a00 00:12:02.034 [2024-07-12 15:00:27.820316] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x99ba2434a00 00:12:02.034 [2024-07-12 15:00:27.820349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.034 BaseBdev3 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:02.034 15:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.293 15:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
-t 2000 00:12:02.551 [ 00:12:02.551 { 00:12:02.551 "name": "BaseBdev3", 00:12:02.551 "aliases": [ 00:12:02.551 "799d57d3-405f-11ef-b2a4-e9dca065e82e" 00:12:02.551 ], 00:12:02.551 "product_name": "Malloc disk", 00:12:02.551 "block_size": 512, 00:12:02.551 "num_blocks": 65536, 00:12:02.551 "uuid": "799d57d3-405f-11ef-b2a4-e9dca065e82e", 00:12:02.551 "assigned_rate_limits": { 00:12:02.551 "rw_ios_per_sec": 0, 00:12:02.551 "rw_mbytes_per_sec": 0, 00:12:02.551 "r_mbytes_per_sec": 0, 00:12:02.551 "w_mbytes_per_sec": 0 00:12:02.551 }, 00:12:02.551 "claimed": true, 00:12:02.551 "claim_type": "exclusive_write", 00:12:02.551 "zoned": false, 00:12:02.551 "supported_io_types": { 00:12:02.551 "read": true, 00:12:02.551 "write": true, 00:12:02.551 "unmap": true, 00:12:02.551 "flush": true, 00:12:02.551 "reset": true, 00:12:02.551 "nvme_admin": false, 00:12:02.551 "nvme_io": false, 00:12:02.551 "nvme_io_md": false, 00:12:02.551 "write_zeroes": true, 00:12:02.551 "zcopy": true, 00:12:02.551 "get_zone_info": false, 00:12:02.551 "zone_management": false, 00:12:02.551 "zone_append": false, 00:12:02.551 "compare": false, 00:12:02.551 "compare_and_write": false, 00:12:02.551 "abort": true, 00:12:02.551 "seek_hole": false, 00:12:02.551 "seek_data": false, 00:12:02.551 "copy": true, 00:12:02.551 "nvme_iov_md": false 00:12:02.551 }, 00:12:02.551 "memory_domains": [ 00:12:02.551 { 00:12:02.551 "dma_device_id": "system", 00:12:02.551 "dma_device_type": 1 00:12:02.551 }, 00:12:02.551 { 00:12:02.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.551 "dma_device_type": 2 00:12:02.551 } 00:12:02.551 ], 00:12:02.551 "driver_specific": {} 00:12:02.551 } 00:12:02.551 ] 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.551 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.809 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:12:02.809 "name": "Existed_Raid", 00:12:02.809 "uuid": "799d5e63-405f-11ef-b2a4-e9dca065e82e", 00:12:02.809 "strip_size_kb": 0, 00:12:02.809 "state": "online", 00:12:02.809 "raid_level": "raid1", 00:12:02.809 "superblock": false, 00:12:02.809 "num_base_bdevs": 3, 00:12:02.809 "num_base_bdevs_discovered": 3, 00:12:02.809 "num_base_bdevs_operational": 3, 00:12:02.809 "base_bdevs_list": [ 00:12:02.809 { 00:12:02.809 "name": "BaseBdev1", 00:12:02.809 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 0, 00:12:02.809 "data_size": 65536 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "name": "BaseBdev2", 00:12:02.809 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 0, 00:12:02.809 "data_size": 65536 00:12:02.809 }, 00:12:02.810 { 00:12:02.810 "name": "BaseBdev3", 00:12:02.810 "uuid": "799d57d3-405f-11ef-b2a4-e9dca065e82e", 00:12:02.810 "is_configured": true, 00:12:02.810 "data_offset": 0, 00:12:02.810 "data_size": 65536 00:12:02.810 } 00:12:02.810 ] 00:12:02.810 }' 00:12:02.810 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.810 15:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:03.376 15:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:03.376 [2024-07-12 15:00:29.160044] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.376 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:03.376 "name": "Existed_Raid", 00:12:03.376 "aliases": [ 00:12:03.376 "799d5e63-405f-11ef-b2a4-e9dca065e82e" 00:12:03.376 ], 00:12:03.376 "product_name": "Raid Volume", 00:12:03.376 "block_size": 512, 00:12:03.376 "num_blocks": 65536, 00:12:03.376 "uuid": "799d5e63-405f-11ef-b2a4-e9dca065e82e", 00:12:03.376 "assigned_rate_limits": { 00:12:03.376 "rw_ios_per_sec": 0, 00:12:03.376 "rw_mbytes_per_sec": 0, 00:12:03.376 "r_mbytes_per_sec": 0, 00:12:03.376 "w_mbytes_per_sec": 0 00:12:03.376 }, 00:12:03.376 "claimed": false, 00:12:03.376 "zoned": false, 00:12:03.376 "supported_io_types": { 00:12:03.376 "read": true, 00:12:03.376 "write": true, 00:12:03.376 "unmap": false, 00:12:03.376 "flush": false, 00:12:03.376 "reset": true, 00:12:03.376 "nvme_admin": false, 00:12:03.376 "nvme_io": false, 00:12:03.376 "nvme_io_md": false, 00:12:03.376 "write_zeroes": true, 00:12:03.376 "zcopy": false, 00:12:03.376 "get_zone_info": false, 00:12:03.377 "zone_management": false, 00:12:03.377 "zone_append": false, 00:12:03.377 "compare": false, 00:12:03.377 
"compare_and_write": false, 00:12:03.377 "abort": false, 00:12:03.377 "seek_hole": false, 00:12:03.377 "seek_data": false, 00:12:03.377 "copy": false, 00:12:03.377 "nvme_iov_md": false 00:12:03.377 }, 00:12:03.377 "memory_domains": [ 00:12:03.377 { 00:12:03.377 "dma_device_id": "system", 00:12:03.377 "dma_device_type": 1 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.377 "dma_device_type": 2 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "dma_device_id": "system", 00:12:03.377 "dma_device_type": 1 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.377 "dma_device_type": 2 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "dma_device_id": "system", 00:12:03.377 "dma_device_type": 1 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.377 "dma_device_type": 2 00:12:03.377 } 00:12:03.377 ], 00:12:03.377 "driver_specific": { 00:12:03.377 "raid": { 00:12:03.377 "uuid": "799d5e63-405f-11ef-b2a4-e9dca065e82e", 00:12:03.377 "strip_size_kb": 0, 00:12:03.377 "state": "online", 00:12:03.377 "raid_level": "raid1", 00:12:03.377 "superblock": false, 00:12:03.377 "num_base_bdevs": 3, 00:12:03.377 "num_base_bdevs_discovered": 3, 00:12:03.377 "num_base_bdevs_operational": 3, 00:12:03.377 "base_bdevs_list": [ 00:12:03.377 { 00:12:03.377 "name": "BaseBdev1", 00:12:03.377 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:12:03.377 "is_configured": true, 00:12:03.377 "data_offset": 0, 00:12:03.377 "data_size": 65536 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "name": "BaseBdev2", 00:12:03.377 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:03.377 "is_configured": true, 00:12:03.377 "data_offset": 0, 00:12:03.377 "data_size": 65536 00:12:03.377 }, 00:12:03.377 { 00:12:03.377 "name": "BaseBdev3", 00:12:03.377 "uuid": "799d57d3-405f-11ef-b2a4-e9dca065e82e", 00:12:03.377 "is_configured": true, 00:12:03.377 "data_offset": 0, 00:12:03.377 "data_size": 65536 00:12:03.377 } 00:12:03.377 ] 00:12:03.377 } 00:12:03.377 } 00:12:03.377 }' 00:12:03.377 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.377 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:03.377 BaseBdev2 00:12:03.377 BaseBdev3' 00:12:03.377 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:03.377 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:03.377 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:03.943 "name": "BaseBdev1", 00:12:03.943 "aliases": [ 00:12:03.943 "773e0ae8-405f-11ef-b2a4-e9dca065e82e" 00:12:03.943 ], 00:12:03.943 "product_name": "Malloc disk", 00:12:03.943 "block_size": 512, 00:12:03.943 "num_blocks": 65536, 00:12:03.943 "uuid": "773e0ae8-405f-11ef-b2a4-e9dca065e82e", 00:12:03.943 "assigned_rate_limits": { 00:12:03.943 "rw_ios_per_sec": 0, 00:12:03.943 "rw_mbytes_per_sec": 0, 00:12:03.943 "r_mbytes_per_sec": 0, 00:12:03.943 "w_mbytes_per_sec": 0 00:12:03.943 }, 00:12:03.943 "claimed": true, 00:12:03.943 "claim_type": "exclusive_write", 00:12:03.943 "zoned": false, 00:12:03.943 "supported_io_types": { 
00:12:03.943 "read": true, 00:12:03.943 "write": true, 00:12:03.943 "unmap": true, 00:12:03.943 "flush": true, 00:12:03.943 "reset": true, 00:12:03.943 "nvme_admin": false, 00:12:03.943 "nvme_io": false, 00:12:03.943 "nvme_io_md": false, 00:12:03.943 "write_zeroes": true, 00:12:03.943 "zcopy": true, 00:12:03.943 "get_zone_info": false, 00:12:03.943 "zone_management": false, 00:12:03.943 "zone_append": false, 00:12:03.943 "compare": false, 00:12:03.943 "compare_and_write": false, 00:12:03.943 "abort": true, 00:12:03.943 "seek_hole": false, 00:12:03.943 "seek_data": false, 00:12:03.943 "copy": true, 00:12:03.943 "nvme_iov_md": false 00:12:03.943 }, 00:12:03.943 "memory_domains": [ 00:12:03.943 { 00:12:03.943 "dma_device_id": "system", 00:12:03.943 "dma_device_type": 1 00:12:03.943 }, 00:12:03.943 { 00:12:03.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.943 "dma_device_type": 2 00:12:03.943 } 00:12:03.943 ], 00:12:03.943 "driver_specific": {} 00:12:03.943 }' 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:03.943 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:04.201 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:04.201 "name": "BaseBdev2", 00:12:04.201 "aliases": [ 00:12:04.201 "78cdd273-405f-11ef-b2a4-e9dca065e82e" 00:12:04.201 ], 00:12:04.201 "product_name": "Malloc disk", 00:12:04.201 "block_size": 512, 00:12:04.201 "num_blocks": 65536, 00:12:04.201 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:04.201 "assigned_rate_limits": { 00:12:04.202 "rw_ios_per_sec": 0, 00:12:04.202 "rw_mbytes_per_sec": 0, 00:12:04.202 "r_mbytes_per_sec": 0, 00:12:04.202 "w_mbytes_per_sec": 0 00:12:04.202 }, 00:12:04.202 "claimed": true, 00:12:04.202 "claim_type": "exclusive_write", 00:12:04.202 "zoned": false, 00:12:04.202 "supported_io_types": { 00:12:04.202 "read": true, 00:12:04.202 "write": true, 00:12:04.202 "unmap": true, 00:12:04.202 "flush": true, 00:12:04.202 "reset": true, 00:12:04.202 "nvme_admin": false, 00:12:04.202 "nvme_io": 
false, 00:12:04.202 "nvme_io_md": false, 00:12:04.202 "write_zeroes": true, 00:12:04.202 "zcopy": true, 00:12:04.202 "get_zone_info": false, 00:12:04.202 "zone_management": false, 00:12:04.202 "zone_append": false, 00:12:04.202 "compare": false, 00:12:04.202 "compare_and_write": false, 00:12:04.202 "abort": true, 00:12:04.202 "seek_hole": false, 00:12:04.202 "seek_data": false, 00:12:04.202 "copy": true, 00:12:04.202 "nvme_iov_md": false 00:12:04.202 }, 00:12:04.202 "memory_domains": [ 00:12:04.202 { 00:12:04.202 "dma_device_id": "system", 00:12:04.202 "dma_device_type": 1 00:12:04.202 }, 00:12:04.202 { 00:12:04.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.202 "dma_device_type": 2 00:12:04.202 } 00:12:04.202 ], 00:12:04.202 "driver_specific": {} 00:12:04.202 }' 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:04.202 15:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:04.460 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:04.460 "name": "BaseBdev3", 00:12:04.460 "aliases": [ 00:12:04.460 "799d57d3-405f-11ef-b2a4-e9dca065e82e" 00:12:04.460 ], 00:12:04.460 "product_name": "Malloc disk", 00:12:04.460 "block_size": 512, 00:12:04.460 "num_blocks": 65536, 00:12:04.460 "uuid": "799d57d3-405f-11ef-b2a4-e9dca065e82e", 00:12:04.460 "assigned_rate_limits": { 00:12:04.460 "rw_ios_per_sec": 0, 00:12:04.460 "rw_mbytes_per_sec": 0, 00:12:04.460 "r_mbytes_per_sec": 0, 00:12:04.460 "w_mbytes_per_sec": 0 00:12:04.460 }, 00:12:04.460 "claimed": true, 00:12:04.460 "claim_type": "exclusive_write", 00:12:04.460 "zoned": false, 00:12:04.460 "supported_io_types": { 00:12:04.460 "read": true, 00:12:04.460 "write": true, 00:12:04.460 "unmap": true, 00:12:04.460 "flush": true, 00:12:04.460 "reset": true, 00:12:04.460 "nvme_admin": false, 00:12:04.460 "nvme_io": false, 00:12:04.460 "nvme_io_md": false, 00:12:04.460 "write_zeroes": true, 00:12:04.460 "zcopy": true, 00:12:04.461 "get_zone_info": false, 00:12:04.461 "zone_management": false, 00:12:04.461 
"zone_append": false, 00:12:04.461 "compare": false, 00:12:04.461 "compare_and_write": false, 00:12:04.461 "abort": true, 00:12:04.461 "seek_hole": false, 00:12:04.461 "seek_data": false, 00:12:04.461 "copy": true, 00:12:04.461 "nvme_iov_md": false 00:12:04.461 }, 00:12:04.461 "memory_domains": [ 00:12:04.461 { 00:12:04.461 "dma_device_id": "system", 00:12:04.461 "dma_device_type": 1 00:12:04.461 }, 00:12:04.461 { 00:12:04.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.461 "dma_device_type": 2 00:12:04.461 } 00:12:04.461 ], 00:12:04.461 "driver_specific": {} 00:12:04.461 }' 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:04.461 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:04.720 [2024-07-12 15:00:30.432012] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.720 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.978 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.978 "name": "Existed_Raid", 00:12:04.978 "uuid": "799d5e63-405f-11ef-b2a4-e9dca065e82e", 00:12:04.978 "strip_size_kb": 0, 00:12:04.978 "state": "online", 00:12:04.978 "raid_level": "raid1", 00:12:04.978 "superblock": false, 00:12:04.978 "num_base_bdevs": 3, 00:12:04.978 "num_base_bdevs_discovered": 2, 00:12:04.978 "num_base_bdevs_operational": 2, 00:12:04.978 "base_bdevs_list": [ 00:12:04.978 { 00:12:04.978 "name": null, 00:12:04.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.978 "is_configured": false, 00:12:04.978 "data_offset": 0, 00:12:04.978 "data_size": 65536 00:12:04.978 }, 00:12:04.978 { 00:12:04.978 "name": "BaseBdev2", 00:12:04.978 "uuid": "78cdd273-405f-11ef-b2a4-e9dca065e82e", 00:12:04.978 "is_configured": true, 00:12:04.978 "data_offset": 0, 00:12:04.978 "data_size": 65536 00:12:04.978 }, 00:12:04.978 { 00:12:04.978 "name": "BaseBdev3", 00:12:04.978 "uuid": "799d57d3-405f-11ef-b2a4-e9dca065e82e", 00:12:04.978 "is_configured": true, 00:12:04.978 "data_offset": 0, 00:12:04.978 "data_size": 65536 00:12:04.978 } 00:12:04.978 ] 00:12:04.978 }' 00:12:04.978 15:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.978 15:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.237 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:05.237 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:05.237 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.237 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:05.495 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:05.495 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.495 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:05.809 [2024-07-12 15:00:31.533816] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.809 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:05.809 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:05.809 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.810 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:06.068 15:00:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:06.068 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.068 15:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:06.325 [2024-07-12 15:00:32.094137] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.325 [2024-07-12 15:00:32.094189] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.325 [2024-07-12 15:00:32.102703] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.325 [2024-07-12 15:00:32.102722] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.325 [2024-07-12 15:00:32.102727] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99ba2434a00 name Existed_Raid, state offline 00:12:06.325 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:06.325 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:06.325 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.325 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:06.584 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.842 BaseBdev2 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:06.842 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:07.407 15:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.665 [ 00:12:07.665 { 00:12:07.665 "name": "BaseBdev2", 00:12:07.665 "aliases": [ 00:12:07.665 "7c7857f4-405f-11ef-b2a4-e9dca065e82e" 00:12:07.665 ], 00:12:07.665 "product_name": "Malloc disk", 00:12:07.665 
"block_size": 512, 00:12:07.665 "num_blocks": 65536, 00:12:07.665 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:07.665 "assigned_rate_limits": { 00:12:07.665 "rw_ios_per_sec": 0, 00:12:07.665 "rw_mbytes_per_sec": 0, 00:12:07.665 "r_mbytes_per_sec": 0, 00:12:07.665 "w_mbytes_per_sec": 0 00:12:07.665 }, 00:12:07.665 "claimed": false, 00:12:07.665 "zoned": false, 00:12:07.665 "supported_io_types": { 00:12:07.665 "read": true, 00:12:07.665 "write": true, 00:12:07.665 "unmap": true, 00:12:07.665 "flush": true, 00:12:07.665 "reset": true, 00:12:07.665 "nvme_admin": false, 00:12:07.665 "nvme_io": false, 00:12:07.665 "nvme_io_md": false, 00:12:07.665 "write_zeroes": true, 00:12:07.665 "zcopy": true, 00:12:07.665 "get_zone_info": false, 00:12:07.665 "zone_management": false, 00:12:07.665 "zone_append": false, 00:12:07.665 "compare": false, 00:12:07.665 "compare_and_write": false, 00:12:07.665 "abort": true, 00:12:07.665 "seek_hole": false, 00:12:07.665 "seek_data": false, 00:12:07.665 "copy": true, 00:12:07.665 "nvme_iov_md": false 00:12:07.665 }, 00:12:07.665 "memory_domains": [ 00:12:07.665 { 00:12:07.665 "dma_device_id": "system", 00:12:07.665 "dma_device_type": 1 00:12:07.665 }, 00:12:07.665 { 00:12:07.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.665 "dma_device_type": 2 00:12:07.665 } 00:12:07.665 ], 00:12:07.665 "driver_specific": {} 00:12:07.665 } 00:12:07.665 ] 00:12:07.665 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:07.665 15:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:07.665 15:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:07.665 15:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.923 BaseBdev3 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.923 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:08.182 15:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:08.439 [ 00:12:08.439 { 00:12:08.439 "name": "BaseBdev3", 00:12:08.439 "aliases": [ 00:12:08.439 "7d0b7014-405f-11ef-b2a4-e9dca065e82e" 00:12:08.439 ], 00:12:08.439 "product_name": "Malloc disk", 00:12:08.439 "block_size": 512, 00:12:08.439 "num_blocks": 65536, 00:12:08.439 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:08.439 "assigned_rate_limits": { 00:12:08.439 "rw_ios_per_sec": 0, 00:12:08.439 "rw_mbytes_per_sec": 0, 00:12:08.439 "r_mbytes_per_sec": 0, 00:12:08.439 "w_mbytes_per_sec": 0 00:12:08.439 }, 00:12:08.439 "claimed": false, 
00:12:08.439 "zoned": false, 00:12:08.439 "supported_io_types": { 00:12:08.439 "read": true, 00:12:08.439 "write": true, 00:12:08.439 "unmap": true, 00:12:08.439 "flush": true, 00:12:08.439 "reset": true, 00:12:08.439 "nvme_admin": false, 00:12:08.439 "nvme_io": false, 00:12:08.439 "nvme_io_md": false, 00:12:08.439 "write_zeroes": true, 00:12:08.439 "zcopy": true, 00:12:08.439 "get_zone_info": false, 00:12:08.439 "zone_management": false, 00:12:08.439 "zone_append": false, 00:12:08.439 "compare": false, 00:12:08.439 "compare_and_write": false, 00:12:08.439 "abort": true, 00:12:08.439 "seek_hole": false, 00:12:08.439 "seek_data": false, 00:12:08.439 "copy": true, 00:12:08.439 "nvme_iov_md": false 00:12:08.439 }, 00:12:08.439 "memory_domains": [ 00:12:08.439 { 00:12:08.439 "dma_device_id": "system", 00:12:08.439 "dma_device_type": 1 00:12:08.439 }, 00:12:08.439 { 00:12:08.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.439 "dma_device_type": 2 00:12:08.439 } 00:12:08.439 ], 00:12:08.439 "driver_specific": {} 00:12:08.439 } 00:12:08.439 ] 00:12:08.439 15:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:08.439 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:08.439 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:08.439 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:08.696 [2024-07-12 15:00:34.350727] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.696 [2024-07-12 15:00:34.350800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.696 [2024-07-12 15:00:34.350810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.696 [2024-07-12 15:00:34.351376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.696 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:12:08.963 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.963 "name": "Existed_Raid", 00:12:08.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.964 "strip_size_kb": 0, 00:12:08.964 "state": "configuring", 00:12:08.964 "raid_level": "raid1", 00:12:08.964 "superblock": false, 00:12:08.964 "num_base_bdevs": 3, 00:12:08.964 "num_base_bdevs_discovered": 2, 00:12:08.964 "num_base_bdevs_operational": 3, 00:12:08.964 "base_bdevs_list": [ 00:12:08.964 { 00:12:08.964 "name": "BaseBdev1", 00:12:08.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.964 "is_configured": false, 00:12:08.964 "data_offset": 0, 00:12:08.964 "data_size": 0 00:12:08.964 }, 00:12:08.964 { 00:12:08.964 "name": "BaseBdev2", 00:12:08.964 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:08.964 "is_configured": true, 00:12:08.964 "data_offset": 0, 00:12:08.964 "data_size": 65536 00:12:08.964 }, 00:12:08.964 { 00:12:08.964 "name": "BaseBdev3", 00:12:08.964 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:08.964 "is_configured": true, 00:12:08.964 "data_offset": 0, 00:12:08.964 "data_size": 65536 00:12:08.964 } 00:12:08.964 ] 00:12:08.964 }' 00:12:08.964 15:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.964 15:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:09.496 [2024-07-12 15:00:35.250727] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.496 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.062 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.062 "name": "Existed_Raid", 00:12:10.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.062 "strip_size_kb": 0, 00:12:10.062 "state": "configuring", 00:12:10.062 "raid_level": "raid1", 00:12:10.062 "superblock": 
false, 00:12:10.062 "num_base_bdevs": 3, 00:12:10.062 "num_base_bdevs_discovered": 1, 00:12:10.062 "num_base_bdevs_operational": 3, 00:12:10.062 "base_bdevs_list": [ 00:12:10.062 { 00:12:10.062 "name": "BaseBdev1", 00:12:10.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.062 "is_configured": false, 00:12:10.062 "data_offset": 0, 00:12:10.062 "data_size": 0 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "name": null, 00:12:10.062 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:10.062 "is_configured": false, 00:12:10.062 "data_offset": 0, 00:12:10.062 "data_size": 65536 00:12:10.062 }, 00:12:10.062 { 00:12:10.062 "name": "BaseBdev3", 00:12:10.062 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:10.062 "is_configured": true, 00:12:10.062 "data_offset": 0, 00:12:10.062 "data_size": 65536 00:12:10.062 } 00:12:10.062 ] 00:12:10.062 }' 00:12:10.062 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.062 15:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.320 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.320 15:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.577 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:10.577 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.832 [2024-07-12 15:00:36.438850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.833 BaseBdev1 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:10.833 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:11.089 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.348 [ 00:12:11.348 { 00:12:11.348 "name": "BaseBdev1", 00:12:11.348 "aliases": [ 00:12:11.348 "7ec07383-405f-11ef-b2a4-e9dca065e82e" 00:12:11.348 ], 00:12:11.348 "product_name": "Malloc disk", 00:12:11.348 "block_size": 512, 00:12:11.348 "num_blocks": 65536, 00:12:11.348 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:11.348 "assigned_rate_limits": { 00:12:11.348 "rw_ios_per_sec": 0, 00:12:11.348 "rw_mbytes_per_sec": 0, 00:12:11.348 "r_mbytes_per_sec": 0, 00:12:11.348 "w_mbytes_per_sec": 0 00:12:11.348 }, 00:12:11.348 "claimed": true, 00:12:11.348 "claim_type": "exclusive_write", 00:12:11.348 "zoned": false, 00:12:11.348 
"supported_io_types": { 00:12:11.348 "read": true, 00:12:11.348 "write": true, 00:12:11.348 "unmap": true, 00:12:11.348 "flush": true, 00:12:11.348 "reset": true, 00:12:11.348 "nvme_admin": false, 00:12:11.348 "nvme_io": false, 00:12:11.348 "nvme_io_md": false, 00:12:11.348 "write_zeroes": true, 00:12:11.348 "zcopy": true, 00:12:11.348 "get_zone_info": false, 00:12:11.348 "zone_management": false, 00:12:11.348 "zone_append": false, 00:12:11.348 "compare": false, 00:12:11.348 "compare_and_write": false, 00:12:11.348 "abort": true, 00:12:11.348 "seek_hole": false, 00:12:11.348 "seek_data": false, 00:12:11.348 "copy": true, 00:12:11.348 "nvme_iov_md": false 00:12:11.348 }, 00:12:11.348 "memory_domains": [ 00:12:11.348 { 00:12:11.348 "dma_device_id": "system", 00:12:11.348 "dma_device_type": 1 00:12:11.348 }, 00:12:11.348 { 00:12:11.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.348 "dma_device_type": 2 00:12:11.348 } 00:12:11.348 ], 00:12:11.348 "driver_specific": {} 00:12:11.348 } 00:12:11.348 ] 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.348 15:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.606 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.606 "name": "Existed_Raid", 00:12:11.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.606 "strip_size_kb": 0, 00:12:11.606 "state": "configuring", 00:12:11.606 "raid_level": "raid1", 00:12:11.606 "superblock": false, 00:12:11.606 "num_base_bdevs": 3, 00:12:11.606 "num_base_bdevs_discovered": 2, 00:12:11.606 "num_base_bdevs_operational": 3, 00:12:11.606 "base_bdevs_list": [ 00:12:11.606 { 00:12:11.606 "name": "BaseBdev1", 00:12:11.606 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:11.606 "is_configured": true, 00:12:11.606 "data_offset": 0, 00:12:11.606 "data_size": 65536 00:12:11.606 }, 00:12:11.606 { 00:12:11.606 "name": null, 00:12:11.606 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:11.606 "is_configured": false, 00:12:11.606 "data_offset": 0, 00:12:11.606 "data_size": 65536 00:12:11.606 }, 00:12:11.606 { 
00:12:11.606 "name": "BaseBdev3", 00:12:11.606 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:11.606 "is_configured": true, 00:12:11.606 "data_offset": 0, 00:12:11.606 "data_size": 65536 00:12:11.606 } 00:12:11.606 ] 00:12:11.606 }' 00:12:11.606 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.606 15:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.863 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.863 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.121 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:12.121 15:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:12.379 [2024-07-12 15:00:38.178743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.379 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.950 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.950 "name": "Existed_Raid", 00:12:12.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.950 "strip_size_kb": 0, 00:12:12.950 "state": "configuring", 00:12:12.950 "raid_level": "raid1", 00:12:12.950 "superblock": false, 00:12:12.950 "num_base_bdevs": 3, 00:12:12.950 "num_base_bdevs_discovered": 1, 00:12:12.950 "num_base_bdevs_operational": 3, 00:12:12.950 "base_bdevs_list": [ 00:12:12.950 { 00:12:12.950 "name": "BaseBdev1", 00:12:12.950 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:12.950 "is_configured": true, 00:12:12.950 "data_offset": 0, 00:12:12.950 "data_size": 65536 00:12:12.950 }, 00:12:12.950 { 00:12:12.950 "name": null, 00:12:12.950 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:12.950 "is_configured": false, 00:12:12.950 "data_offset": 0, 00:12:12.950 
"data_size": 65536 00:12:12.950 }, 00:12:12.950 { 00:12:12.950 "name": null, 00:12:12.950 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:12.950 "is_configured": false, 00:12:12.950 "data_offset": 0, 00:12:12.950 "data_size": 65536 00:12:12.950 } 00:12:12.950 ] 00:12:12.950 }' 00:12:12.950 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.950 15:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.216 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.216 15:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.474 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:13.474 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:13.737 [2024-07-12 15:00:39.330774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.737 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.995 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.995 "name": "Existed_Raid", 00:12:13.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.995 "strip_size_kb": 0, 00:12:13.995 "state": "configuring", 00:12:13.995 "raid_level": "raid1", 00:12:13.995 "superblock": false, 00:12:13.995 "num_base_bdevs": 3, 00:12:13.995 "num_base_bdevs_discovered": 2, 00:12:13.995 "num_base_bdevs_operational": 3, 00:12:13.995 "base_bdevs_list": [ 00:12:13.995 { 00:12:13.995 "name": "BaseBdev1", 00:12:13.995 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:13.995 "is_configured": true, 00:12:13.995 "data_offset": 0, 00:12:13.995 "data_size": 65536 00:12:13.995 }, 00:12:13.995 { 00:12:13.995 "name": null, 00:12:13.995 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 
00:12:13.995 "is_configured": false, 00:12:13.995 "data_offset": 0, 00:12:13.995 "data_size": 65536 00:12:13.995 }, 00:12:13.995 { 00:12:13.995 "name": "BaseBdev3", 00:12:13.995 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:13.995 "is_configured": true, 00:12:13.995 "data_offset": 0, 00:12:13.995 "data_size": 65536 00:12:13.995 } 00:12:13.995 ] 00:12:13.995 }' 00:12:13.995 15:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.995 15:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.253 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.253 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.510 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:14.510 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:14.767 [2024-07-12 15:00:40.530802] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.767 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:14.767 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:14.767 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.768 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.027 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.027 "name": "Existed_Raid", 00:12:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.028 "strip_size_kb": 0, 00:12:15.028 "state": "configuring", 00:12:15.028 "raid_level": "raid1", 00:12:15.028 "superblock": false, 00:12:15.028 "num_base_bdevs": 3, 00:12:15.028 "num_base_bdevs_discovered": 1, 00:12:15.028 "num_base_bdevs_operational": 3, 00:12:15.028 "base_bdevs_list": [ 00:12:15.028 { 00:12:15.028 "name": null, 00:12:15.028 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:15.028 "is_configured": false, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 }, 00:12:15.028 { 00:12:15.028 "name": null, 00:12:15.028 "uuid": 
"7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:15.028 "is_configured": false, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 }, 00:12:15.028 { 00:12:15.028 "name": "BaseBdev3", 00:12:15.028 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:15.028 "is_configured": true, 00:12:15.028 "data_offset": 0, 00:12:15.028 "data_size": 65536 00:12:15.028 } 00:12:15.028 ] 00:12:15.028 }' 00:12:15.028 15:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.028 15:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.287 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.287 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:15.865 [2024-07-12 15:00:41.668522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.865 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.122 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.122 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.122 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.380 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.380 "name": "Existed_Raid", 00:12:16.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.380 "strip_size_kb": 0, 00:12:16.380 "state": "configuring", 00:12:16.380 "raid_level": "raid1", 00:12:16.380 "superblock": false, 00:12:16.380 "num_base_bdevs": 3, 00:12:16.380 "num_base_bdevs_discovered": 2, 00:12:16.380 "num_base_bdevs_operational": 3, 00:12:16.380 "base_bdevs_list": [ 00:12:16.380 { 00:12:16.380 "name": null, 00:12:16.380 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:16.380 "is_configured": false, 00:12:16.380 "data_offset": 0, 00:12:16.380 "data_size": 65536 
00:12:16.380 }, 00:12:16.380 { 00:12:16.380 "name": "BaseBdev2", 00:12:16.380 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:16.380 "is_configured": true, 00:12:16.380 "data_offset": 0, 00:12:16.380 "data_size": 65536 00:12:16.380 }, 00:12:16.380 { 00:12:16.380 "name": "BaseBdev3", 00:12:16.380 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:16.380 "is_configured": true, 00:12:16.380 "data_offset": 0, 00:12:16.380 "data_size": 65536 00:12:16.380 } 00:12:16.380 ] 00:12:16.380 }' 00:12:16.380 15:00:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.380 15:00:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.656 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.656 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.924 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:16.924 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.924 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:16.924 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7ec07383-405f-11ef-b2a4-e9dca065e82e 00:12:17.186 [2024-07-12 15:00:42.964687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:17.186 [2024-07-12 15:00:42.964729] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x99ba2434f00 00:12:17.186 [2024-07-12 15:00:42.964735] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:17.186 [2024-07-12 15:00:42.964760] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99ba2497e20 00:12:17.186 [2024-07-12 15:00:42.964833] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x99ba2434f00 00:12:17.186 [2024-07-12 15:00:42.964838] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x99ba2434f00 00:12:17.187 [2024-07-12 15:00:42.964873] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.187 NewBaseBdev 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:17.187 15:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:17.445 15:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:17.703 [ 00:12:17.703 { 00:12:17.703 "name": "NewBaseBdev", 00:12:17.703 "aliases": [ 00:12:17.703 "7ec07383-405f-11ef-b2a4-e9dca065e82e" 00:12:17.703 ], 00:12:17.703 "product_name": "Malloc disk", 00:12:17.703 "block_size": 512, 00:12:17.703 "num_blocks": 65536, 00:12:17.703 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:17.703 "assigned_rate_limits": { 00:12:17.703 "rw_ios_per_sec": 0, 00:12:17.703 "rw_mbytes_per_sec": 0, 00:12:17.703 "r_mbytes_per_sec": 0, 00:12:17.703 "w_mbytes_per_sec": 0 00:12:17.703 }, 00:12:17.703 "claimed": true, 00:12:17.703 "claim_type": "exclusive_write", 00:12:17.703 "zoned": false, 00:12:17.703 "supported_io_types": { 00:12:17.703 "read": true, 00:12:17.703 "write": true, 00:12:17.703 "unmap": true, 00:12:17.703 "flush": true, 00:12:17.703 "reset": true, 00:12:17.703 "nvme_admin": false, 00:12:17.703 "nvme_io": false, 00:12:17.703 "nvme_io_md": false, 00:12:17.703 "write_zeroes": true, 00:12:17.703 "zcopy": true, 00:12:17.703 "get_zone_info": false, 00:12:17.703 "zone_management": false, 00:12:17.703 "zone_append": false, 00:12:17.703 "compare": false, 00:12:17.703 "compare_and_write": false, 00:12:17.703 "abort": true, 00:12:17.703 "seek_hole": false, 00:12:17.703 "seek_data": false, 00:12:17.703 "copy": true, 00:12:17.703 "nvme_iov_md": false 00:12:17.703 }, 00:12:17.703 "memory_domains": [ 00:12:17.703 { 00:12:17.704 "dma_device_id": "system", 00:12:17.704 "dma_device_type": 1 00:12:17.704 }, 00:12:17.704 { 00:12:17.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.704 "dma_device_type": 2 00:12:17.704 } 00:12:17.704 ], 00:12:17.704 "driver_specific": {} 00:12:17.704 } 00:12:17.704 ] 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.704 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.269 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.269 "name": "Existed_Raid", 00:12:18.269 "uuid": "82a43e35-405f-11ef-b2a4-e9dca065e82e", 00:12:18.269 
"strip_size_kb": 0, 00:12:18.269 "state": "online", 00:12:18.269 "raid_level": "raid1", 00:12:18.269 "superblock": false, 00:12:18.269 "num_base_bdevs": 3, 00:12:18.269 "num_base_bdevs_discovered": 3, 00:12:18.269 "num_base_bdevs_operational": 3, 00:12:18.269 "base_bdevs_list": [ 00:12:18.269 { 00:12:18.269 "name": "NewBaseBdev", 00:12:18.269 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:18.269 "is_configured": true, 00:12:18.269 "data_offset": 0, 00:12:18.269 "data_size": 65536 00:12:18.269 }, 00:12:18.269 { 00:12:18.269 "name": "BaseBdev2", 00:12:18.269 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:18.269 "is_configured": true, 00:12:18.269 "data_offset": 0, 00:12:18.269 "data_size": 65536 00:12:18.269 }, 00:12:18.269 { 00:12:18.269 "name": "BaseBdev3", 00:12:18.269 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:18.269 "is_configured": true, 00:12:18.269 "data_offset": 0, 00:12:18.269 "data_size": 65536 00:12:18.269 } 00:12:18.269 ] 00:12:18.269 }' 00:12:18.269 15:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.269 15:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:18.527 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:18.785 [2024-07-12 15:00:44.404563] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.785 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:18.785 "name": "Existed_Raid", 00:12:18.785 "aliases": [ 00:12:18.785 "82a43e35-405f-11ef-b2a4-e9dca065e82e" 00:12:18.785 ], 00:12:18.785 "product_name": "Raid Volume", 00:12:18.785 "block_size": 512, 00:12:18.785 "num_blocks": 65536, 00:12:18.785 "uuid": "82a43e35-405f-11ef-b2a4-e9dca065e82e", 00:12:18.785 "assigned_rate_limits": { 00:12:18.785 "rw_ios_per_sec": 0, 00:12:18.785 "rw_mbytes_per_sec": 0, 00:12:18.785 "r_mbytes_per_sec": 0, 00:12:18.785 "w_mbytes_per_sec": 0 00:12:18.785 }, 00:12:18.785 "claimed": false, 00:12:18.785 "zoned": false, 00:12:18.785 "supported_io_types": { 00:12:18.786 "read": true, 00:12:18.786 "write": true, 00:12:18.786 "unmap": false, 00:12:18.786 "flush": false, 00:12:18.786 "reset": true, 00:12:18.786 "nvme_admin": false, 00:12:18.786 "nvme_io": false, 00:12:18.786 "nvme_io_md": false, 00:12:18.786 "write_zeroes": true, 00:12:18.786 "zcopy": false, 00:12:18.786 "get_zone_info": false, 00:12:18.786 "zone_management": false, 00:12:18.786 "zone_append": false, 00:12:18.786 "compare": false, 00:12:18.786 "compare_and_write": false, 00:12:18.786 "abort": false, 00:12:18.786 "seek_hole": false, 00:12:18.786 "seek_data": false, 
00:12:18.786 "copy": false, 00:12:18.786 "nvme_iov_md": false 00:12:18.786 }, 00:12:18.786 "memory_domains": [ 00:12:18.786 { 00:12:18.786 "dma_device_id": "system", 00:12:18.786 "dma_device_type": 1 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.786 "dma_device_type": 2 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "system", 00:12:18.786 "dma_device_type": 1 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.786 "dma_device_type": 2 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "system", 00:12:18.786 "dma_device_type": 1 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.786 "dma_device_type": 2 00:12:18.786 } 00:12:18.786 ], 00:12:18.786 "driver_specific": { 00:12:18.786 "raid": { 00:12:18.786 "uuid": "82a43e35-405f-11ef-b2a4-e9dca065e82e", 00:12:18.786 "strip_size_kb": 0, 00:12:18.786 "state": "online", 00:12:18.786 "raid_level": "raid1", 00:12:18.786 "superblock": false, 00:12:18.786 "num_base_bdevs": 3, 00:12:18.786 "num_base_bdevs_discovered": 3, 00:12:18.786 "num_base_bdevs_operational": 3, 00:12:18.786 "base_bdevs_list": [ 00:12:18.786 { 00:12:18.786 "name": "NewBaseBdev", 00:12:18.786 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:18.786 "is_configured": true, 00:12:18.786 "data_offset": 0, 00:12:18.786 "data_size": 65536 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "name": "BaseBdev2", 00:12:18.786 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:18.786 "is_configured": true, 00:12:18.786 "data_offset": 0, 00:12:18.786 "data_size": 65536 00:12:18.786 }, 00:12:18.786 { 00:12:18.786 "name": "BaseBdev3", 00:12:18.786 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:18.786 "is_configured": true, 00:12:18.786 "data_offset": 0, 00:12:18.786 "data_size": 65536 00:12:18.786 } 00:12:18.786 ] 00:12:18.786 } 00:12:18.786 } 00:12:18.786 }' 00:12:18.786 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.786 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:18.786 BaseBdev2 00:12:18.786 BaseBdev3' 00:12:18.786 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:18.786 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:18.786 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.044 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.044 "name": "NewBaseBdev", 00:12:19.044 "aliases": [ 00:12:19.044 "7ec07383-405f-11ef-b2a4-e9dca065e82e" 00:12:19.044 ], 00:12:19.044 "product_name": "Malloc disk", 00:12:19.044 "block_size": 512, 00:12:19.044 "num_blocks": 65536, 00:12:19.044 "uuid": "7ec07383-405f-11ef-b2a4-e9dca065e82e", 00:12:19.044 "assigned_rate_limits": { 00:12:19.044 "rw_ios_per_sec": 0, 00:12:19.045 "rw_mbytes_per_sec": 0, 00:12:19.045 "r_mbytes_per_sec": 0, 00:12:19.045 "w_mbytes_per_sec": 0 00:12:19.045 }, 00:12:19.045 "claimed": true, 00:12:19.045 "claim_type": "exclusive_write", 00:12:19.045 "zoned": false, 00:12:19.045 "supported_io_types": { 00:12:19.045 "read": true, 00:12:19.045 "write": true, 00:12:19.045 "unmap": true, 00:12:19.045 "flush": true, 00:12:19.045 
"reset": true, 00:12:19.045 "nvme_admin": false, 00:12:19.045 "nvme_io": false, 00:12:19.045 "nvme_io_md": false, 00:12:19.045 "write_zeroes": true, 00:12:19.045 "zcopy": true, 00:12:19.045 "get_zone_info": false, 00:12:19.045 "zone_management": false, 00:12:19.045 "zone_append": false, 00:12:19.045 "compare": false, 00:12:19.045 "compare_and_write": false, 00:12:19.045 "abort": true, 00:12:19.045 "seek_hole": false, 00:12:19.045 "seek_data": false, 00:12:19.045 "copy": true, 00:12:19.045 "nvme_iov_md": false 00:12:19.045 }, 00:12:19.045 "memory_domains": [ 00:12:19.045 { 00:12:19.045 "dma_device_id": "system", 00:12:19.045 "dma_device_type": 1 00:12:19.045 }, 00:12:19.045 { 00:12:19.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.045 "dma_device_type": 2 00:12:19.045 } 00:12:19.045 ], 00:12:19.045 "driver_specific": {} 00:12:19.045 }' 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:19.045 15:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.303 "name": "BaseBdev2", 00:12:19.303 "aliases": [ 00:12:19.303 "7c7857f4-405f-11ef-b2a4-e9dca065e82e" 00:12:19.303 ], 00:12:19.303 "product_name": "Malloc disk", 00:12:19.303 "block_size": 512, 00:12:19.303 "num_blocks": 65536, 00:12:19.303 "uuid": "7c7857f4-405f-11ef-b2a4-e9dca065e82e", 00:12:19.303 "assigned_rate_limits": { 00:12:19.303 "rw_ios_per_sec": 0, 00:12:19.303 "rw_mbytes_per_sec": 0, 00:12:19.303 "r_mbytes_per_sec": 0, 00:12:19.303 "w_mbytes_per_sec": 0 00:12:19.303 }, 00:12:19.303 "claimed": true, 00:12:19.303 "claim_type": "exclusive_write", 00:12:19.303 "zoned": false, 00:12:19.303 "supported_io_types": { 00:12:19.303 "read": true, 00:12:19.303 "write": true, 00:12:19.303 "unmap": true, 00:12:19.303 "flush": true, 00:12:19.303 "reset": true, 00:12:19.303 "nvme_admin": false, 00:12:19.303 "nvme_io": false, 00:12:19.303 "nvme_io_md": false, 00:12:19.303 "write_zeroes": true, 00:12:19.303 "zcopy": true, 00:12:19.303 
"get_zone_info": false, 00:12:19.303 "zone_management": false, 00:12:19.303 "zone_append": false, 00:12:19.303 "compare": false, 00:12:19.303 "compare_and_write": false, 00:12:19.303 "abort": true, 00:12:19.303 "seek_hole": false, 00:12:19.303 "seek_data": false, 00:12:19.303 "copy": true, 00:12:19.303 "nvme_iov_md": false 00:12:19.303 }, 00:12:19.303 "memory_domains": [ 00:12:19.303 { 00:12:19.303 "dma_device_id": "system", 00:12:19.303 "dma_device_type": 1 00:12:19.303 }, 00:12:19.303 { 00:12:19.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.303 "dma_device_type": 2 00:12:19.303 } 00:12:19.303 ], 00:12:19.303 "driver_specific": {} 00:12:19.303 }' 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:19.303 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.561 "name": "BaseBdev3", 00:12:19.561 "aliases": [ 00:12:19.561 "7d0b7014-405f-11ef-b2a4-e9dca065e82e" 00:12:19.561 ], 00:12:19.561 "product_name": "Malloc disk", 00:12:19.561 "block_size": 512, 00:12:19.561 "num_blocks": 65536, 00:12:19.561 "uuid": "7d0b7014-405f-11ef-b2a4-e9dca065e82e", 00:12:19.561 "assigned_rate_limits": { 00:12:19.561 "rw_ios_per_sec": 0, 00:12:19.561 "rw_mbytes_per_sec": 0, 00:12:19.561 "r_mbytes_per_sec": 0, 00:12:19.561 "w_mbytes_per_sec": 0 00:12:19.561 }, 00:12:19.561 "claimed": true, 00:12:19.561 "claim_type": "exclusive_write", 00:12:19.561 "zoned": false, 00:12:19.561 "supported_io_types": { 00:12:19.561 "read": true, 00:12:19.561 "write": true, 00:12:19.561 "unmap": true, 00:12:19.561 "flush": true, 00:12:19.561 "reset": true, 00:12:19.561 "nvme_admin": false, 00:12:19.561 "nvme_io": false, 00:12:19.561 "nvme_io_md": false, 00:12:19.561 "write_zeroes": true, 00:12:19.561 "zcopy": true, 00:12:19.561 "get_zone_info": false, 00:12:19.561 "zone_management": false, 00:12:19.561 "zone_append": false, 00:12:19.561 "compare": false, 00:12:19.561 "compare_and_write": false, 00:12:19.561 "abort": true, 
00:12:19.561 "seek_hole": false, 00:12:19.561 "seek_data": false, 00:12:19.561 "copy": true, 00:12:19.561 "nvme_iov_md": false 00:12:19.561 }, 00:12:19.561 "memory_domains": [ 00:12:19.561 { 00:12:19.561 "dma_device_id": "system", 00:12:19.561 "dma_device_type": 1 00:12:19.561 }, 00:12:19.561 { 00:12:19.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.561 "dma_device_type": 2 00:12:19.561 } 00:12:19.561 ], 00:12:19.561 "driver_specific": {} 00:12:19.561 }' 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.561 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:19.820 [2024-07-12 15:00:45.616461] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.820 [2024-07-12 15:00:45.616511] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.820 [2024-07-12 15:00:45.616567] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.820 [2024-07-12 15:00:45.616703] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.820 [2024-07-12 15:00:45.616713] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99ba2434f00 name Existed_Raid, state offline 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56107 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 56107 ']' 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 56107 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 56107 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:19.820 killing process with pid 56107 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56107' 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 56107 00:12:19.820 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 56107 00:12:19.820 [2024-07-12 15:00:45.641623] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.078 [2024-07-12 15:00:45.669407] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:20.336 00:12:20.336 real 0m24.868s 00:12:20.336 user 0m45.459s 00:12:20.336 sys 0m3.351s 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.336 ************************************ 00:12:20.336 END TEST raid_state_function_test 00:12:20.336 ************************************ 00:12:20.336 15:00:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:20.336 15:00:45 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:20.336 15:00:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:20.336 15:00:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.336 15:00:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.336 ************************************ 00:12:20.336 START TEST raid_state_function_test_sb 00:12:20.336 ************************************ 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
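[Editor's note, not part of the captured output] The raid_state_function_test run that finishes above repeatedly removes and re-adds members of Existed_Raid and then asserts the state reported over RPC. A minimal shell sketch of that pattern, using only calls that appear in the log (socket path, bdev names and jq filters as above; exit-status checks omitted, so this is illustrative rather than the test's actual code):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_delete BaseBdev1                                            # drop a member; the array stays in "configuring"
  $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'    # expected: false
  $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev2                          # re-attach a member bdev
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")' # inspect state / num_base_bdevs_discovered
  $RPC bdev_raid_delete Existed_Raid                                           # teardown, as done just before killprocess above

In the log above, the step that finally brings the array to "online" is recreating the missing slot as a fresh malloc bdev carrying the original UUID (bdev_malloc_create 32 512 -b NewBaseBdev -u <uuid>), after which the raid module claims it as NewBaseBdev.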
00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56836 00:12:20.336 Process raid pid: 56836 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56836' 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56836 /var/tmp/spdk-raid.sock 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56836 ']' 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.336 15:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.336 [2024-07-12 15:00:45.991917] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:12:20.336 [2024-07-12 15:00:45.992108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:20.901 EAL: TSC is not safe to use in SMP mode 00:12:20.901 EAL: TSC is not invariant 00:12:20.901 [2024-07-12 15:00:46.526278] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.901 [2024-07-12 15:00:46.620809] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
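[Editor's note, not part of the captured output] The raid_state_function_test_sb variant starting here repeats the same flow with an on-disk superblock; the visible difference in the log is the -s flag passed to bdev_raid_create, and the member bdevs below consequently reporting data_offset 2048 / data_size 63488 instead of 0 / 65536. A rough sketch of the calls exercised in the next stretch of the log (same socket path and names as above; illustrative only):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid   # registers the raid in "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1     # back one slot; repeated for BaseBdev2 and BaseBdev3 below
  $RPC bdev_wait_for_examine                      # let the raid module claim the newly created bdev
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'           # reports "online" once all three are claimed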
00:12:20.901 [2024-07-12 15:00:46.622939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.901 [2024-07-12 15:00:46.623704] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.901 [2024-07-12 15:00:46.623720] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.465 15:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.465 15:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:21.465 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:21.723 [2024-07-12 15:00:47.311893] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.723 [2024-07-12 15:00:47.311946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.723 [2024-07-12 15:00:47.311952] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.723 [2024-07-12 15:00:47.311960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.723 [2024-07-12 15:00:47.311964] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.723 [2024-07-12 15:00:47.311971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.723 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.981 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.981 "name": "Existed_Raid", 00:12:21.981 "uuid": "853b9090-405f-11ef-b2a4-e9dca065e82e", 00:12:21.981 "strip_size_kb": 0, 00:12:21.981 "state": "configuring", 00:12:21.981 "raid_level": "raid1", 00:12:21.981 "superblock": true, 00:12:21.981 "num_base_bdevs": 3, 00:12:21.981 "num_base_bdevs_discovered": 0, 00:12:21.981 "num_base_bdevs_operational": 
3, 00:12:21.981 "base_bdevs_list": [ 00:12:21.981 { 00:12:21.981 "name": "BaseBdev1", 00:12:21.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.981 "is_configured": false, 00:12:21.981 "data_offset": 0, 00:12:21.981 "data_size": 0 00:12:21.981 }, 00:12:21.981 { 00:12:21.981 "name": "BaseBdev2", 00:12:21.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.981 "is_configured": false, 00:12:21.981 "data_offset": 0, 00:12:21.981 "data_size": 0 00:12:21.981 }, 00:12:21.981 { 00:12:21.981 "name": "BaseBdev3", 00:12:21.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.981 "is_configured": false, 00:12:21.981 "data_offset": 0, 00:12:21.981 "data_size": 0 00:12:21.981 } 00:12:21.981 ] 00:12:21.981 }' 00:12:21.981 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.981 15:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.239 15:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:22.495 [2024-07-12 15:00:48.087807] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.495 [2024-07-12 15:00:48.087836] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a6ad434500 name Existed_Raid, state configuring 00:12:22.495 15:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:22.753 [2024-07-12 15:00:48.371797] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.753 [2024-07-12 15:00:48.371849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.753 [2024-07-12 15:00:48.371854] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.753 [2024-07-12 15:00:48.371863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.753 [2024-07-12 15:00:48.371866] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.753 [2024-07-12 15:00:48.371873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.753 15:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.010 [2024-07-12 15:00:48.632767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.010 BaseBdev1 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:23.010 15:00:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:23.268 15:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.524 [ 00:12:23.524 { 00:12:23.524 "name": "BaseBdev1", 00:12:23.524 "aliases": [ 00:12:23.524 "8604f6c6-405f-11ef-b2a4-e9dca065e82e" 00:12:23.524 ], 00:12:23.524 "product_name": "Malloc disk", 00:12:23.524 "block_size": 512, 00:12:23.524 "num_blocks": 65536, 00:12:23.524 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:23.524 "assigned_rate_limits": { 00:12:23.524 "rw_ios_per_sec": 0, 00:12:23.524 "rw_mbytes_per_sec": 0, 00:12:23.524 "r_mbytes_per_sec": 0, 00:12:23.524 "w_mbytes_per_sec": 0 00:12:23.524 }, 00:12:23.524 "claimed": true, 00:12:23.524 "claim_type": "exclusive_write", 00:12:23.524 "zoned": false, 00:12:23.524 "supported_io_types": { 00:12:23.524 "read": true, 00:12:23.524 "write": true, 00:12:23.524 "unmap": true, 00:12:23.524 "flush": true, 00:12:23.524 "reset": true, 00:12:23.524 "nvme_admin": false, 00:12:23.524 "nvme_io": false, 00:12:23.524 "nvme_io_md": false, 00:12:23.524 "write_zeroes": true, 00:12:23.524 "zcopy": true, 00:12:23.524 "get_zone_info": false, 00:12:23.524 "zone_management": false, 00:12:23.524 "zone_append": false, 00:12:23.524 "compare": false, 00:12:23.524 "compare_and_write": false, 00:12:23.524 "abort": true, 00:12:23.524 "seek_hole": false, 00:12:23.524 "seek_data": false, 00:12:23.524 "copy": true, 00:12:23.524 "nvme_iov_md": false 00:12:23.524 }, 00:12:23.524 "memory_domains": [ 00:12:23.524 { 00:12:23.524 "dma_device_id": "system", 00:12:23.524 "dma_device_type": 1 00:12:23.524 }, 00:12:23.524 { 00:12:23.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.524 "dma_device_type": 2 00:12:23.524 } 00:12:23.524 ], 00:12:23.524 "driver_specific": {} 00:12:23.524 } 00:12:23.524 ] 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.524 15:00:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.782 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.782 "name": "Existed_Raid", 00:12:23.782 "uuid": "85dd4b14-405f-11ef-b2a4-e9dca065e82e", 00:12:23.782 "strip_size_kb": 0, 00:12:23.782 "state": "configuring", 00:12:23.782 "raid_level": "raid1", 00:12:23.782 "superblock": true, 00:12:23.782 "num_base_bdevs": 3, 00:12:23.782 "num_base_bdevs_discovered": 1, 00:12:23.782 "num_base_bdevs_operational": 3, 00:12:23.782 "base_bdevs_list": [ 00:12:23.782 { 00:12:23.782 "name": "BaseBdev1", 00:12:23.782 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:23.782 "is_configured": true, 00:12:23.782 "data_offset": 2048, 00:12:23.782 "data_size": 63488 00:12:23.782 }, 00:12:23.782 { 00:12:23.782 "name": "BaseBdev2", 00:12:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.782 "is_configured": false, 00:12:23.782 "data_offset": 0, 00:12:23.782 "data_size": 0 00:12:23.782 }, 00:12:23.782 { 00:12:23.782 "name": "BaseBdev3", 00:12:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.782 "is_configured": false, 00:12:23.782 "data_offset": 0, 00:12:23.782 "data_size": 0 00:12:23.782 } 00:12:23.782 ] 00:12:23.782 }' 00:12:23.782 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.782 15:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:24.309 [2024-07-12 15:00:49.975694] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.309 [2024-07-12 15:00:49.975738] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a6ad434500 name Existed_Raid, state configuring 00:12:24.309 15:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:24.585 [2024-07-12 15:00:50.251697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.585 [2024-07-12 15:00:50.252688] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:24.585 [2024-07-12 15:00:50.252742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:24.585 [2024-07-12 15:00:50.252748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:24.585 [2024-07-12 15:00:50.252757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.586 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.844 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.844 "name": "Existed_Raid", 00:12:24.844 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:24.844 "strip_size_kb": 0, 00:12:24.844 "state": "configuring", 00:12:24.844 "raid_level": "raid1", 00:12:24.844 "superblock": true, 00:12:24.844 "num_base_bdevs": 3, 00:12:24.844 "num_base_bdevs_discovered": 1, 00:12:24.844 "num_base_bdevs_operational": 3, 00:12:24.844 "base_bdevs_list": [ 00:12:24.844 { 00:12:24.844 "name": "BaseBdev1", 00:12:24.844 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:24.844 "is_configured": true, 00:12:24.844 "data_offset": 2048, 00:12:24.844 "data_size": 63488 00:12:24.844 }, 00:12:24.844 { 00:12:24.844 "name": "BaseBdev2", 00:12:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.844 "is_configured": false, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 0 00:12:24.844 }, 00:12:24.844 { 00:12:24.844 "name": "BaseBdev3", 00:12:24.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.844 "is_configured": false, 00:12:24.844 "data_offset": 0, 00:12:24.844 "data_size": 0 00:12:24.844 } 00:12:24.844 ] 00:12:24.844 }' 00:12:24.844 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.844 15:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.102 15:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:25.361 [2024-07-12 15:00:51.163821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.361 BaseBdev2 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:25.361 15:00:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:25.619 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:25.878 [ 00:12:25.878 { 00:12:25.878 "name": "BaseBdev2", 00:12:25.878 "aliases": [ 00:12:25.878 "87874bb8-405f-11ef-b2a4-e9dca065e82e" 00:12:25.878 ], 00:12:25.878 "product_name": "Malloc disk", 00:12:25.878 "block_size": 512, 00:12:25.878 "num_blocks": 65536, 00:12:25.878 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:25.878 "assigned_rate_limits": { 00:12:25.878 "rw_ios_per_sec": 0, 00:12:25.878 "rw_mbytes_per_sec": 0, 00:12:25.878 "r_mbytes_per_sec": 0, 00:12:25.878 "w_mbytes_per_sec": 0 00:12:25.878 }, 00:12:25.878 "claimed": true, 00:12:25.878 "claim_type": "exclusive_write", 00:12:25.878 "zoned": false, 00:12:25.878 "supported_io_types": { 00:12:25.878 "read": true, 00:12:25.878 "write": true, 00:12:25.878 "unmap": true, 00:12:25.878 "flush": true, 00:12:25.878 "reset": true, 00:12:25.878 "nvme_admin": false, 00:12:25.878 "nvme_io": false, 00:12:25.878 "nvme_io_md": false, 00:12:25.878 "write_zeroes": true, 00:12:25.878 "zcopy": true, 00:12:25.878 "get_zone_info": false, 00:12:25.878 "zone_management": false, 00:12:25.878 "zone_append": false, 00:12:25.878 "compare": false, 00:12:25.878 "compare_and_write": false, 00:12:25.878 "abort": true, 00:12:25.878 "seek_hole": false, 00:12:25.878 "seek_data": false, 00:12:25.878 "copy": true, 00:12:25.878 "nvme_iov_md": false 00:12:25.878 }, 00:12:25.878 "memory_domains": [ 00:12:25.878 { 00:12:25.878 "dma_device_id": "system", 00:12:25.878 "dma_device_type": 1 00:12:25.878 }, 00:12:25.878 { 00:12:25.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.878 "dma_device_type": 2 00:12:25.878 } 00:12:25.878 ], 00:12:25.878 "driver_specific": {} 00:12:25.878 } 00:12:25.878 ] 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.878 15:00:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.878 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.137 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.137 "name": "Existed_Raid", 00:12:26.137 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:26.137 "strip_size_kb": 0, 00:12:26.137 "state": "configuring", 00:12:26.137 "raid_level": "raid1", 00:12:26.137 "superblock": true, 00:12:26.137 "num_base_bdevs": 3, 00:12:26.137 "num_base_bdevs_discovered": 2, 00:12:26.137 "num_base_bdevs_operational": 3, 00:12:26.137 "base_bdevs_list": [ 00:12:26.137 { 00:12:26.137 "name": "BaseBdev1", 00:12:26.137 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:26.137 "is_configured": true, 00:12:26.137 "data_offset": 2048, 00:12:26.137 "data_size": 63488 00:12:26.137 }, 00:12:26.137 { 00:12:26.137 "name": "BaseBdev2", 00:12:26.137 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:26.137 "is_configured": true, 00:12:26.137 "data_offset": 2048, 00:12:26.137 "data_size": 63488 00:12:26.137 }, 00:12:26.137 { 00:12:26.137 "name": "BaseBdev3", 00:12:26.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.137 "is_configured": false, 00:12:26.137 "data_offset": 0, 00:12:26.137 "data_size": 0 00:12:26.137 } 00:12:26.137 ] 00:12:26.137 }' 00:12:26.137 15:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.137 15:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:26.704 [2024-07-12 15:00:52.495766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.704 [2024-07-12 15:00:52.495847] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a6ad434a00 00:12:26.704 [2024-07-12 15:00:52.495854] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.704 [2024-07-12 15:00:52.495877] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a6ad497e20 00:12:26.704 [2024-07-12 15:00:52.495939] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a6ad434a00 00:12:26.704 [2024-07-12 15:00:52.495943] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10a6ad434a00 00:12:26.704 [2024-07-12 15:00:52.495966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.704 BaseBdev3 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.704 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.704 15:00:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:27.271 15:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:27.271 [ 00:12:27.271 { 00:12:27.271 "name": "BaseBdev3", 00:12:27.271 "aliases": [ 00:12:27.271 "885289ee-405f-11ef-b2a4-e9dca065e82e" 00:12:27.271 ], 00:12:27.271 "product_name": "Malloc disk", 00:12:27.271 "block_size": 512, 00:12:27.271 "num_blocks": 65536, 00:12:27.271 "uuid": "885289ee-405f-11ef-b2a4-e9dca065e82e", 00:12:27.271 "assigned_rate_limits": { 00:12:27.271 "rw_ios_per_sec": 0, 00:12:27.271 "rw_mbytes_per_sec": 0, 00:12:27.271 "r_mbytes_per_sec": 0, 00:12:27.271 "w_mbytes_per_sec": 0 00:12:27.271 }, 00:12:27.271 "claimed": true, 00:12:27.271 "claim_type": "exclusive_write", 00:12:27.271 "zoned": false, 00:12:27.271 "supported_io_types": { 00:12:27.271 "read": true, 00:12:27.271 "write": true, 00:12:27.271 "unmap": true, 00:12:27.271 "flush": true, 00:12:27.271 "reset": true, 00:12:27.271 "nvme_admin": false, 00:12:27.271 "nvme_io": false, 00:12:27.271 "nvme_io_md": false, 00:12:27.271 "write_zeroes": true, 00:12:27.271 "zcopy": true, 00:12:27.271 "get_zone_info": false, 00:12:27.271 "zone_management": false, 00:12:27.271 "zone_append": false, 00:12:27.271 "compare": false, 00:12:27.271 "compare_and_write": false, 00:12:27.271 "abort": true, 00:12:27.271 "seek_hole": false, 00:12:27.271 "seek_data": false, 00:12:27.271 "copy": true, 00:12:27.271 "nvme_iov_md": false 00:12:27.271 }, 00:12:27.271 "memory_domains": [ 00:12:27.271 { 00:12:27.271 "dma_device_id": "system", 00:12:27.271 "dma_device_type": 1 00:12:27.271 }, 00:12:27.271 { 00:12:27.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.271 "dma_device_type": 2 00:12:27.271 } 00:12:27.271 ], 00:12:27.271 "driver_specific": {} 00:12:27.271 } 00:12:27.271 ] 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.271 15:00:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.271 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.528 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.528 "name": "Existed_Raid", 00:12:27.528 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:27.528 "strip_size_kb": 0, 00:12:27.528 "state": "online", 00:12:27.528 "raid_level": "raid1", 00:12:27.528 "superblock": true, 00:12:27.528 "num_base_bdevs": 3, 00:12:27.528 "num_base_bdevs_discovered": 3, 00:12:27.528 "num_base_bdevs_operational": 3, 00:12:27.528 "base_bdevs_list": [ 00:12:27.528 { 00:12:27.528 "name": "BaseBdev1", 00:12:27.528 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:27.528 "is_configured": true, 00:12:27.528 "data_offset": 2048, 00:12:27.528 "data_size": 63488 00:12:27.528 }, 00:12:27.528 { 00:12:27.528 "name": "BaseBdev2", 00:12:27.528 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:27.528 "is_configured": true, 00:12:27.528 "data_offset": 2048, 00:12:27.528 "data_size": 63488 00:12:27.528 }, 00:12:27.528 { 00:12:27.528 "name": "BaseBdev3", 00:12:27.528 "uuid": "885289ee-405f-11ef-b2a4-e9dca065e82e", 00:12:27.528 "is_configured": true, 00:12:27.528 "data_offset": 2048, 00:12:27.528 "data_size": 63488 00:12:27.528 } 00:12:27.528 ] 00:12:27.528 }' 00:12:27.528 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.528 15:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:28.092 [2024-07-12 15:00:53.863624] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:28.092 "name": "Existed_Raid", 00:12:28.092 "aliases": [ 00:12:28.092 "86fc244e-405f-11ef-b2a4-e9dca065e82e" 00:12:28.092 ], 00:12:28.092 "product_name": "Raid Volume", 00:12:28.092 "block_size": 512, 00:12:28.092 "num_blocks": 63488, 00:12:28.092 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:28.092 "assigned_rate_limits": { 00:12:28.092 "rw_ios_per_sec": 0, 00:12:28.092 "rw_mbytes_per_sec": 0, 00:12:28.092 "r_mbytes_per_sec": 0, 00:12:28.092 "w_mbytes_per_sec": 0 00:12:28.092 }, 00:12:28.092 "claimed": false, 00:12:28.092 "zoned": false, 00:12:28.092 "supported_io_types": { 00:12:28.092 "read": true, 
00:12:28.092 "write": true, 00:12:28.092 "unmap": false, 00:12:28.092 "flush": false, 00:12:28.092 "reset": true, 00:12:28.092 "nvme_admin": false, 00:12:28.092 "nvme_io": false, 00:12:28.092 "nvme_io_md": false, 00:12:28.092 "write_zeroes": true, 00:12:28.092 "zcopy": false, 00:12:28.092 "get_zone_info": false, 00:12:28.092 "zone_management": false, 00:12:28.092 "zone_append": false, 00:12:28.092 "compare": false, 00:12:28.092 "compare_and_write": false, 00:12:28.092 "abort": false, 00:12:28.092 "seek_hole": false, 00:12:28.092 "seek_data": false, 00:12:28.092 "copy": false, 00:12:28.092 "nvme_iov_md": false 00:12:28.092 }, 00:12:28.092 "memory_domains": [ 00:12:28.092 { 00:12:28.092 "dma_device_id": "system", 00:12:28.092 "dma_device_type": 1 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.092 "dma_device_type": 2 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "dma_device_id": "system", 00:12:28.092 "dma_device_type": 1 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.092 "dma_device_type": 2 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "dma_device_id": "system", 00:12:28.092 "dma_device_type": 1 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.092 "dma_device_type": 2 00:12:28.092 } 00:12:28.092 ], 00:12:28.092 "driver_specific": { 00:12:28.092 "raid": { 00:12:28.092 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:28.092 "strip_size_kb": 0, 00:12:28.092 "state": "online", 00:12:28.092 "raid_level": "raid1", 00:12:28.092 "superblock": true, 00:12:28.092 "num_base_bdevs": 3, 00:12:28.092 "num_base_bdevs_discovered": 3, 00:12:28.092 "num_base_bdevs_operational": 3, 00:12:28.092 "base_bdevs_list": [ 00:12:28.092 { 00:12:28.092 "name": "BaseBdev1", 00:12:28.092 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:28.092 "is_configured": true, 00:12:28.092 "data_offset": 2048, 00:12:28.092 "data_size": 63488 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "name": "BaseBdev2", 00:12:28.092 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:28.092 "is_configured": true, 00:12:28.092 "data_offset": 2048, 00:12:28.092 "data_size": 63488 00:12:28.092 }, 00:12:28.092 { 00:12:28.092 "name": "BaseBdev3", 00:12:28.092 "uuid": "885289ee-405f-11ef-b2a4-e9dca065e82e", 00:12:28.092 "is_configured": true, 00:12:28.092 "data_offset": 2048, 00:12:28.092 "data_size": 63488 00:12:28.092 } 00:12:28.092 ] 00:12:28.092 } 00:12:28.092 } 00:12:28.092 }' 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:28.092 BaseBdev2 00:12:28.092 BaseBdev3' 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:28.092 15:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.350 "name": "BaseBdev1", 00:12:28.350 "aliases": [ 00:12:28.350 "8604f6c6-405f-11ef-b2a4-e9dca065e82e" 00:12:28.350 ], 00:12:28.350 "product_name": "Malloc disk", 00:12:28.350 
"block_size": 512, 00:12:28.350 "num_blocks": 65536, 00:12:28.350 "uuid": "8604f6c6-405f-11ef-b2a4-e9dca065e82e", 00:12:28.350 "assigned_rate_limits": { 00:12:28.350 "rw_ios_per_sec": 0, 00:12:28.350 "rw_mbytes_per_sec": 0, 00:12:28.350 "r_mbytes_per_sec": 0, 00:12:28.350 "w_mbytes_per_sec": 0 00:12:28.350 }, 00:12:28.350 "claimed": true, 00:12:28.350 "claim_type": "exclusive_write", 00:12:28.350 "zoned": false, 00:12:28.350 "supported_io_types": { 00:12:28.350 "read": true, 00:12:28.350 "write": true, 00:12:28.350 "unmap": true, 00:12:28.350 "flush": true, 00:12:28.350 "reset": true, 00:12:28.350 "nvme_admin": false, 00:12:28.350 "nvme_io": false, 00:12:28.350 "nvme_io_md": false, 00:12:28.350 "write_zeroes": true, 00:12:28.350 "zcopy": true, 00:12:28.350 "get_zone_info": false, 00:12:28.350 "zone_management": false, 00:12:28.350 "zone_append": false, 00:12:28.350 "compare": false, 00:12:28.350 "compare_and_write": false, 00:12:28.350 "abort": true, 00:12:28.350 "seek_hole": false, 00:12:28.350 "seek_data": false, 00:12:28.350 "copy": true, 00:12:28.350 "nvme_iov_md": false 00:12:28.350 }, 00:12:28.350 "memory_domains": [ 00:12:28.350 { 00:12:28.350 "dma_device_id": "system", 00:12:28.350 "dma_device_type": 1 00:12:28.350 }, 00:12:28.350 { 00:12:28.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.350 "dma_device_type": 2 00:12:28.350 } 00:12:28.350 ], 00:12:28.350 "driver_specific": {} 00:12:28.350 }' 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.350 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:28.608 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.608 "name": "BaseBdev2", 00:12:28.608 "aliases": [ 00:12:28.608 "87874bb8-405f-11ef-b2a4-e9dca065e82e" 00:12:28.608 ], 00:12:28.608 "product_name": "Malloc disk", 00:12:28.608 "block_size": 512, 00:12:28.608 "num_blocks": 65536, 00:12:28.608 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:28.608 "assigned_rate_limits": { 
00:12:28.608 "rw_ios_per_sec": 0, 00:12:28.608 "rw_mbytes_per_sec": 0, 00:12:28.608 "r_mbytes_per_sec": 0, 00:12:28.608 "w_mbytes_per_sec": 0 00:12:28.608 }, 00:12:28.608 "claimed": true, 00:12:28.608 "claim_type": "exclusive_write", 00:12:28.608 "zoned": false, 00:12:28.608 "supported_io_types": { 00:12:28.608 "read": true, 00:12:28.608 "write": true, 00:12:28.608 "unmap": true, 00:12:28.608 "flush": true, 00:12:28.608 "reset": true, 00:12:28.608 "nvme_admin": false, 00:12:28.608 "nvme_io": false, 00:12:28.608 "nvme_io_md": false, 00:12:28.608 "write_zeroes": true, 00:12:28.608 "zcopy": true, 00:12:28.608 "get_zone_info": false, 00:12:28.608 "zone_management": false, 00:12:28.608 "zone_append": false, 00:12:28.608 "compare": false, 00:12:28.608 "compare_and_write": false, 00:12:28.608 "abort": true, 00:12:28.608 "seek_hole": false, 00:12:28.608 "seek_data": false, 00:12:28.608 "copy": true, 00:12:28.608 "nvme_iov_md": false 00:12:28.608 }, 00:12:28.608 "memory_domains": [ 00:12:28.608 { 00:12:28.608 "dma_device_id": "system", 00:12:28.608 "dma_device_type": 1 00:12:28.608 }, 00:12:28.608 { 00:12:28.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.608 "dma_device_type": 2 00:12:28.608 } 00:12:28.608 ], 00:12:28.608 "driver_specific": {} 00:12:28.608 }' 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.865 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.866 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.866 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.866 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.866 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.866 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:29.123 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:29.123 "name": "BaseBdev3", 00:12:29.123 "aliases": [ 00:12:29.123 "885289ee-405f-11ef-b2a4-e9dca065e82e" 00:12:29.123 ], 00:12:29.123 "product_name": "Malloc disk", 00:12:29.123 "block_size": 512, 00:12:29.123 "num_blocks": 65536, 00:12:29.123 "uuid": "885289ee-405f-11ef-b2a4-e9dca065e82e", 00:12:29.123 "assigned_rate_limits": { 00:12:29.123 "rw_ios_per_sec": 0, 00:12:29.123 "rw_mbytes_per_sec": 0, 00:12:29.123 "r_mbytes_per_sec": 0, 00:12:29.123 "w_mbytes_per_sec": 0 
00:12:29.123 }, 00:12:29.124 "claimed": true, 00:12:29.124 "claim_type": "exclusive_write", 00:12:29.124 "zoned": false, 00:12:29.124 "supported_io_types": { 00:12:29.124 "read": true, 00:12:29.124 "write": true, 00:12:29.124 "unmap": true, 00:12:29.124 "flush": true, 00:12:29.124 "reset": true, 00:12:29.124 "nvme_admin": false, 00:12:29.124 "nvme_io": false, 00:12:29.124 "nvme_io_md": false, 00:12:29.124 "write_zeroes": true, 00:12:29.124 "zcopy": true, 00:12:29.124 "get_zone_info": false, 00:12:29.124 "zone_management": false, 00:12:29.124 "zone_append": false, 00:12:29.124 "compare": false, 00:12:29.124 "compare_and_write": false, 00:12:29.124 "abort": true, 00:12:29.124 "seek_hole": false, 00:12:29.124 "seek_data": false, 00:12:29.124 "copy": true, 00:12:29.124 "nvme_iov_md": false 00:12:29.124 }, 00:12:29.124 "memory_domains": [ 00:12:29.124 { 00:12:29.124 "dma_device_id": "system", 00:12:29.124 "dma_device_type": 1 00:12:29.124 }, 00:12:29.124 { 00:12:29.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.124 "dma_device_type": 2 00:12:29.124 } 00:12:29.124 ], 00:12:29.124 "driver_specific": {} 00:12:29.124 }' 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:29.124 15:00:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:29.381 [2024-07-12 15:00:55.043598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:29.382 15:00:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.382 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.638 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.638 "name": "Existed_Raid", 00:12:29.638 "uuid": "86fc244e-405f-11ef-b2a4-e9dca065e82e", 00:12:29.638 "strip_size_kb": 0, 00:12:29.638 "state": "online", 00:12:29.638 "raid_level": "raid1", 00:12:29.638 "superblock": true, 00:12:29.638 "num_base_bdevs": 3, 00:12:29.638 "num_base_bdevs_discovered": 2, 00:12:29.638 "num_base_bdevs_operational": 2, 00:12:29.638 "base_bdevs_list": [ 00:12:29.638 { 00:12:29.638 "name": null, 00:12:29.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.638 "is_configured": false, 00:12:29.638 "data_offset": 2048, 00:12:29.638 "data_size": 63488 00:12:29.638 }, 00:12:29.638 { 00:12:29.638 "name": "BaseBdev2", 00:12:29.638 "uuid": "87874bb8-405f-11ef-b2a4-e9dca065e82e", 00:12:29.638 "is_configured": true, 00:12:29.638 "data_offset": 2048, 00:12:29.638 "data_size": 63488 00:12:29.638 }, 00:12:29.638 { 00:12:29.638 "name": "BaseBdev3", 00:12:29.639 "uuid": "885289ee-405f-11ef-b2a4-e9dca065e82e", 00:12:29.639 "is_configured": true, 00:12:29.639 "data_offset": 2048, 00:12:29.639 "data_size": 63488 00:12:29.639 } 00:12:29.639 ] 00:12:29.639 }' 00:12:29.639 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.639 15:00:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.895 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:29.895 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:29.895 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:29.895 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.151 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:30.151 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.151 15:00:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:12:30.408 [2024-07-12 15:00:56.215662] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.665 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:30.920 [2024-07-12 15:00:56.687617] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.920 [2024-07-12 15:00:56.687671] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.920 [2024-07-12 15:00:56.695985] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.920 [2024-07-12 15:00:56.696003] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.920 [2024-07-12 15:00:56.696009] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a6ad434a00 name Existed_Raid, state offline 00:12:30.920 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:30.920 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:30.920 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.920 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:31.176 15:00:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:31.434 BaseBdev2 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:31.434 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:31.691 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:31.948 [ 00:12:31.948 { 00:12:31.948 "name": "BaseBdev2", 00:12:31.948 "aliases": [ 00:12:31.948 "8b1aded2-405f-11ef-b2a4-e9dca065e82e" 00:12:31.948 ], 00:12:31.948 "product_name": "Malloc disk", 00:12:31.948 "block_size": 512, 00:12:31.948 "num_blocks": 65536, 00:12:31.948 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:31.948 "assigned_rate_limits": { 00:12:31.948 "rw_ios_per_sec": 0, 00:12:31.948 "rw_mbytes_per_sec": 0, 00:12:31.948 "r_mbytes_per_sec": 0, 00:12:31.948 "w_mbytes_per_sec": 0 00:12:31.948 }, 00:12:31.948 "claimed": false, 00:12:31.948 "zoned": false, 00:12:31.948 "supported_io_types": { 00:12:31.948 "read": true, 00:12:31.948 "write": true, 00:12:31.948 "unmap": true, 00:12:31.948 "flush": true, 00:12:31.948 "reset": true, 00:12:31.948 "nvme_admin": false, 00:12:31.948 "nvme_io": false, 00:12:31.948 "nvme_io_md": false, 00:12:31.948 "write_zeroes": true, 00:12:31.948 "zcopy": true, 00:12:31.948 "get_zone_info": false, 00:12:31.948 "zone_management": false, 00:12:31.948 "zone_append": false, 00:12:31.948 "compare": false, 00:12:31.948 "compare_and_write": false, 00:12:31.948 "abort": true, 00:12:31.948 "seek_hole": false, 00:12:31.948 "seek_data": false, 00:12:31.948 "copy": true, 00:12:31.948 "nvme_iov_md": false 00:12:31.948 }, 00:12:31.948 "memory_domains": [ 00:12:31.948 { 00:12:31.948 "dma_device_id": "system", 00:12:31.948 "dma_device_type": 1 00:12:31.948 }, 00:12:31.948 { 00:12:31.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.948 "dma_device_type": 2 00:12:31.948 } 00:12:31.948 ], 00:12:31.948 "driver_specific": {} 00:12:31.948 } 00:12:31.948 ] 00:12:31.948 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:31.948 15:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:31.948 15:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:31.948 15:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.262 BaseBdev3 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:32.262 15:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.520 15:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.778 [ 00:12:32.778 { 00:12:32.778 "name": "BaseBdev3", 00:12:32.778 "aliases": [ 00:12:32.778 "8b8b2c41-405f-11ef-b2a4-e9dca065e82e" 00:12:32.778 ], 00:12:32.778 "product_name": "Malloc disk", 00:12:32.778 "block_size": 512, 00:12:32.778 "num_blocks": 65536, 00:12:32.778 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:32.778 "assigned_rate_limits": { 00:12:32.778 "rw_ios_per_sec": 0, 00:12:32.778 "rw_mbytes_per_sec": 0, 00:12:32.778 "r_mbytes_per_sec": 0, 00:12:32.778 "w_mbytes_per_sec": 0 00:12:32.778 }, 00:12:32.778 "claimed": false, 00:12:32.778 "zoned": false, 00:12:32.778 "supported_io_types": { 00:12:32.778 "read": true, 00:12:32.778 "write": true, 00:12:32.778 "unmap": true, 00:12:32.778 "flush": true, 00:12:32.778 "reset": true, 00:12:32.778 "nvme_admin": false, 00:12:32.778 "nvme_io": false, 00:12:32.778 "nvme_io_md": false, 00:12:32.778 "write_zeroes": true, 00:12:32.778 "zcopy": true, 00:12:32.778 "get_zone_info": false, 00:12:32.778 "zone_management": false, 00:12:32.778 "zone_append": false, 00:12:32.778 "compare": false, 00:12:32.778 "compare_and_write": false, 00:12:32.778 "abort": true, 00:12:32.778 "seek_hole": false, 00:12:32.778 "seek_data": false, 00:12:32.778 "copy": true, 00:12:32.778 "nvme_iov_md": false 00:12:32.778 }, 00:12:32.778 "memory_domains": [ 00:12:32.778 { 00:12:32.778 "dma_device_id": "system", 00:12:32.778 "dma_device_type": 1 00:12:32.778 }, 00:12:32.778 { 00:12:32.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.778 "dma_device_type": 2 00:12:32.778 } 00:12:32.778 ], 00:12:32.778 "driver_specific": {} 00:12:32.778 } 00:12:32.778 ] 00:12:32.778 15:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:32.778 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:32.778 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:32.778 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:33.035 [2024-07-12 15:00:58.611917] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.035 [2024-07-12 15:00:58.611990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.035 [2024-07-12 15:00:58.612001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.035 [2024-07-12 15:00:58.612762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.035 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.036 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.293 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.293 "name": "Existed_Raid", 00:12:33.293 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:33.293 "strip_size_kb": 0, 00:12:33.293 "state": "configuring", 00:12:33.293 "raid_level": "raid1", 00:12:33.293 "superblock": true, 00:12:33.293 "num_base_bdevs": 3, 00:12:33.293 "num_base_bdevs_discovered": 2, 00:12:33.293 "num_base_bdevs_operational": 3, 00:12:33.293 "base_bdevs_list": [ 00:12:33.293 { 00:12:33.293 "name": "BaseBdev1", 00:12:33.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.293 "is_configured": false, 00:12:33.293 "data_offset": 0, 00:12:33.293 "data_size": 0 00:12:33.293 }, 00:12:33.293 { 00:12:33.293 "name": "BaseBdev2", 00:12:33.293 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:33.293 "is_configured": true, 00:12:33.293 "data_offset": 2048, 00:12:33.293 "data_size": 63488 00:12:33.293 }, 00:12:33.293 { 00:12:33.293 "name": "BaseBdev3", 00:12:33.293 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:33.293 "is_configured": true, 00:12:33.293 "data_offset": 2048, 00:12:33.293 "data_size": 63488 00:12:33.293 } 00:12:33.293 ] 00:12:33.293 }' 00:12:33.293 15:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.293 15:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.552 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:33.809 [2024-07-12 15:00:59.527904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:33.809 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.810 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.068 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.068 "name": "Existed_Raid", 00:12:34.068 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:34.068 "strip_size_kb": 0, 00:12:34.068 "state": "configuring", 00:12:34.068 "raid_level": "raid1", 00:12:34.068 "superblock": true, 00:12:34.068 "num_base_bdevs": 3, 00:12:34.068 "num_base_bdevs_discovered": 1, 00:12:34.068 "num_base_bdevs_operational": 3, 00:12:34.068 "base_bdevs_list": [ 00:12:34.068 { 00:12:34.068 "name": "BaseBdev1", 00:12:34.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.068 "is_configured": false, 00:12:34.068 "data_offset": 0, 00:12:34.068 "data_size": 0 00:12:34.068 }, 00:12:34.068 { 00:12:34.068 "name": null, 00:12:34.068 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:34.068 "is_configured": false, 00:12:34.068 "data_offset": 2048, 00:12:34.068 "data_size": 63488 00:12:34.068 }, 00:12:34.068 { 00:12:34.068 "name": "BaseBdev3", 00:12:34.068 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:34.068 "is_configured": true, 00:12:34.068 "data_offset": 2048, 00:12:34.068 "data_size": 63488 00:12:34.068 } 00:12:34.068 ] 00:12:34.068 }' 00:12:34.068 15:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.068 15:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.326 15:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.326 15:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.585 15:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:34.585 15:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.152 [2024-07-12 15:01:00.668054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.152 BaseBdev1 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.152 15:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.444 [ 00:12:35.444 { 00:12:35.444 "name": "BaseBdev1", 00:12:35.444 "aliases": [ 00:12:35.444 "8d3187c9-405f-11ef-b2a4-e9dca065e82e" 00:12:35.444 ], 00:12:35.444 "product_name": "Malloc disk", 00:12:35.444 "block_size": 512, 00:12:35.444 "num_blocks": 65536, 00:12:35.444 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:35.444 "assigned_rate_limits": { 00:12:35.444 "rw_ios_per_sec": 0, 00:12:35.444 "rw_mbytes_per_sec": 0, 00:12:35.444 "r_mbytes_per_sec": 0, 00:12:35.444 "w_mbytes_per_sec": 0 00:12:35.444 }, 00:12:35.444 "claimed": true, 00:12:35.444 "claim_type": "exclusive_write", 00:12:35.444 "zoned": false, 00:12:35.444 "supported_io_types": { 00:12:35.444 "read": true, 00:12:35.444 "write": true, 00:12:35.444 "unmap": true, 00:12:35.444 "flush": true, 00:12:35.444 "reset": true, 00:12:35.444 "nvme_admin": false, 00:12:35.444 "nvme_io": false, 00:12:35.444 "nvme_io_md": false, 00:12:35.444 "write_zeroes": true, 00:12:35.444 "zcopy": true, 00:12:35.444 "get_zone_info": false, 00:12:35.444 "zone_management": false, 00:12:35.444 "zone_append": false, 00:12:35.444 "compare": false, 00:12:35.444 "compare_and_write": false, 00:12:35.444 "abort": true, 00:12:35.444 "seek_hole": false, 00:12:35.444 "seek_data": false, 00:12:35.444 "copy": true, 00:12:35.444 "nvme_iov_md": false 00:12:35.444 }, 00:12:35.444 "memory_domains": [ 00:12:35.444 { 00:12:35.444 "dma_device_id": "system", 00:12:35.444 "dma_device_type": 1 00:12:35.444 }, 00:12:35.444 { 00:12:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.444 "dma_device_type": 2 00:12:35.444 } 00:12:35.444 ], 00:12:35.444 "driver_specific": {} 00:12:35.444 } 00:12:35.444 ] 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.444 15:01:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.705 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.705 "name": "Existed_Raid", 00:12:35.705 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:35.705 "strip_size_kb": 0, 00:12:35.705 "state": "configuring", 00:12:35.705 "raid_level": "raid1", 00:12:35.705 "superblock": true, 00:12:35.705 "num_base_bdevs": 3, 00:12:35.705 "num_base_bdevs_discovered": 2, 00:12:35.705 "num_base_bdevs_operational": 3, 00:12:35.705 "base_bdevs_list": [ 00:12:35.705 { 00:12:35.705 "name": "BaseBdev1", 00:12:35.705 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:35.705 "is_configured": true, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 }, 00:12:35.705 { 00:12:35.705 "name": null, 00:12:35.705 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:35.705 "is_configured": false, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 }, 00:12:35.705 { 00:12:35.705 "name": "BaseBdev3", 00:12:35.705 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:35.705 "is_configured": true, 00:12:35.705 "data_offset": 2048, 00:12:35.705 "data_size": 63488 00:12:35.705 } 00:12:35.705 ] 00:12:35.705 }' 00:12:35.705 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.705 15:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.270 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.270 15:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:36.527 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:36.527 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:36.785 [2024-07-12 15:01:02.395868] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
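Each verify_raid_bdev_state call being traced here (the bdev/bdev_raid.sh@116-@128 markers) comes down to one RPC query filtered with jq, after which the helper compares fields of the captured JSON against the expected values it was given. A minimal standalone sketch of that query, reusing the socket path and raid name from this run (the rpc/sock shell variables are only shorthand introduced here, not part of the script), might look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # dump all raid bdevs known to this SPDK instance and keep only Existed_Raid
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # fields such as .state ("configuring" vs "online"), .raid_level, .num_base_bdevs_discovered
  # and .base_bdevs_list[].is_configured are what the helper checks against its arguments

This is only a reconstruction of the pattern visible in the trace; the exact comparisons live in bdev/bdev_raid.sh.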
00:12:36.785 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.042 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.042 "name": "Existed_Raid", 00:12:37.042 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:37.042 "strip_size_kb": 0, 00:12:37.042 "state": "configuring", 00:12:37.042 "raid_level": "raid1", 00:12:37.042 "superblock": true, 00:12:37.042 "num_base_bdevs": 3, 00:12:37.042 "num_base_bdevs_discovered": 1, 00:12:37.042 "num_base_bdevs_operational": 3, 00:12:37.042 "base_bdevs_list": [ 00:12:37.042 { 00:12:37.042 "name": "BaseBdev1", 00:12:37.042 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:37.042 "is_configured": true, 00:12:37.042 "data_offset": 2048, 00:12:37.042 "data_size": 63488 00:12:37.042 }, 00:12:37.042 { 00:12:37.042 "name": null, 00:12:37.042 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:37.042 "is_configured": false, 00:12:37.042 "data_offset": 2048, 00:12:37.042 "data_size": 63488 00:12:37.042 }, 00:12:37.042 { 00:12:37.042 "name": null, 00:12:37.042 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:37.042 "is_configured": false, 00:12:37.042 "data_offset": 2048, 00:12:37.042 "data_size": 63488 00:12:37.042 } 00:12:37.042 ] 00:12:37.042 }' 00:12:37.042 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.042 15:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.298 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.298 15:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.555 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:37.555 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:37.812 [2024-07-12 15:01:03.483817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
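The trace above detaches BaseBdev3 from the configuring array with bdev_raid_remove_base_bdev and then re-attaches it with bdev_raid_add_base_bdev, reading the third slot's is_configured flag around the operation. A rough standalone sketch of that remove/re-add cycle, using only commands and names that appear in the trace (rpc/sock are again just shorthand for the paths shown above), could be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # detach the third base bdev; the superblock-enabled raid stays in "configuring"
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev3
  # slot 2 should now report is_configured == false
  $rpc -s $sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'
  # re-attach the same bdev to the existing raid and re-check the slot
  $rpc -s $sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  $rpc -s $sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'

This mirrors the order of operations in the trace; it is a sketch, not the test script itself.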
00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.812 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.111 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.111 "name": "Existed_Raid", 00:12:38.111 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:38.111 "strip_size_kb": 0, 00:12:38.111 "state": "configuring", 00:12:38.111 "raid_level": "raid1", 00:12:38.111 "superblock": true, 00:12:38.111 "num_base_bdevs": 3, 00:12:38.111 "num_base_bdevs_discovered": 2, 00:12:38.111 "num_base_bdevs_operational": 3, 00:12:38.111 "base_bdevs_list": [ 00:12:38.111 { 00:12:38.111 "name": "BaseBdev1", 00:12:38.111 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:38.111 "is_configured": true, 00:12:38.111 "data_offset": 2048, 00:12:38.111 "data_size": 63488 00:12:38.111 }, 00:12:38.111 { 00:12:38.111 "name": null, 00:12:38.111 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:38.111 "is_configured": false, 00:12:38.111 "data_offset": 2048, 00:12:38.111 "data_size": 63488 00:12:38.111 }, 00:12:38.111 { 00:12:38.111 "name": "BaseBdev3", 00:12:38.111 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:38.111 "is_configured": true, 00:12:38.111 "data_offset": 2048, 00:12:38.111 "data_size": 63488 00:12:38.111 } 00:12:38.111 ] 00:12:38.111 }' 00:12:38.111 15:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.111 15:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.382 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.382 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.640 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:38.640 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:38.898 [2024-07-12 15:01:04.603886] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.898 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.156 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.156 "name": "Existed_Raid", 00:12:39.156 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:39.156 "strip_size_kb": 0, 00:12:39.156 "state": "configuring", 00:12:39.156 "raid_level": "raid1", 00:12:39.156 "superblock": true, 00:12:39.156 "num_base_bdevs": 3, 00:12:39.156 "num_base_bdevs_discovered": 1, 00:12:39.156 "num_base_bdevs_operational": 3, 00:12:39.156 "base_bdevs_list": [ 00:12:39.156 { 00:12:39.156 "name": null, 00:12:39.156 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:39.156 "is_configured": false, 00:12:39.156 "data_offset": 2048, 00:12:39.156 "data_size": 63488 00:12:39.156 }, 00:12:39.156 { 00:12:39.156 "name": null, 00:12:39.156 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:39.156 "is_configured": false, 00:12:39.156 "data_offset": 2048, 00:12:39.156 "data_size": 63488 00:12:39.156 }, 00:12:39.156 { 00:12:39.156 "name": "BaseBdev3", 00:12:39.156 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:39.156 "is_configured": true, 00:12:39.157 "data_offset": 2048, 00:12:39.157 "data_size": 63488 00:12:39.157 } 00:12:39.157 ] 00:12:39.157 }' 00:12:39.157 15:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.157 15:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.731 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.731 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.731 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:39.731 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.995 [2024-07-12 15:01:05.760216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.995 15:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.262 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.262 "name": "Existed_Raid", 00:12:40.262 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:40.262 "strip_size_kb": 0, 00:12:40.262 "state": "configuring", 00:12:40.262 "raid_level": "raid1", 00:12:40.262 "superblock": true, 00:12:40.262 "num_base_bdevs": 3, 00:12:40.262 "num_base_bdevs_discovered": 2, 00:12:40.262 "num_base_bdevs_operational": 3, 00:12:40.262 "base_bdevs_list": [ 00:12:40.262 { 00:12:40.262 "name": null, 00:12:40.262 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:40.262 "is_configured": false, 00:12:40.262 "data_offset": 2048, 00:12:40.262 "data_size": 63488 00:12:40.262 }, 00:12:40.262 { 00:12:40.262 "name": "BaseBdev2", 00:12:40.262 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:40.262 "is_configured": true, 00:12:40.262 "data_offset": 2048, 00:12:40.262 "data_size": 63488 00:12:40.262 }, 00:12:40.262 { 00:12:40.262 "name": "BaseBdev3", 00:12:40.262 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:40.262 "is_configured": true, 00:12:40.262 "data_offset": 2048, 00:12:40.262 "data_size": 63488 00:12:40.262 } 00:12:40.262 ] 00:12:40.262 }' 00:12:40.262 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.262 15:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.532 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.532 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.802 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:40.802 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.802 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:41.394 15:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8d3187c9-405f-11ef-b2a4-e9dca065e82e 00:12:41.394 [2024-07-12 15:01:07.180489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:41.394 [2024-07-12 15:01:07.180558] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a6ad434f00 00:12:41.394 [2024-07-12 15:01:07.180564] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.394 [2024-07-12 15:01:07.180587] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a6ad497e20 00:12:41.394 [2024-07-12 15:01:07.180642] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a6ad434f00 00:12:41.394 [2024-07-12 15:01:07.180647] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10a6ad434f00 00:12:41.394 [2024-07-12 15:01:07.180670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.394 NewBaseBdev 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:41.394 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.652 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:41.909 [ 00:12:41.909 { 00:12:41.909 "name": "NewBaseBdev", 00:12:41.909 "aliases": [ 00:12:41.909 "8d3187c9-405f-11ef-b2a4-e9dca065e82e" 00:12:41.909 ], 00:12:41.909 "product_name": "Malloc disk", 00:12:41.909 "block_size": 512, 00:12:41.909 "num_blocks": 65536, 00:12:41.909 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:41.909 "assigned_rate_limits": { 00:12:41.909 "rw_ios_per_sec": 0, 00:12:41.909 "rw_mbytes_per_sec": 0, 00:12:41.909 "r_mbytes_per_sec": 0, 00:12:41.909 "w_mbytes_per_sec": 0 00:12:41.909 }, 00:12:41.909 "claimed": true, 00:12:41.909 "claim_type": "exclusive_write", 00:12:41.909 "zoned": false, 00:12:41.909 "supported_io_types": { 00:12:41.909 "read": true, 00:12:41.909 "write": true, 00:12:41.909 "unmap": true, 00:12:41.909 "flush": true, 00:12:41.909 "reset": true, 00:12:41.909 "nvme_admin": false, 00:12:41.909 "nvme_io": false, 00:12:41.909 "nvme_io_md": false, 00:12:41.909 "write_zeroes": true, 00:12:41.909 "zcopy": true, 00:12:41.909 "get_zone_info": false, 00:12:41.909 "zone_management": false, 00:12:41.909 "zone_append": false, 00:12:41.909 "compare": false, 00:12:41.909 "compare_and_write": false, 00:12:41.909 "abort": true, 00:12:41.909 "seek_hole": false, 00:12:41.909 "seek_data": false, 00:12:41.909 "copy": true, 00:12:41.909 "nvme_iov_md": false 00:12:41.909 }, 00:12:41.909 "memory_domains": [ 00:12:41.909 { 00:12:41.909 "dma_device_id": "system", 00:12:41.909 "dma_device_type": 1 00:12:41.909 }, 00:12:41.909 { 00:12:41.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.909 "dma_device_type": 2 00:12:41.909 } 00:12:41.909 ], 00:12:41.909 "driver_specific": {} 00:12:41.909 } 00:12:41.909 ] 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.909 15:01:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.909 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.167 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.167 "name": "Existed_Raid", 00:12:42.167 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:42.167 "strip_size_kb": 0, 00:12:42.167 "state": "online", 00:12:42.167 "raid_level": "raid1", 00:12:42.167 "superblock": true, 00:12:42.167 "num_base_bdevs": 3, 00:12:42.167 "num_base_bdevs_discovered": 3, 00:12:42.167 "num_base_bdevs_operational": 3, 00:12:42.167 "base_bdevs_list": [ 00:12:42.167 { 00:12:42.167 "name": "NewBaseBdev", 00:12:42.167 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:42.167 "is_configured": true, 00:12:42.167 "data_offset": 2048, 00:12:42.167 "data_size": 63488 00:12:42.167 }, 00:12:42.167 { 00:12:42.167 "name": "BaseBdev2", 00:12:42.167 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:42.167 "is_configured": true, 00:12:42.167 "data_offset": 2048, 00:12:42.167 "data_size": 63488 00:12:42.167 }, 00:12:42.167 { 00:12:42.167 "name": "BaseBdev3", 00:12:42.167 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:42.167 "is_configured": true, 00:12:42.167 "data_offset": 2048, 00:12:42.167 "data_size": 63488 00:12:42.167 } 00:12:42.167 ] 00:12:42.167 }' 00:12:42.167 15:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.167 15:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:42.734 15:01:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:42.734 [2024-07-12 15:01:08.480419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.734 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:42.734 "name": "Existed_Raid", 00:12:42.734 "aliases": [ 00:12:42.734 "8bf7cf94-405f-11ef-b2a4-e9dca065e82e" 00:12:42.734 ], 00:12:42.734 "product_name": "Raid Volume", 00:12:42.734 "block_size": 512, 00:12:42.734 "num_blocks": 63488, 00:12:42.734 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:42.734 "assigned_rate_limits": { 00:12:42.734 "rw_ios_per_sec": 0, 00:12:42.734 "rw_mbytes_per_sec": 0, 00:12:42.734 "r_mbytes_per_sec": 0, 00:12:42.734 "w_mbytes_per_sec": 0 00:12:42.734 }, 00:12:42.734 "claimed": false, 00:12:42.734 "zoned": false, 00:12:42.734 "supported_io_types": { 00:12:42.734 "read": true, 00:12:42.734 "write": true, 00:12:42.734 "unmap": false, 00:12:42.734 "flush": false, 00:12:42.734 "reset": true, 00:12:42.734 "nvme_admin": false, 00:12:42.734 "nvme_io": false, 00:12:42.734 "nvme_io_md": false, 00:12:42.734 "write_zeroes": true, 00:12:42.734 "zcopy": false, 00:12:42.734 "get_zone_info": false, 00:12:42.734 "zone_management": false, 00:12:42.734 "zone_append": false, 00:12:42.734 "compare": false, 00:12:42.734 "compare_and_write": false, 00:12:42.734 "abort": false, 00:12:42.734 "seek_hole": false, 00:12:42.734 "seek_data": false, 00:12:42.734 "copy": false, 00:12:42.734 "nvme_iov_md": false 00:12:42.734 }, 00:12:42.734 "memory_domains": [ 00:12:42.734 { 00:12:42.734 "dma_device_id": "system", 00:12:42.735 "dma_device_type": 1 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.735 "dma_device_type": 2 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "dma_device_id": "system", 00:12:42.735 "dma_device_type": 1 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.735 "dma_device_type": 2 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "dma_device_id": "system", 00:12:42.735 "dma_device_type": 1 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.735 "dma_device_type": 2 00:12:42.735 } 00:12:42.735 ], 00:12:42.735 "driver_specific": { 00:12:42.735 "raid": { 00:12:42.735 "uuid": "8bf7cf94-405f-11ef-b2a4-e9dca065e82e", 00:12:42.735 "strip_size_kb": 0, 00:12:42.735 "state": "online", 00:12:42.735 "raid_level": "raid1", 00:12:42.735 "superblock": true, 00:12:42.735 "num_base_bdevs": 3, 00:12:42.735 "num_base_bdevs_discovered": 3, 00:12:42.735 "num_base_bdevs_operational": 3, 00:12:42.735 "base_bdevs_list": [ 00:12:42.735 { 00:12:42.735 "name": "NewBaseBdev", 00:12:42.735 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:42.735 "is_configured": true, 00:12:42.735 "data_offset": 2048, 00:12:42.735 "data_size": 63488 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "name": "BaseBdev2", 00:12:42.735 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:42.735 "is_configured": true, 00:12:42.735 "data_offset": 2048, 00:12:42.735 "data_size": 63488 00:12:42.735 }, 00:12:42.735 { 00:12:42.735 "name": "BaseBdev3", 00:12:42.735 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:42.735 "is_configured": true, 00:12:42.735 "data_offset": 2048, 00:12:42.735 "data_size": 63488 00:12:42.735 } 00:12:42.735 ] 00:12:42.735 } 00:12:42.735 } 00:12:42.735 }' 00:12:42.735 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.735 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:42.735 BaseBdev2 00:12:42.735 BaseBdev3' 00:12:42.735 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:42.735 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:42.735 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:42.993 "name": "NewBaseBdev", 00:12:42.993 "aliases": [ 00:12:42.993 "8d3187c9-405f-11ef-b2a4-e9dca065e82e" 00:12:42.993 ], 00:12:42.993 "product_name": "Malloc disk", 00:12:42.993 "block_size": 512, 00:12:42.993 "num_blocks": 65536, 00:12:42.993 "uuid": "8d3187c9-405f-11ef-b2a4-e9dca065e82e", 00:12:42.993 "assigned_rate_limits": { 00:12:42.993 "rw_ios_per_sec": 0, 00:12:42.993 "rw_mbytes_per_sec": 0, 00:12:42.993 "r_mbytes_per_sec": 0, 00:12:42.993 "w_mbytes_per_sec": 0 00:12:42.993 }, 00:12:42.993 "claimed": true, 00:12:42.993 "claim_type": "exclusive_write", 00:12:42.993 "zoned": false, 00:12:42.993 "supported_io_types": { 00:12:42.993 "read": true, 00:12:42.993 "write": true, 00:12:42.993 "unmap": true, 00:12:42.993 "flush": true, 00:12:42.993 "reset": true, 00:12:42.993 "nvme_admin": false, 00:12:42.993 "nvme_io": false, 00:12:42.993 "nvme_io_md": false, 00:12:42.993 "write_zeroes": true, 00:12:42.993 "zcopy": true, 00:12:42.993 "get_zone_info": false, 00:12:42.993 "zone_management": false, 00:12:42.993 "zone_append": false, 00:12:42.993 "compare": false, 00:12:42.993 "compare_and_write": false, 00:12:42.993 "abort": true, 00:12:42.993 "seek_hole": false, 00:12:42.993 "seek_data": false, 00:12:42.993 "copy": true, 00:12:42.993 "nvme_iov_md": false 00:12:42.993 }, 00:12:42.993 "memory_domains": [ 00:12:42.993 { 00:12:42.993 "dma_device_id": "system", 00:12:42.993 "dma_device_type": 1 00:12:42.993 }, 00:12:42.993 { 00:12:42.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.993 "dma_device_type": 2 00:12:42.993 } 00:12:42.993 ], 00:12:42.993 "driver_specific": {} 00:12:42.993 }' 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:42.993 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:12:43.251 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.251 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.251 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:43.251 15:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.510 "name": "BaseBdev2", 00:12:43.510 "aliases": [ 00:12:43.510 "8b1aded2-405f-11ef-b2a4-e9dca065e82e" 00:12:43.510 ], 00:12:43.510 "product_name": "Malloc disk", 00:12:43.510 "block_size": 512, 00:12:43.510 "num_blocks": 65536, 00:12:43.510 "uuid": "8b1aded2-405f-11ef-b2a4-e9dca065e82e", 00:12:43.510 "assigned_rate_limits": { 00:12:43.510 "rw_ios_per_sec": 0, 00:12:43.510 "rw_mbytes_per_sec": 0, 00:12:43.510 "r_mbytes_per_sec": 0, 00:12:43.510 "w_mbytes_per_sec": 0 00:12:43.510 }, 00:12:43.510 "claimed": true, 00:12:43.510 "claim_type": "exclusive_write", 00:12:43.510 "zoned": false, 00:12:43.510 "supported_io_types": { 00:12:43.510 "read": true, 00:12:43.510 "write": true, 00:12:43.510 "unmap": true, 00:12:43.510 "flush": true, 00:12:43.510 "reset": true, 00:12:43.510 "nvme_admin": false, 00:12:43.510 "nvme_io": false, 00:12:43.510 "nvme_io_md": false, 00:12:43.510 "write_zeroes": true, 00:12:43.510 "zcopy": true, 00:12:43.510 "get_zone_info": false, 00:12:43.510 "zone_management": false, 00:12:43.510 "zone_append": false, 00:12:43.510 "compare": false, 00:12:43.510 "compare_and_write": false, 00:12:43.510 "abort": true, 00:12:43.510 "seek_hole": false, 00:12:43.510 "seek_data": false, 00:12:43.510 "copy": true, 00:12:43.510 "nvme_iov_md": false 00:12:43.510 }, 00:12:43.510 "memory_domains": [ 00:12:43.510 { 00:12:43.510 "dma_device_id": "system", 00:12:43.510 "dma_device_type": 1 00:12:43.510 }, 00:12:43.510 { 00:12:43.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.510 "dma_device_type": 2 00:12:43.510 } 00:12:43.510 ], 00:12:43.510 "driver_specific": {} 00:12:43.510 }' 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.510 15:01:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:43.510 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:43.769 "name": "BaseBdev3", 00:12:43.769 "aliases": [ 00:12:43.769 "8b8b2c41-405f-11ef-b2a4-e9dca065e82e" 00:12:43.769 ], 00:12:43.769 "product_name": "Malloc disk", 00:12:43.769 "block_size": 512, 00:12:43.769 "num_blocks": 65536, 00:12:43.769 "uuid": "8b8b2c41-405f-11ef-b2a4-e9dca065e82e", 00:12:43.769 "assigned_rate_limits": { 00:12:43.769 "rw_ios_per_sec": 0, 00:12:43.769 "rw_mbytes_per_sec": 0, 00:12:43.769 "r_mbytes_per_sec": 0, 00:12:43.769 "w_mbytes_per_sec": 0 00:12:43.769 }, 00:12:43.769 "claimed": true, 00:12:43.769 "claim_type": "exclusive_write", 00:12:43.769 "zoned": false, 00:12:43.769 "supported_io_types": { 00:12:43.769 "read": true, 00:12:43.769 "write": true, 00:12:43.769 "unmap": true, 00:12:43.769 "flush": true, 00:12:43.769 "reset": true, 00:12:43.769 "nvme_admin": false, 00:12:43.769 "nvme_io": false, 00:12:43.769 "nvme_io_md": false, 00:12:43.769 "write_zeroes": true, 00:12:43.769 "zcopy": true, 00:12:43.769 "get_zone_info": false, 00:12:43.769 "zone_management": false, 00:12:43.769 "zone_append": false, 00:12:43.769 "compare": false, 00:12:43.769 "compare_and_write": false, 00:12:43.769 "abort": true, 00:12:43.769 "seek_hole": false, 00:12:43.769 "seek_data": false, 00:12:43.769 "copy": true, 00:12:43.769 "nvme_iov_md": false 00:12:43.769 }, 00:12:43.769 "memory_domains": [ 00:12:43.769 { 00:12:43.769 "dma_device_id": "system", 00:12:43.769 "dma_device_type": 1 00:12:43.769 }, 00:12:43.769 { 00:12:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.769 "dma_device_type": 2 00:12:43.769 } 00:12:43.769 ], 00:12:43.769 "driver_specific": {} 00:12:43.769 }' 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.769 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:12:44.335 [2024-07-12 15:01:09.892373] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.335 [2024-07-12 15:01:09.892407] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.335 [2024-07-12 15:01:09.892437] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.335 [2024-07-12 15:01:09.892550] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.335 [2024-07-12 15:01:09.892555] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a6ad434f00 name Existed_Raid, state offline 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56836 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56836 ']' 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56836 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56836 00:12:44.335 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:12:44.336 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:44.336 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:44.336 killing process with pid 56836 00:12:44.336 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56836' 00:12:44.336 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56836 00:12:44.336 15:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56836 00:12:44.336 [2024-07-12 15:01:09.919724] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.336 [2024-07-12 15:01:09.946970] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.607 15:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:44.607 ************************************ 00:12:44.607 END TEST raid_state_function_test_sb 00:12:44.607 ************************************ 00:12:44.607 00:12:44.607 real 0m24.228s 00:12:44.607 user 0m44.084s 00:12:44.607 sys 0m3.442s 00:12:44.607 15:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.607 15:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.607 15:01:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:44.607 15:01:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:44.608 15:01:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:44.608 15:01:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.608 15:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.608 ************************************ 00:12:44.608 START TEST raid_superblock_test 00:12:44.608 ************************************ 00:12:44.608 15:01:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57568 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57568 /var/tmp/spdk-raid.sock 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57568 ']' 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.608 15:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.608 [2024-07-12 15:01:10.263364] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:12:44.608 [2024-07-12 15:01:10.263556] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:45.177 EAL: TSC is not safe to use in SMP mode 00:12:45.177 EAL: TSC is not invariant 00:12:45.177 [2024-07-12 15:01:10.819164] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.177 [2024-07-12 15:01:10.918476] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:45.177 [2024-07-12 15:01:10.921212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.177 [2024-07-12 15:01:10.922318] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.177 [2024-07-12 15:01:10.922339] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.744 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:46.002 malloc1 00:12:46.002 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:46.260 [2024-07-12 15:01:11.932199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.260 [2024-07-12 15:01:11.932285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.261 [2024-07-12 15:01:11.932302] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834780 00:12:46.261 [2024-07-12 15:01:11.932313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.261 [2024-07-12 15:01:11.933576] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.261 [2024-07-12 15:01:11.933607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:46.261 pt1 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.261 15:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:46.520 malloc2 00:12:46.520 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:46.778 [2024-07-12 15:01:12.400196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:46.778 [2024-07-12 15:01:12.400267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.778 [2024-07-12 15:01:12.400282] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834c80 00:12:46.778 [2024-07-12 15:01:12.400291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.778 [2024-07-12 15:01:12.401194] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.778 [2024-07-12 15:01:12.401217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:46.778 pt2 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.778 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:47.037 malloc3 00:12:47.037 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.295 [2024-07-12 15:01:12.916200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.295 [2024-07-12 15:01:12.916262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.295 [2024-07-12 15:01:12.916276] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56835180 00:12:47.295 [2024-07-12 15:01:12.916284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.295 [2024-07-12 15:01:12.916994] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.295 [2024-07-12 15:01:12.917023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.295 pt3 00:12:47.295 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:47.295 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:47.295 15:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:47.552 [2024-07-12 15:01:13.156226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.552 [2024-07-12 15:01:13.156824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.552 [2024-07-12 15:01:13.156848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.552 [2024-07-12 15:01:13.156901] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f1d56835400 00:12:47.552 [2024-07-12 15:01:13.156907] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.552 [2024-07-12 15:01:13.156943] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f1d56897e20 00:12:47.552 [2024-07-12 15:01:13.157032] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f1d56835400 00:12:47.552 [2024-07-12 15:01:13.157037] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f1d56835400 00:12:47.552 [2024-07-12 15:01:13.157065] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.552 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.809 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.809 "name": "raid_bdev1", 00:12:47.809 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:47.809 "strip_size_kb": 0, 00:12:47.809 "state": "online", 00:12:47.809 "raid_level": "raid1", 00:12:47.809 "superblock": true, 00:12:47.809 "num_base_bdevs": 3, 00:12:47.809 
"num_base_bdevs_discovered": 3, 00:12:47.809 "num_base_bdevs_operational": 3, 00:12:47.809 "base_bdevs_list": [ 00:12:47.809 { 00:12:47.809 "name": "pt1", 00:12:47.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.809 "is_configured": true, 00:12:47.809 "data_offset": 2048, 00:12:47.809 "data_size": 63488 00:12:47.809 }, 00:12:47.809 { 00:12:47.809 "name": "pt2", 00:12:47.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.809 "is_configured": true, 00:12:47.809 "data_offset": 2048, 00:12:47.809 "data_size": 63488 00:12:47.809 }, 00:12:47.809 { 00:12:47.809 "name": "pt3", 00:12:47.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.809 "is_configured": true, 00:12:47.809 "data_offset": 2048, 00:12:47.809 "data_size": 63488 00:12:47.809 } 00:12:47.809 ] 00:12:47.809 }' 00:12:47.809 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.809 15:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:48.069 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:48.338 [2024-07-12 15:01:13.976230] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.338 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:48.338 "name": "raid_bdev1", 00:12:48.338 "aliases": [ 00:12:48.338 "94a318a9-405f-11ef-b2a4-e9dca065e82e" 00:12:48.338 ], 00:12:48.338 "product_name": "Raid Volume", 00:12:48.338 "block_size": 512, 00:12:48.338 "num_blocks": 63488, 00:12:48.338 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:48.338 "assigned_rate_limits": { 00:12:48.338 "rw_ios_per_sec": 0, 00:12:48.338 "rw_mbytes_per_sec": 0, 00:12:48.338 "r_mbytes_per_sec": 0, 00:12:48.338 "w_mbytes_per_sec": 0 00:12:48.338 }, 00:12:48.338 "claimed": false, 00:12:48.338 "zoned": false, 00:12:48.338 "supported_io_types": { 00:12:48.338 "read": true, 00:12:48.338 "write": true, 00:12:48.338 "unmap": false, 00:12:48.338 "flush": false, 00:12:48.338 "reset": true, 00:12:48.338 "nvme_admin": false, 00:12:48.338 "nvme_io": false, 00:12:48.338 "nvme_io_md": false, 00:12:48.338 "write_zeroes": true, 00:12:48.338 "zcopy": false, 00:12:48.338 "get_zone_info": false, 00:12:48.338 "zone_management": false, 00:12:48.338 "zone_append": false, 00:12:48.338 "compare": false, 00:12:48.338 "compare_and_write": false, 00:12:48.338 "abort": false, 00:12:48.338 "seek_hole": false, 00:12:48.338 "seek_data": false, 00:12:48.338 "copy": false, 00:12:48.338 "nvme_iov_md": false 00:12:48.338 }, 00:12:48.338 "memory_domains": [ 00:12:48.338 { 00:12:48.338 "dma_device_id": "system", 00:12:48.338 "dma_device_type": 1 00:12:48.338 }, 00:12:48.338 { 
00:12:48.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.338 "dma_device_type": 2 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "dma_device_id": "system", 00:12:48.338 "dma_device_type": 1 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.338 "dma_device_type": 2 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "dma_device_id": "system", 00:12:48.338 "dma_device_type": 1 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.338 "dma_device_type": 2 00:12:48.338 } 00:12:48.338 ], 00:12:48.338 "driver_specific": { 00:12:48.338 "raid": { 00:12:48.338 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:48.338 "strip_size_kb": 0, 00:12:48.338 "state": "online", 00:12:48.338 "raid_level": "raid1", 00:12:48.338 "superblock": true, 00:12:48.338 "num_base_bdevs": 3, 00:12:48.338 "num_base_bdevs_discovered": 3, 00:12:48.338 "num_base_bdevs_operational": 3, 00:12:48.338 "base_bdevs_list": [ 00:12:48.338 { 00:12:48.338 "name": "pt1", 00:12:48.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.338 "is_configured": true, 00:12:48.338 "data_offset": 2048, 00:12:48.338 "data_size": 63488 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "name": "pt2", 00:12:48.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.338 "is_configured": true, 00:12:48.338 "data_offset": 2048, 00:12:48.338 "data_size": 63488 00:12:48.338 }, 00:12:48.338 { 00:12:48.338 "name": "pt3", 00:12:48.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.338 "is_configured": true, 00:12:48.338 "data_offset": 2048, 00:12:48.338 "data_size": 63488 00:12:48.338 } 00:12:48.338 ] 00:12:48.338 } 00:12:48.338 } 00:12:48.338 }' 00:12:48.338 15:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.338 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:48.338 pt2 00:12:48.338 pt3' 00:12:48.338 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.338 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:48.338 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.597 "name": "pt1", 00:12:48.597 "aliases": [ 00:12:48.597 "00000000-0000-0000-0000-000000000001" 00:12:48.597 ], 00:12:48.597 "product_name": "passthru", 00:12:48.597 "block_size": 512, 00:12:48.597 "num_blocks": 65536, 00:12:48.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.597 "assigned_rate_limits": { 00:12:48.597 "rw_ios_per_sec": 0, 00:12:48.597 "rw_mbytes_per_sec": 0, 00:12:48.597 "r_mbytes_per_sec": 0, 00:12:48.597 "w_mbytes_per_sec": 0 00:12:48.597 }, 00:12:48.597 "claimed": true, 00:12:48.597 "claim_type": "exclusive_write", 00:12:48.597 "zoned": false, 00:12:48.597 "supported_io_types": { 00:12:48.597 "read": true, 00:12:48.597 "write": true, 00:12:48.597 "unmap": true, 00:12:48.597 "flush": true, 00:12:48.597 "reset": true, 00:12:48.597 "nvme_admin": false, 00:12:48.597 "nvme_io": false, 00:12:48.597 "nvme_io_md": false, 00:12:48.597 "write_zeroes": true, 00:12:48.597 "zcopy": true, 00:12:48.597 "get_zone_info": false, 00:12:48.597 "zone_management": false, 00:12:48.597 "zone_append": false, 00:12:48.597 
"compare": false, 00:12:48.597 "compare_and_write": false, 00:12:48.597 "abort": true, 00:12:48.597 "seek_hole": false, 00:12:48.597 "seek_data": false, 00:12:48.597 "copy": true, 00:12:48.597 "nvme_iov_md": false 00:12:48.597 }, 00:12:48.597 "memory_domains": [ 00:12:48.597 { 00:12:48.597 "dma_device_id": "system", 00:12:48.597 "dma_device_type": 1 00:12:48.597 }, 00:12:48.597 { 00:12:48.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.597 "dma_device_type": 2 00:12:48.597 } 00:12:48.597 ], 00:12:48.597 "driver_specific": { 00:12:48.597 "passthru": { 00:12:48.597 "name": "pt1", 00:12:48.597 "base_bdev_name": "malloc1" 00:12:48.597 } 00:12:48.597 } 00:12:48.597 }' 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:48.597 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.857 "name": "pt2", 00:12:48.857 "aliases": [ 00:12:48.857 "00000000-0000-0000-0000-000000000002" 00:12:48.857 ], 00:12:48.857 "product_name": "passthru", 00:12:48.857 "block_size": 512, 00:12:48.857 "num_blocks": 65536, 00:12:48.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.857 "assigned_rate_limits": { 00:12:48.857 "rw_ios_per_sec": 0, 00:12:48.857 "rw_mbytes_per_sec": 0, 00:12:48.857 "r_mbytes_per_sec": 0, 00:12:48.857 "w_mbytes_per_sec": 0 00:12:48.857 }, 00:12:48.857 "claimed": true, 00:12:48.857 "claim_type": "exclusive_write", 00:12:48.857 "zoned": false, 00:12:48.857 "supported_io_types": { 00:12:48.857 "read": true, 00:12:48.857 "write": true, 00:12:48.857 "unmap": true, 00:12:48.857 "flush": true, 00:12:48.857 "reset": true, 00:12:48.857 "nvme_admin": false, 00:12:48.857 "nvme_io": false, 00:12:48.857 "nvme_io_md": false, 00:12:48.857 "write_zeroes": true, 00:12:48.857 "zcopy": true, 00:12:48.857 "get_zone_info": false, 00:12:48.857 "zone_management": false, 00:12:48.857 "zone_append": false, 00:12:48.857 "compare": false, 00:12:48.857 "compare_and_write": false, 00:12:48.857 "abort": true, 00:12:48.857 "seek_hole": false, 00:12:48.857 "seek_data": false, 
00:12:48.857 "copy": true, 00:12:48.857 "nvme_iov_md": false 00:12:48.857 }, 00:12:48.857 "memory_domains": [ 00:12:48.857 { 00:12:48.857 "dma_device_id": "system", 00:12:48.857 "dma_device_type": 1 00:12:48.857 }, 00:12:48.857 { 00:12:48.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.857 "dma_device_type": 2 00:12:48.857 } 00:12:48.857 ], 00:12:48.857 "driver_specific": { 00:12:48.857 "passthru": { 00:12:48.857 "name": "pt2", 00:12:48.857 "base_bdev_name": "malloc2" 00:12:48.857 } 00:12:48.857 } 00:12:48.857 }' 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.857 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.116 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.116 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.116 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:49.116 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:49.374 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:49.374 "name": "pt3", 00:12:49.374 "aliases": [ 00:12:49.374 "00000000-0000-0000-0000-000000000003" 00:12:49.374 ], 00:12:49.374 "product_name": "passthru", 00:12:49.374 "block_size": 512, 00:12:49.374 "num_blocks": 65536, 00:12:49.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:49.375 "assigned_rate_limits": { 00:12:49.375 "rw_ios_per_sec": 0, 00:12:49.375 "rw_mbytes_per_sec": 0, 00:12:49.375 "r_mbytes_per_sec": 0, 00:12:49.375 "w_mbytes_per_sec": 0 00:12:49.375 }, 00:12:49.375 "claimed": true, 00:12:49.375 "claim_type": "exclusive_write", 00:12:49.375 "zoned": false, 00:12:49.375 "supported_io_types": { 00:12:49.375 "read": true, 00:12:49.375 "write": true, 00:12:49.375 "unmap": true, 00:12:49.375 "flush": true, 00:12:49.375 "reset": true, 00:12:49.375 "nvme_admin": false, 00:12:49.375 "nvme_io": false, 00:12:49.375 "nvme_io_md": false, 00:12:49.375 "write_zeroes": true, 00:12:49.375 "zcopy": true, 00:12:49.375 "get_zone_info": false, 00:12:49.375 "zone_management": false, 00:12:49.375 "zone_append": false, 00:12:49.375 "compare": false, 00:12:49.375 "compare_and_write": false, 00:12:49.375 "abort": true, 00:12:49.375 "seek_hole": false, 00:12:49.375 "seek_data": false, 00:12:49.375 "copy": true, 00:12:49.375 "nvme_iov_md": false 00:12:49.375 }, 00:12:49.375 "memory_domains": [ 00:12:49.375 { 00:12:49.375 "dma_device_id": 
"system", 00:12:49.375 "dma_device_type": 1 00:12:49.375 }, 00:12:49.375 { 00:12:49.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.375 "dma_device_type": 2 00:12:49.375 } 00:12:49.375 ], 00:12:49.375 "driver_specific": { 00:12:49.375 "passthru": { 00:12:49.375 "name": "pt3", 00:12:49.375 "base_bdev_name": "malloc3" 00:12:49.375 } 00:12:49.375 } 00:12:49.375 }' 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.375 15:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:49.375 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:49.633 [2024-07-12 15:01:15.236256] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.633 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=94a318a9-405f-11ef-b2a4-e9dca065e82e 00:12:49.633 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 94a318a9-405f-11ef-b2a4-e9dca065e82e ']' 00:12:49.633 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:49.890 [2024-07-12 15:01:15.516223] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.890 [2024-07-12 15:01:15.516242] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.890 [2024-07-12 15:01:15.516264] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.890 [2024-07-12 15:01:15.516280] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.890 [2024-07-12 15:01:15.516285] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56835400 name raid_bdev1, state offline 00:12:49.890 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.890 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:50.148 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:50.148 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:12:50.148 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.148 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:50.407 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.407 15:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:50.666 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.666 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:50.925 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:50.925 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.202 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.203 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:51.203 15:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:51.468 [2024-07-12 15:01:17.068333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:51.468 [2024-07-12 15:01:17.068967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:51.468 [2024-07-12 15:01:17.068987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:51.469 
[2024-07-12 15:01:17.069011] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:51.469 [2024-07-12 15:01:17.069051] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:51.469 [2024-07-12 15:01:17.069063] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:51.469 [2024-07-12 15:01:17.069071] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.469 [2024-07-12 15:01:17.069076] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56835180 name raid_bdev1, state configuring 00:12:51.469 request: 00:12:51.469 { 00:12:51.469 "name": "raid_bdev1", 00:12:51.469 "raid_level": "raid1", 00:12:51.469 "base_bdevs": [ 00:12:51.469 "malloc1", 00:12:51.469 "malloc2", 00:12:51.469 "malloc3" 00:12:51.469 ], 00:12:51.469 "superblock": false, 00:12:51.469 "method": "bdev_raid_create", 00:12:51.469 "req_id": 1 00:12:51.469 } 00:12:51.469 Got JSON-RPC error response 00:12:51.469 response: 00:12:51.469 { 00:12:51.469 "code": -17, 00:12:51.469 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:51.469 } 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.469 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:51.727 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:51.727 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:51.727 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:51.986 [2024-07-12 15:01:17.560327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:51.986 [2024-07-12 15:01:17.560379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.986 [2024-07-12 15:01:17.560393] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834c80 00:12:51.986 [2024-07-12 15:01:17.560401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.986 [2024-07-12 15:01:17.561050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.986 [2024-07-12 15:01:17.561075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:51.986 [2024-07-12 15:01:17.561102] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:51.986 [2024-07-12 15:01:17.561123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:51.986 pt1 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:51.986 
15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.986 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.245 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.245 "name": "raid_bdev1", 00:12:52.245 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:52.245 "strip_size_kb": 0, 00:12:52.245 "state": "configuring", 00:12:52.245 "raid_level": "raid1", 00:12:52.245 "superblock": true, 00:12:52.245 "num_base_bdevs": 3, 00:12:52.245 "num_base_bdevs_discovered": 1, 00:12:52.245 "num_base_bdevs_operational": 3, 00:12:52.245 "base_bdevs_list": [ 00:12:52.245 { 00:12:52.245 "name": "pt1", 00:12:52.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.245 "is_configured": true, 00:12:52.245 "data_offset": 2048, 00:12:52.245 "data_size": 63488 00:12:52.245 }, 00:12:52.245 { 00:12:52.245 "name": null, 00:12:52.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.245 "is_configured": false, 00:12:52.245 "data_offset": 2048, 00:12:52.245 "data_size": 63488 00:12:52.245 }, 00:12:52.245 { 00:12:52.245 "name": null, 00:12:52.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.245 "is_configured": false, 00:12:52.245 "data_offset": 2048, 00:12:52.245 "data_size": 63488 00:12:52.245 } 00:12:52.245 ] 00:12:52.245 }' 00:12:52.245 15:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.245 15:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.512 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:52.512 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:52.774 [2024-07-12 15:01:18.348408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:52.774 [2024-07-12 15:01:18.348498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.774 [2024-07-12 15:01:18.348512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56835680 00:12:52.774 [2024-07-12 15:01:18.348522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.774 [2024-07-12 15:01:18.348678] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:52.774 [2024-07-12 15:01:18.348690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:52.774 [2024-07-12 15:01:18.348717] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:52.774 [2024-07-12 15:01:18.348727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.774 pt2 00:12:52.774 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:52.774 [2024-07-12 15:01:18.588423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.033 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.291 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.291 "name": "raid_bdev1", 00:12:53.291 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:53.291 "strip_size_kb": 0, 00:12:53.291 "state": "configuring", 00:12:53.291 "raid_level": "raid1", 00:12:53.291 "superblock": true, 00:12:53.291 "num_base_bdevs": 3, 00:12:53.291 "num_base_bdevs_discovered": 1, 00:12:53.291 "num_base_bdevs_operational": 3, 00:12:53.291 "base_bdevs_list": [ 00:12:53.291 { 00:12:53.291 "name": "pt1", 00:12:53.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.291 "is_configured": true, 00:12:53.291 "data_offset": 2048, 00:12:53.291 "data_size": 63488 00:12:53.291 }, 00:12:53.291 { 00:12:53.291 "name": null, 00:12:53.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.291 "is_configured": false, 00:12:53.291 "data_offset": 2048, 00:12:53.291 "data_size": 63488 00:12:53.291 }, 00:12:53.291 { 00:12:53.291 "name": null, 00:12:53.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.291 "is_configured": false, 00:12:53.291 "data_offset": 2048, 00:12:53.291 "data_size": 63488 00:12:53.291 } 00:12:53.291 ] 00:12:53.291 }' 00:12:53.291 15:01:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.291 15:01:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.550 15:01:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:53.550 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:53.550 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.809 [2024-07-12 15:01:19.396561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.809 [2024-07-12 15:01:19.396633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.809 [2024-07-12 15:01:19.396651] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56835680 00:12:53.809 [2024-07-12 15:01:19.396660] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.809 [2024-07-12 15:01:19.396851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.809 [2024-07-12 15:01:19.396863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.809 [2024-07-12 15:01:19.396890] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:53.809 [2024-07-12 15:01:19.396899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.809 pt2 00:12:53.809 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:53.809 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:53.809 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:54.069 [2024-07-12 15:01:19.640594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:54.069 [2024-07-12 15:01:19.640710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.069 [2024-07-12 15:01:19.640722] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56835400 00:12:54.069 [2024-07-12 15:01:19.640730] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.069 [2024-07-12 15:01:19.640880] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.069 [2024-07-12 15:01:19.640897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:54.069 [2024-07-12 15:01:19.640923] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:54.069 [2024-07-12 15:01:19.640932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:54.069 [2024-07-12 15:01:19.640966] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f1d56834780 00:12:54.069 [2024-07-12 15:01:19.640971] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.069 [2024-07-12 15:01:19.640993] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f1d56897e20 00:12:54.069 [2024-07-12 15:01:19.641061] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f1d56834780 00:12:54.069 [2024-07-12 15:01:19.641065] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f1d56834780 00:12:54.069 [2024-07-12 15:01:19.641086] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:54.069 pt3 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.069 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.329 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.329 "name": "raid_bdev1", 00:12:54.329 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:54.329 "strip_size_kb": 0, 00:12:54.329 "state": "online", 00:12:54.329 "raid_level": "raid1", 00:12:54.329 "superblock": true, 00:12:54.329 "num_base_bdevs": 3, 00:12:54.329 "num_base_bdevs_discovered": 3, 00:12:54.329 "num_base_bdevs_operational": 3, 00:12:54.329 "base_bdevs_list": [ 00:12:54.329 { 00:12:54.329 "name": "pt1", 00:12:54.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.329 "is_configured": true, 00:12:54.329 "data_offset": 2048, 00:12:54.329 "data_size": 63488 00:12:54.329 }, 00:12:54.329 { 00:12:54.329 "name": "pt2", 00:12:54.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.329 "is_configured": true, 00:12:54.329 "data_offset": 2048, 00:12:54.329 "data_size": 63488 00:12:54.329 }, 00:12:54.329 { 00:12:54.329 "name": "pt3", 00:12:54.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.329 "is_configured": true, 00:12:54.329 "data_offset": 2048, 00:12:54.329 "data_size": 63488 00:12:54.329 } 00:12:54.329 ] 00:12:54.329 }' 00:12:54.329 15:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.329 15:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:54.595 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:54.854 [2024-07-12 15:01:20.548772] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:54.854 "name": "raid_bdev1", 00:12:54.854 "aliases": [ 00:12:54.854 "94a318a9-405f-11ef-b2a4-e9dca065e82e" 00:12:54.854 ], 00:12:54.854 "product_name": "Raid Volume", 00:12:54.854 "block_size": 512, 00:12:54.854 "num_blocks": 63488, 00:12:54.854 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:54.854 "assigned_rate_limits": { 00:12:54.854 "rw_ios_per_sec": 0, 00:12:54.854 "rw_mbytes_per_sec": 0, 00:12:54.854 "r_mbytes_per_sec": 0, 00:12:54.854 "w_mbytes_per_sec": 0 00:12:54.854 }, 00:12:54.854 "claimed": false, 00:12:54.854 "zoned": false, 00:12:54.854 "supported_io_types": { 00:12:54.854 "read": true, 00:12:54.854 "write": true, 00:12:54.854 "unmap": false, 00:12:54.854 "flush": false, 00:12:54.854 "reset": true, 00:12:54.854 "nvme_admin": false, 00:12:54.854 "nvme_io": false, 00:12:54.854 "nvme_io_md": false, 00:12:54.854 "write_zeroes": true, 00:12:54.854 "zcopy": false, 00:12:54.854 "get_zone_info": false, 00:12:54.854 "zone_management": false, 00:12:54.854 "zone_append": false, 00:12:54.854 "compare": false, 00:12:54.854 "compare_and_write": false, 00:12:54.854 "abort": false, 00:12:54.854 "seek_hole": false, 00:12:54.854 "seek_data": false, 00:12:54.854 "copy": false, 00:12:54.854 "nvme_iov_md": false 00:12:54.854 }, 00:12:54.854 "memory_domains": [ 00:12:54.854 { 00:12:54.854 "dma_device_id": "system", 00:12:54.854 "dma_device_type": 1 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.854 "dma_device_type": 2 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "dma_device_id": "system", 00:12:54.854 "dma_device_type": 1 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.854 "dma_device_type": 2 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "dma_device_id": "system", 00:12:54.854 "dma_device_type": 1 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.854 "dma_device_type": 2 00:12:54.854 } 00:12:54.854 ], 00:12:54.854 "driver_specific": { 00:12:54.854 "raid": { 00:12:54.854 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:54.854 "strip_size_kb": 0, 00:12:54.854 "state": "online", 00:12:54.854 "raid_level": "raid1", 00:12:54.854 "superblock": true, 00:12:54.854 "num_base_bdevs": 3, 00:12:54.854 "num_base_bdevs_discovered": 3, 00:12:54.854 "num_base_bdevs_operational": 3, 00:12:54.854 "base_bdevs_list": [ 00:12:54.854 { 00:12:54.854 "name": "pt1", 00:12:54.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.854 "is_configured": true, 00:12:54.854 "data_offset": 2048, 00:12:54.854 "data_size": 63488 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "name": "pt2", 00:12:54.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.854 "is_configured": true, 00:12:54.854 "data_offset": 2048, 00:12:54.854 "data_size": 63488 00:12:54.854 }, 00:12:54.854 { 00:12:54.854 "name": "pt3", 00:12:54.854 "uuid": "00000000-0000-0000-0000-000000000003", 
00:12:54.854 "is_configured": true, 00:12:54.854 "data_offset": 2048, 00:12:54.854 "data_size": 63488 00:12:54.854 } 00:12:54.854 ] 00:12:54.854 } 00:12:54.854 } 00:12:54.854 }' 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:54.854 pt2 00:12:54.854 pt3' 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:54.854 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:55.132 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:55.132 "name": "pt1", 00:12:55.132 "aliases": [ 00:12:55.132 "00000000-0000-0000-0000-000000000001" 00:12:55.132 ], 00:12:55.132 "product_name": "passthru", 00:12:55.132 "block_size": 512, 00:12:55.132 "num_blocks": 65536, 00:12:55.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.132 "assigned_rate_limits": { 00:12:55.132 "rw_ios_per_sec": 0, 00:12:55.132 "rw_mbytes_per_sec": 0, 00:12:55.132 "r_mbytes_per_sec": 0, 00:12:55.132 "w_mbytes_per_sec": 0 00:12:55.132 }, 00:12:55.132 "claimed": true, 00:12:55.132 "claim_type": "exclusive_write", 00:12:55.132 "zoned": false, 00:12:55.132 "supported_io_types": { 00:12:55.132 "read": true, 00:12:55.132 "write": true, 00:12:55.132 "unmap": true, 00:12:55.132 "flush": true, 00:12:55.132 "reset": true, 00:12:55.132 "nvme_admin": false, 00:12:55.132 "nvme_io": false, 00:12:55.132 "nvme_io_md": false, 00:12:55.132 "write_zeroes": true, 00:12:55.132 "zcopy": true, 00:12:55.132 "get_zone_info": false, 00:12:55.132 "zone_management": false, 00:12:55.132 "zone_append": false, 00:12:55.132 "compare": false, 00:12:55.132 "compare_and_write": false, 00:12:55.132 "abort": true, 00:12:55.133 "seek_hole": false, 00:12:55.133 "seek_data": false, 00:12:55.133 "copy": true, 00:12:55.133 "nvme_iov_md": false 00:12:55.133 }, 00:12:55.133 "memory_domains": [ 00:12:55.133 { 00:12:55.133 "dma_device_id": "system", 00:12:55.133 "dma_device_type": 1 00:12:55.133 }, 00:12:55.133 { 00:12:55.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.133 "dma_device_type": 2 00:12:55.133 } 00:12:55.133 ], 00:12:55.133 "driver_specific": { 00:12:55.133 "passthru": { 00:12:55.133 "name": "pt1", 00:12:55.133 "base_bdev_name": "malloc1" 00:12:55.133 } 00:12:55.133 } 00:12:55.133 }' 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:55.133 15:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:55.398 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:55.398 "name": "pt2", 00:12:55.398 "aliases": [ 00:12:55.398 "00000000-0000-0000-0000-000000000002" 00:12:55.398 ], 00:12:55.399 "product_name": "passthru", 00:12:55.399 "block_size": 512, 00:12:55.399 "num_blocks": 65536, 00:12:55.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.399 "assigned_rate_limits": { 00:12:55.399 "rw_ios_per_sec": 0, 00:12:55.399 "rw_mbytes_per_sec": 0, 00:12:55.399 "r_mbytes_per_sec": 0, 00:12:55.399 "w_mbytes_per_sec": 0 00:12:55.399 }, 00:12:55.399 "claimed": true, 00:12:55.399 "claim_type": "exclusive_write", 00:12:55.399 "zoned": false, 00:12:55.399 "supported_io_types": { 00:12:55.399 "read": true, 00:12:55.399 "write": true, 00:12:55.399 "unmap": true, 00:12:55.399 "flush": true, 00:12:55.399 "reset": true, 00:12:55.399 "nvme_admin": false, 00:12:55.399 "nvme_io": false, 00:12:55.399 "nvme_io_md": false, 00:12:55.399 "write_zeroes": true, 00:12:55.399 "zcopy": true, 00:12:55.399 "get_zone_info": false, 00:12:55.399 "zone_management": false, 00:12:55.399 "zone_append": false, 00:12:55.399 "compare": false, 00:12:55.399 "compare_and_write": false, 00:12:55.399 "abort": true, 00:12:55.399 "seek_hole": false, 00:12:55.399 "seek_data": false, 00:12:55.399 "copy": true, 00:12:55.399 "nvme_iov_md": false 00:12:55.399 }, 00:12:55.399 "memory_domains": [ 00:12:55.399 { 00:12:55.399 "dma_device_id": "system", 00:12:55.399 "dma_device_type": 1 00:12:55.399 }, 00:12:55.399 { 00:12:55.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.399 "dma_device_type": 2 00:12:55.399 } 00:12:55.399 ], 00:12:55.399 "driver_specific": { 00:12:55.399 "passthru": { 00:12:55.399 "name": "pt2", 00:12:55.399 "base_bdev_name": "malloc2" 00:12:55.399 } 00:12:55.399 } 00:12:55.399 }' 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.399 
15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:55.399 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:55.658 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:55.658 "name": "pt3", 00:12:55.658 "aliases": [ 00:12:55.658 "00000000-0000-0000-0000-000000000003" 00:12:55.658 ], 00:12:55.658 "product_name": "passthru", 00:12:55.658 "block_size": 512, 00:12:55.658 "num_blocks": 65536, 00:12:55.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.658 "assigned_rate_limits": { 00:12:55.658 "rw_ios_per_sec": 0, 00:12:55.658 "rw_mbytes_per_sec": 0, 00:12:55.658 "r_mbytes_per_sec": 0, 00:12:55.658 "w_mbytes_per_sec": 0 00:12:55.658 }, 00:12:55.658 "claimed": true, 00:12:55.658 "claim_type": "exclusive_write", 00:12:55.658 "zoned": false, 00:12:55.658 "supported_io_types": { 00:12:55.658 "read": true, 00:12:55.658 "write": true, 00:12:55.658 "unmap": true, 00:12:55.658 "flush": true, 00:12:55.658 "reset": true, 00:12:55.658 "nvme_admin": false, 00:12:55.658 "nvme_io": false, 00:12:55.658 "nvme_io_md": false, 00:12:55.658 "write_zeroes": true, 00:12:55.658 "zcopy": true, 00:12:55.658 "get_zone_info": false, 00:12:55.658 "zone_management": false, 00:12:55.658 "zone_append": false, 00:12:55.659 "compare": false, 00:12:55.659 "compare_and_write": false, 00:12:55.659 "abort": true, 00:12:55.659 "seek_hole": false, 00:12:55.659 "seek_data": false, 00:12:55.659 "copy": true, 00:12:55.659 "nvme_iov_md": false 00:12:55.659 }, 00:12:55.659 "memory_domains": [ 00:12:55.659 { 00:12:55.659 "dma_device_id": "system", 00:12:55.659 "dma_device_type": 1 00:12:55.659 }, 00:12:55.659 { 00:12:55.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.659 "dma_device_type": 2 00:12:55.659 } 00:12:55.659 ], 00:12:55.659 "driver_specific": { 00:12:55.659 "passthru": { 00:12:55.659 "name": "pt3", 00:12:55.659 "base_bdev_name": "malloc3" 00:12:55.659 } 00:12:55.659 } 00:12:55.659 }' 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:55.659 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:55.917 [2024-07-12 15:01:21.692865] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.917 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 94a318a9-405f-11ef-b2a4-e9dca065e82e '!=' 94a318a9-405f-11ef-b2a4-e9dca065e82e ']' 00:12:55.917 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:55.917 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:55.917 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:55.917 15:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:56.175 [2024-07-12 15:01:21.988856] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.434 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.692 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.692 "name": "raid_bdev1", 00:12:56.692 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:56.692 "strip_size_kb": 0, 00:12:56.692 "state": "online", 00:12:56.692 "raid_level": "raid1", 00:12:56.692 "superblock": true, 00:12:56.692 "num_base_bdevs": 3, 00:12:56.692 "num_base_bdevs_discovered": 2, 00:12:56.692 "num_base_bdevs_operational": 2, 00:12:56.692 "base_bdevs_list": [ 00:12:56.692 { 00:12:56.692 "name": null, 00:12:56.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.692 "is_configured": false, 00:12:56.692 "data_offset": 2048, 00:12:56.692 "data_size": 63488 00:12:56.692 }, 00:12:56.692 { 00:12:56.692 "name": "pt2", 00:12:56.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.692 "is_configured": true, 00:12:56.692 "data_offset": 2048, 00:12:56.692 "data_size": 63488 00:12:56.692 }, 00:12:56.692 { 
00:12:56.692 "name": "pt3", 00:12:56.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.692 "is_configured": true, 00:12:56.692 "data_offset": 2048, 00:12:56.692 "data_size": 63488 00:12:56.692 } 00:12:56.692 ] 00:12:56.692 }' 00:12:56.692 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.692 15:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.950 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:57.208 [2024-07-12 15:01:22.812906] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.208 [2024-07-12 15:01:22.812939] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.208 [2024-07-12 15:01:22.812969] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.208 [2024-07-12 15:01:22.812987] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.209 [2024-07-12 15:01:22.812992] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56834780 name raid_bdev1, state offline 00:12:57.209 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.209 15:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:57.467 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:57.467 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:57.467 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:57.467 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:57.467 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:57.726 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:57.726 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:57.726 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:57.992 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:57.992 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:57.992 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:57.992 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:57.992 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.253 [2024-07-12 15:01:23.900998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.253 [2024-07-12 15:01:23.901067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.253 [2024-07-12 15:01:23.901080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56835400 00:12:58.253 [2024-07-12 
15:01:23.901088] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.253 [2024-07-12 15:01:23.901997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.253 [2024-07-12 15:01:23.902021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.253 [2024-07-12 15:01:23.902049] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.253 [2024-07-12 15:01:23.902062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.253 pt2 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.253 15:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.509 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.509 "name": "raid_bdev1", 00:12:58.509 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:58.509 "strip_size_kb": 0, 00:12:58.509 "state": "configuring", 00:12:58.509 "raid_level": "raid1", 00:12:58.509 "superblock": true, 00:12:58.509 "num_base_bdevs": 3, 00:12:58.509 "num_base_bdevs_discovered": 1, 00:12:58.509 "num_base_bdevs_operational": 2, 00:12:58.509 "base_bdevs_list": [ 00:12:58.509 { 00:12:58.509 "name": null, 00:12:58.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.509 "is_configured": false, 00:12:58.509 "data_offset": 2048, 00:12:58.509 "data_size": 63488 00:12:58.509 }, 00:12:58.509 { 00:12:58.509 "name": "pt2", 00:12:58.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.509 "is_configured": true, 00:12:58.509 "data_offset": 2048, 00:12:58.509 "data_size": 63488 00:12:58.509 }, 00:12:58.509 { 00:12:58.509 "name": null, 00:12:58.509 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.509 "is_configured": false, 00:12:58.509 "data_offset": 2048, 00:12:58.509 "data_size": 63488 00:12:58.509 } 00:12:58.509 ] 00:12:58.509 }' 00:12:58.509 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.509 15:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.767 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:58.767 15:01:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:58.767 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:58.767 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.025 [2024-07-12 15:01:24.717130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.025 [2024-07-12 15:01:24.717203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.025 [2024-07-12 15:01:24.717224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834780 00:12:59.025 [2024-07-12 15:01:24.717233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.025 [2024-07-12 15:01:24.717382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.025 [2024-07-12 15:01:24.717394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.025 [2024-07-12 15:01:24.717434] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:59.025 [2024-07-12 15:01:24.717445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.025 [2024-07-12 15:01:24.717478] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f1d56835180 00:12:59.025 [2024-07-12 15:01:24.717483] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.025 [2024-07-12 15:01:24.717504] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f1d56897e20 00:12:59.025 [2024-07-12 15:01:24.717554] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f1d56835180 00:12:59.025 [2024-07-12 15:01:24.717558] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f1d56835180 00:12:59.025 [2024-07-12 15:01:24.717580] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.025 pt3 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.025 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:59.282 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.282 "name": "raid_bdev1", 00:12:59.282 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:12:59.282 "strip_size_kb": 0, 00:12:59.282 "state": "online", 00:12:59.282 "raid_level": "raid1", 00:12:59.282 "superblock": true, 00:12:59.282 "num_base_bdevs": 3, 00:12:59.282 "num_base_bdevs_discovered": 2, 00:12:59.282 "num_base_bdevs_operational": 2, 00:12:59.282 "base_bdevs_list": [ 00:12:59.282 { 00:12:59.282 "name": null, 00:12:59.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.282 "is_configured": false, 00:12:59.282 "data_offset": 2048, 00:12:59.282 "data_size": 63488 00:12:59.282 }, 00:12:59.282 { 00:12:59.282 "name": "pt2", 00:12:59.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.282 "is_configured": true, 00:12:59.282 "data_offset": 2048, 00:12:59.282 "data_size": 63488 00:12:59.282 }, 00:12:59.282 { 00:12:59.282 "name": "pt3", 00:12:59.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.282 "is_configured": true, 00:12:59.282 "data_offset": 2048, 00:12:59.282 "data_size": 63488 00:12:59.282 } 00:12:59.282 ] 00:12:59.282 }' 00:12:59.282 15:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.282 15:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.541 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:59.798 [2024-07-12 15:01:25.489153] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.798 [2024-07-12 15:01:25.489185] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.798 [2024-07-12 15:01:25.489215] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.798 [2024-07-12 15:01:25.489232] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.798 [2024-07-12 15:01:25.489237] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56835180 name raid_bdev1, state offline 00:12:59.798 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.798 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:13:00.055 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:13:00.055 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:13:00.055 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:13:00.055 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:13:00.055 15:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:00.371 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.628 [2024-07-12 15:01:26.293221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.628 [2024-07-12 15:01:26.293293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.628 
[2024-07-12 15:01:26.293307] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834780 00:13:00.628 [2024-07-12 15:01:26.293315] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.628 [2024-07-12 15:01:26.294139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.628 [2024-07-12 15:01:26.294180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.628 [2024-07-12 15:01:26.294211] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.628 [2024-07-12 15:01:26.294224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.628 [2024-07-12 15:01:26.294259] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:00.628 [2024-07-12 15:01:26.294263] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.628 [2024-07-12 15:01:26.294269] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56835180 name raid_bdev1, state configuring 00:13:00.628 [2024-07-12 15:01:26.294277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.628 pt1 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.628 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.886 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.886 "name": "raid_bdev1", 00:13:00.886 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:13:00.886 "strip_size_kb": 0, 00:13:00.886 "state": "configuring", 00:13:00.886 "raid_level": "raid1", 00:13:00.886 "superblock": true, 00:13:00.886 "num_base_bdevs": 3, 00:13:00.886 "num_base_bdevs_discovered": 1, 00:13:00.886 "num_base_bdevs_operational": 2, 00:13:00.886 "base_bdevs_list": [ 00:13:00.886 { 00:13:00.886 "name": null, 00:13:00.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.886 "is_configured": false, 00:13:00.886 "data_offset": 2048, 00:13:00.886 "data_size": 63488 00:13:00.886 }, 
00:13:00.886 { 00:13:00.886 "name": "pt2", 00:13:00.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.886 "is_configured": true, 00:13:00.886 "data_offset": 2048, 00:13:00.886 "data_size": 63488 00:13:00.886 }, 00:13:00.886 { 00:13:00.886 "name": null, 00:13:00.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.886 "is_configured": false, 00:13:00.886 "data_offset": 2048, 00:13:00.886 "data_size": 63488 00:13:00.886 } 00:13:00.886 ] 00:13:00.886 }' 00:13:00.886 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.886 15:01:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.143 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:13:01.143 15:01:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.400 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:13:01.401 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.658 [2024-07-12 15:01:27.401329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.658 [2024-07-12 15:01:27.401408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.658 [2024-07-12 15:01:27.401424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f1d56834c80 00:13:01.658 [2024-07-12 15:01:27.401433] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.658 [2024-07-12 15:01:27.401582] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.658 [2024-07-12 15:01:27.401594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.658 [2024-07-12 15:01:27.401622] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.658 [2024-07-12 15:01:27.401632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.658 [2024-07-12 15:01:27.401665] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f1d56835180 00:13:01.658 [2024-07-12 15:01:27.401669] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.658 [2024-07-12 15:01:27.401690] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f1d56897e20 00:13:01.658 [2024-07-12 15:01:27.401754] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f1d56835180 00:13:01.658 [2024-07-12 15:01:27.401758] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f1d56835180 00:13:01.658 [2024-07-12 15:01:27.401795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.658 pt3 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.658 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.917 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.917 "name": "raid_bdev1", 00:13:01.917 "uuid": "94a318a9-405f-11ef-b2a4-e9dca065e82e", 00:13:01.917 "strip_size_kb": 0, 00:13:01.917 "state": "online", 00:13:01.917 "raid_level": "raid1", 00:13:01.917 "superblock": true, 00:13:01.917 "num_base_bdevs": 3, 00:13:01.917 "num_base_bdevs_discovered": 2, 00:13:01.917 "num_base_bdevs_operational": 2, 00:13:01.917 "base_bdevs_list": [ 00:13:01.917 { 00:13:01.917 "name": null, 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.917 "is_configured": false, 00:13:01.917 "data_offset": 2048, 00:13:01.917 "data_size": 63488 00:13:01.917 }, 00:13:01.917 { 00:13:01.917 "name": "pt2", 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.917 "is_configured": true, 00:13:01.917 "data_offset": 2048, 00:13:01.917 "data_size": 63488 00:13:01.917 }, 00:13:01.917 { 00:13:01.917 "name": "pt3", 00:13:01.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.917 "is_configured": true, 00:13:01.917 "data_offset": 2048, 00:13:01.917 "data_size": 63488 00:13:01.917 } 00:13:01.917 ] 00:13:01.917 }' 00:13:01.917 15:01:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.917 15:01:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.484 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:02.484 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:13:02.742 [2024-07-12 15:01:28.529602] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 94a318a9-405f-11ef-b2a4-e9dca065e82e '!=' 94a318a9-405f-11ef-b2a4-e9dca065e82e ']' 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57568 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57568 ']' 00:13:02.742 15:01:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57568 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57568 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:02.742 killing process with pid 57568 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57568' 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57568 00:13:02.742 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57568 00:13:02.742 [2024-07-12 15:01:28.558848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.742 [2024-07-12 15:01:28.558884] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.742 [2024-07-12 15:01:28.558901] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.742 [2024-07-12 15:01:28.558906] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f1d56835180 name raid_bdev1, state offline 00:13:03.000 [2024-07-12 15:01:28.584745] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.257 15:01:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:03.257 00:13:03.257 real 0m18.578s 00:13:03.257 user 0m33.632s 00:13:03.257 sys 0m2.623s 00:13:03.257 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.257 ************************************ 00:13:03.257 END TEST raid_superblock_test 00:13:03.257 ************************************ 00:13:03.257 15:01:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.257 15:01:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:03.257 15:01:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:03.257 15:01:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:03.257 15:01:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.257 15:01:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.257 ************************************ 00:13:03.257 START TEST raid_read_error_test 00:13:03.257 ************************************ 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:03.257 15:01:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:03.257 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.x3BTVjZgMV 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58118 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58118 /var/tmp/spdk-raid.sock 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 58118 ']' 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.258 15:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.258 [2024-07-12 15:01:28.888897] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
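
At this point the read-error run has generated a scratch log with mktemp and launched bdevperf (pid 58118) against the raid RPC socket; the SPDK/DPDK initialization it prints continues below. A hedged sketch of that launch, with the log directory and every flag copied from the command line recorded above (the output redirection, backgrounding, and pid capture are illustrative assumptions, not quotes from the script):

# Per-run output log for bdevperf (prefix /raidtest as used in this run).
bdevperf_log=$(mktemp -p /raidtest)

# Start bdevperf against the raid socket; -z defers the randrw workload on
# raid_bdev1 until the perform_tests RPC that the test issues later in this trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
raid_pid=$!
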
00:13:03.258 [2024-07-12 15:01:28.889118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:03.830 EAL: TSC is not safe to use in SMP mode 00:13:03.830 EAL: TSC is not invariant 00:13:03.830 [2024-07-12 15:01:29.432277] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.830 [2024-07-12 15:01:29.537978] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:03.830 [2024-07-12 15:01:29.540099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.830 [2024-07-12 15:01:29.540873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.830 [2024-07-12 15:01:29.540889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.458 15:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.458 15:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:04.458 15:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:04.458 15:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.458 BaseBdev1_malloc 00:13:04.458 15:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:04.715 true 00:13:04.715 15:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.972 [2024-07-12 15:01:30.701029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.972 [2024-07-12 15:01:30.701120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.972 [2024-07-12 15:01:30.701160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae854034780 00:13:04.972 [2024-07-12 15:01:30.701170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.972 [2024-07-12 15:01:30.702051] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.972 [2024-07-12 15:01:30.702078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.972 BaseBdev1 00:13:04.972 15:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:04.972 15:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.229 BaseBdev2_malloc 00:13:05.230 15:01:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:05.487 true 00:13:05.487 15:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.745 [2024-07-12 15:01:31.533100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.745 [2024-07-12 15:01:31.533189] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.745 [2024-07-12 15:01:31.533230] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae854034c80 00:13:05.745 [2024-07-12 15:01:31.533239] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.745 [2024-07-12 15:01:31.534092] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.745 [2024-07-12 15:01:31.534116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.745 BaseBdev2 00:13:05.745 15:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:05.745 15:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:06.002 BaseBdev3_malloc 00:13:06.002 15:01:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:06.259 true 00:13:06.516 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:06.517 [2024-07-12 15:01:32.321130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:06.517 [2024-07-12 15:01:32.321194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.517 [2024-07-12 15:01:32.321228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae854035180 00:13:06.517 [2024-07-12 15:01:32.321237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.517 [2024-07-12 15:01:32.322076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.517 [2024-07-12 15:01:32.322101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:06.517 BaseBdev3 00:13:06.774 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:06.774 [2024-07-12 15:01:32.597197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.032 [2024-07-12 15:01:32.598105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.032 [2024-07-12 15:01:32.598154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.032 [2024-07-12 15:01:32.598230] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ae854035400 00:13:07.032 [2024-07-12 15:01:32.598236] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.032 [2024-07-12 15:01:32.598273] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ae8540a0e20 00:13:07.032 [2024-07-12 15:01:32.598415] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ae854035400 00:13:07.032 [2024-07-12 15:01:32.598421] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ae854035400 00:13:07.032 [2024-07-12 15:01:32.598463] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.032 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.290 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.290 "name": "raid_bdev1", 00:13:07.290 "uuid": "a0398d6e-405f-11ef-b2a4-e9dca065e82e", 00:13:07.290 "strip_size_kb": 0, 00:13:07.290 "state": "online", 00:13:07.290 "raid_level": "raid1", 00:13:07.290 "superblock": true, 00:13:07.290 "num_base_bdevs": 3, 00:13:07.290 "num_base_bdevs_discovered": 3, 00:13:07.290 "num_base_bdevs_operational": 3, 00:13:07.290 "base_bdevs_list": [ 00:13:07.290 { 00:13:07.290 "name": "BaseBdev1", 00:13:07.290 "uuid": "3e94309a-dbba-2e53-a90d-2b89241b008a", 00:13:07.290 "is_configured": true, 00:13:07.290 "data_offset": 2048, 00:13:07.290 "data_size": 63488 00:13:07.290 }, 00:13:07.290 { 00:13:07.290 "name": "BaseBdev2", 00:13:07.290 "uuid": "550d88f7-9e0b-e85f-a187-7b486661e344", 00:13:07.290 "is_configured": true, 00:13:07.290 "data_offset": 2048, 00:13:07.290 "data_size": 63488 00:13:07.290 }, 00:13:07.290 { 00:13:07.290 "name": "BaseBdev3", 00:13:07.290 "uuid": "6535ba17-63b7-d55b-9e4a-98e0ce192d85", 00:13:07.290 "is_configured": true, 00:13:07.290 "data_offset": 2048, 00:13:07.290 "data_size": 63488 00:13:07.290 } 00:13:07.290 ] 00:13:07.290 }' 00:13:07.290 15:01:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.290 15:01:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.548 15:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:07.548 15:01:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:07.548 [2024-07-12 15:01:33.365532] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ae8540a0ec0 00:13:08.484 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.050 "name": "raid_bdev1", 00:13:09.050 "uuid": "a0398d6e-405f-11ef-b2a4-e9dca065e82e", 00:13:09.050 "strip_size_kb": 0, 00:13:09.050 "state": "online", 00:13:09.050 "raid_level": "raid1", 00:13:09.050 "superblock": true, 00:13:09.050 "num_base_bdevs": 3, 00:13:09.050 "num_base_bdevs_discovered": 3, 00:13:09.050 "num_base_bdevs_operational": 3, 00:13:09.050 "base_bdevs_list": [ 00:13:09.050 { 00:13:09.050 "name": "BaseBdev1", 00:13:09.050 "uuid": "3e94309a-dbba-2e53-a90d-2b89241b008a", 00:13:09.050 "is_configured": true, 00:13:09.050 "data_offset": 2048, 00:13:09.050 "data_size": 63488 00:13:09.050 }, 00:13:09.050 { 00:13:09.050 "name": "BaseBdev2", 00:13:09.050 "uuid": "550d88f7-9e0b-e85f-a187-7b486661e344", 00:13:09.050 "is_configured": true, 00:13:09.050 "data_offset": 2048, 00:13:09.050 "data_size": 63488 00:13:09.050 }, 00:13:09.050 { 00:13:09.050 "name": "BaseBdev3", 00:13:09.050 "uuid": "6535ba17-63b7-d55b-9e4a-98e0ce192d85", 00:13:09.050 "is_configured": true, 00:13:09.050 "data_offset": 2048, 00:13:09.050 "data_size": 63488 00:13:09.050 } 00:13:09.050 ] 00:13:09.050 }' 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.050 15:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.340 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:09.598 [2024-07-12 15:01:35.374332] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.598 [2024-07-12 15:01:35.374366] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.598 [2024-07-12 15:01:35.374904] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.598 [2024-07-12 15:01:35.374922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.598 [2024-07-12 15:01:35.374954] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.598 [2024-07-12 15:01:35.374978] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ae854035400 name raid_bdev1, state offline 00:13:09.598 0 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58118 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 58118 ']' 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 58118 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58118 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58118' 00:13:09.598 killing process with pid 58118 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 58118 00:13:09.598 [2024-07-12 15:01:35.404193] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.598 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 58118 00:13:09.856 [2024-07-12 15:01:35.430408] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.x3BTVjZgMV 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:10.114 00:13:10.114 real 0m6.823s 00:13:10.114 user 0m10.727s 00:13:10.114 sys 0m1.121s 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.114 15:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.114 ************************************ 00:13:10.114 END TEST raid_read_error_test 00:13:10.114 ************************************ 00:13:10.114 15:01:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:10.114 15:01:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:10.114 15:01:35 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:10.114 15:01:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.114 15:01:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.114 ************************************ 00:13:10.114 START TEST raid_write_error_test 00:13:10.114 ************************************ 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Gd4sgDG2Mz 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58253 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58253 /var/tmp/spdk-raid.sock 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58253 ']' 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.114 15:01:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.114 [2024-07-12 15:01:35.764454] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:13:10.114 [2024-07-12 15:01:35.764701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:10.680 EAL: TSC is not safe to use in SMP mode 00:13:10.680 EAL: TSC is not invariant 00:13:10.680 [2024-07-12 15:01:36.283256] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.680 [2024-07-12 15:01:36.374407] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:10.680 [2024-07-12 15:01:36.376540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.680 [2024-07-12 15:01:36.377327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.680 [2024-07-12 15:01:36.377342] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.246 15:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.246 15:01:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:11.246 15:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:11.246 15:01:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.246 BaseBdev1_malloc 00:13:11.246 15:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:11.503 true 00:13:11.503 15:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.760 [2024-07-12 15:01:37.553687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.760 [2024-07-12 15:01:37.553791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.760 [2024-07-12 15:01:37.553833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3dc376834780 00:13:11.760 [2024-07-12 15:01:37.553844] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.760 [2024-07-12 15:01:37.554784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.760 [2024-07-12 15:01:37.554817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:13:11.760 BaseBdev1 00:13:11.760 15:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:11.760 15:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:12.018 BaseBdev2_malloc 00:13:12.018 15:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:12.276 true 00:13:12.534 15:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:12.792 [2024-07-12 15:01:38.373857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:12.792 [2024-07-12 15:01:38.373998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.792 [2024-07-12 15:01:38.374087] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3dc376834c80 00:13:12.792 [2024-07-12 15:01:38.374105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.792 [2024-07-12 15:01:38.375199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.792 [2024-07-12 15:01:38.375258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:12.792 BaseBdev2 00:13:12.792 15:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:12.792 15:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:13.049 BaseBdev3_malloc 00:13:13.049 15:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:13.307 true 00:13:13.307 15:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:13.565 [2024-07-12 15:01:39.141995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:13.565 [2024-07-12 15:01:39.142059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.565 [2024-07-12 15:01:39.142089] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3dc376835180 00:13:13.565 [2024-07-12 15:01:39.142097] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.565 [2024-07-12 15:01:39.143006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.565 [2024-07-12 15:01:39.143030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:13.565 BaseBdev3 00:13:13.565 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:13.823 [2024-07-12 15:01:39.410043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.823 [2024-07-12 15:01:39.410899] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.823 [2024-07-12 15:01:39.410925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.823 [2024-07-12 15:01:39.411010] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3dc376835400 00:13:13.823 [2024-07-12 15:01:39.411017] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.823 [2024-07-12 15:01:39.411055] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3dc3768a0e20 00:13:13.823 [2024-07-12 15:01:39.411161] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3dc376835400 00:13:13.823 [2024-07-12 15:01:39.411166] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3dc376835400 00:13:13.823 [2024-07-12 15:01:39.411211] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.823 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.081 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.081 "name": "raid_bdev1", 00:13:14.081 "uuid": "a4491bfd-405f-11ef-b2a4-e9dca065e82e", 00:13:14.081 "strip_size_kb": 0, 00:13:14.081 "state": "online", 00:13:14.081 "raid_level": "raid1", 00:13:14.081 "superblock": true, 00:13:14.081 "num_base_bdevs": 3, 00:13:14.081 "num_base_bdevs_discovered": 3, 00:13:14.081 "num_base_bdevs_operational": 3, 00:13:14.081 "base_bdevs_list": [ 00:13:14.081 { 00:13:14.081 "name": "BaseBdev1", 00:13:14.081 "uuid": "e54081d1-9f2a-7c5c-b4c7-e48de5b7527c", 00:13:14.081 "is_configured": true, 00:13:14.081 "data_offset": 2048, 00:13:14.081 "data_size": 63488 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "name": "BaseBdev2", 00:13:14.081 "uuid": "e2be42c7-2a7b-0253-ae03-95419506e5b0", 00:13:14.081 "is_configured": true, 00:13:14.081 "data_offset": 2048, 00:13:14.081 "data_size": 63488 00:13:14.081 }, 00:13:14.081 { 00:13:14.081 "name": "BaseBdev3", 00:13:14.081 "uuid": "ed32cee8-ba85-f95d-a60c-f04d06df9e39", 00:13:14.081 "is_configured": true, 00:13:14.081 "data_offset": 2048, 00:13:14.081 
"data_size": 63488 00:13:14.081 } 00:13:14.081 ] 00:13:14.081 }' 00:13:14.081 15:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.081 15:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.339 15:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:14.339 15:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:14.339 [2024-07-12 15:01:40.158455] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3dc3768a0ec0 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:15.715 [2024-07-12 15:01:41.410240] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:15.715 [2024-07-12 15:01:41.410341] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.715 [2024-07-12 15:01:41.410476] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x3dc3768a0ec0 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.715 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.973 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.973 "name": "raid_bdev1", 00:13:15.973 "uuid": "a4491bfd-405f-11ef-b2a4-e9dca065e82e", 00:13:15.973 "strip_size_kb": 0, 00:13:15.973 "state": "online", 00:13:15.973 "raid_level": "raid1", 00:13:15.973 "superblock": true, 00:13:15.973 "num_base_bdevs": 3, 00:13:15.973 
"num_base_bdevs_discovered": 2, 00:13:15.973 "num_base_bdevs_operational": 2, 00:13:15.973 "base_bdevs_list": [ 00:13:15.973 { 00:13:15.973 "name": null, 00:13:15.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.973 "is_configured": false, 00:13:15.973 "data_offset": 2048, 00:13:15.973 "data_size": 63488 00:13:15.973 }, 00:13:15.973 { 00:13:15.973 "name": "BaseBdev2", 00:13:15.973 "uuid": "e2be42c7-2a7b-0253-ae03-95419506e5b0", 00:13:15.973 "is_configured": true, 00:13:15.973 "data_offset": 2048, 00:13:15.973 "data_size": 63488 00:13:15.973 }, 00:13:15.973 { 00:13:15.973 "name": "BaseBdev3", 00:13:15.973 "uuid": "ed32cee8-ba85-f95d-a60c-f04d06df9e39", 00:13:15.973 "is_configured": true, 00:13:15.973 "data_offset": 2048, 00:13:15.973 "data_size": 63488 00:13:15.973 } 00:13:15.973 ] 00:13:15.973 }' 00:13:15.973 15:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.973 15:01:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.579 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:16.579 [2024-07-12 15:01:42.394081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.579 [2024-07-12 15:01:42.394116] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.579 [2024-07-12 15:01:42.394583] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.579 [2024-07-12 15:01:42.394594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.579 [2024-07-12 15:01:42.394610] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.579 [2024-07-12 15:01:42.394615] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3dc376835400 name raid_bdev1, state offline 00:13:16.579 0 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58253 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58253 ']' 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58253 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58253 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:16.838 killing process with pid 58253 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58253' 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58253 00:13:16.838 [2024-07-12 15:01:42.422919] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.838 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58253 00:13:16.838 [2024-07-12 15:01:42.449459] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Gd4sgDG2Mz 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:17.097 00:13:17.097 real 0m6.972s 00:13:17.097 user 0m10.998s 00:13:17.097 sys 0m1.100s 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.097 15:01:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.097 ************************************ 00:13:17.097 END TEST raid_write_error_test 00:13:17.097 ************************************ 00:13:17.097 15:01:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:17.097 15:01:42 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:17.097 15:01:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:17.097 15:01:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:17.097 15:01:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:17.097 15:01:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.097 15:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.097 ************************************ 00:13:17.097 START TEST raid_state_function_test 00:13:17.097 ************************************ 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev3 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:17.097 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58382 00:13:17.098 Process raid pid: 58382 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58382' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58382 /var/tmp/spdk-raid.sock 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58382 ']' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.098 15:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.098 [2024-07-12 15:01:42.776214] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
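
Here raid_state_function_test has started a bare bdev_svc app (pid 58382) that it drives purely over RPC; the trace below declares a raid0 array before any of its four base bdevs exist, so Existed_Raid has to sit in the configuring state. A minimal sketch of that setup, reusing the binary path, socket, strip size, and bdev names from this run (the backgrounding is an illustrative assumption):

# Generic bdev service that owns the RPC socket for this test.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Declare a 4-member raid0 with 64 KiB strips; none of the members exist
# yet, so the raid bdev is created in the "configuring" state.
$RPC bdev_raid_create -z 64 -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
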
00:13:17.098 [2024-07-12 15:01:42.776514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:17.664 EAL: TSC is not safe to use in SMP mode 00:13:17.664 EAL: TSC is not invariant 00:13:17.664 [2024-07-12 15:01:43.333745] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.664 [2024-07-12 15:01:43.421148] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:17.664 [2024-07-12 15:01:43.423311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.664 [2024-07-12 15:01:43.424095] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.664 [2024-07-12 15:01:43.424109] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.230 15:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.230 15:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:18.230 15:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:18.230 [2024-07-12 15:01:44.024858] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.230 [2024-07-12 15:01:44.024943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.230 [2024-07-12 15:01:44.024954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.230 [2024-07-12 15:01:44.024970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.230 [2024-07-12 15:01:44.024977] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.230 [2024-07-12 15:01:44.024991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.230 [2024-07-12 15:01:44.024997] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:18.230 [2024-07-12 15:01:44.025011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.230 15:01:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.230 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.795 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.795 "name": "Existed_Raid", 00:13:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.795 "strip_size_kb": 64, 00:13:18.795 "state": "configuring", 00:13:18.795 "raid_level": "raid0", 00:13:18.795 "superblock": false, 00:13:18.795 "num_base_bdevs": 4, 00:13:18.795 "num_base_bdevs_discovered": 0, 00:13:18.795 "num_base_bdevs_operational": 4, 00:13:18.795 "base_bdevs_list": [ 00:13:18.795 { 00:13:18.795 "name": "BaseBdev1", 00:13:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.795 "is_configured": false, 00:13:18.795 "data_offset": 0, 00:13:18.795 "data_size": 0 00:13:18.795 }, 00:13:18.795 { 00:13:18.795 "name": "BaseBdev2", 00:13:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.795 "is_configured": false, 00:13:18.795 "data_offset": 0, 00:13:18.795 "data_size": 0 00:13:18.796 }, 00:13:18.796 { 00:13:18.796 "name": "BaseBdev3", 00:13:18.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.796 "is_configured": false, 00:13:18.796 "data_offset": 0, 00:13:18.796 "data_size": 0 00:13:18.796 }, 00:13:18.796 { 00:13:18.796 "name": "BaseBdev4", 00:13:18.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.796 "is_configured": false, 00:13:18.796 "data_offset": 0, 00:13:18.796 "data_size": 0 00:13:18.796 } 00:13:18.796 ] 00:13:18.796 }' 00:13:18.796 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.796 15:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.052 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:19.052 [2024-07-12 15:01:44.864858] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.052 [2024-07-12 15:01:44.864891] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5cbc1234500 name Existed_Raid, state configuring 00:13:19.310 15:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:19.310 [2024-07-12 15:01:45.116893] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.310 [2024-07-12 15:01:45.116973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.310 [2024-07-12 15:01:45.116981] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.310 [2024-07-12 15:01:45.116995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.310 [2024-07-12 15:01:45.117001] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:19.310 [2024-07-12 15:01:45.117014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:19.310 [2024-07-12 15:01:45.117019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
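(At this point in the trace, Existed_Raid has been created once over base bdevs that do not yet exist, reported as "configuring" with zero base bdevs discovered, then deleted and re-created against the same missing base bdevs. A sketch of the state query used by verify_raid_bdev_state at bdev_raid.sh@126, with the RPC and jq filter exactly as they appear in the log:

    # pull the Existed_Raid entry from the list of all raid bdevs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # prints "configuring" while base bdevs are missing, "online" once all four are claimed
)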
00:13:19.310 [2024-07-12 15:01:45.117032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:19.567 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.825 [2024-07-12 15:01:45.405983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.825 BaseBdev1 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:19.825 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.081 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:20.081 [ 00:13:20.081 { 00:13:20.081 "name": "BaseBdev1", 00:13:20.081 "aliases": [ 00:13:20.081 "a7dbdb57-405f-11ef-b2a4-e9dca065e82e" 00:13:20.081 ], 00:13:20.081 "product_name": "Malloc disk", 00:13:20.081 "block_size": 512, 00:13:20.081 "num_blocks": 65536, 00:13:20.081 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:20.081 "assigned_rate_limits": { 00:13:20.081 "rw_ios_per_sec": 0, 00:13:20.081 "rw_mbytes_per_sec": 0, 00:13:20.081 "r_mbytes_per_sec": 0, 00:13:20.081 "w_mbytes_per_sec": 0 00:13:20.081 }, 00:13:20.081 "claimed": true, 00:13:20.081 "claim_type": "exclusive_write", 00:13:20.082 "zoned": false, 00:13:20.082 "supported_io_types": { 00:13:20.082 "read": true, 00:13:20.082 "write": true, 00:13:20.082 "unmap": true, 00:13:20.082 "flush": true, 00:13:20.082 "reset": true, 00:13:20.082 "nvme_admin": false, 00:13:20.082 "nvme_io": false, 00:13:20.082 "nvme_io_md": false, 00:13:20.082 "write_zeroes": true, 00:13:20.082 "zcopy": true, 00:13:20.082 "get_zone_info": false, 00:13:20.082 "zone_management": false, 00:13:20.082 "zone_append": false, 00:13:20.082 "compare": false, 00:13:20.082 "compare_and_write": false, 00:13:20.082 "abort": true, 00:13:20.082 "seek_hole": false, 00:13:20.082 "seek_data": false, 00:13:20.082 "copy": true, 00:13:20.082 "nvme_iov_md": false 00:13:20.082 }, 00:13:20.082 "memory_domains": [ 00:13:20.082 { 00:13:20.082 "dma_device_id": "system", 00:13:20.082 "dma_device_type": 1 00:13:20.082 }, 00:13:20.082 { 00:13:20.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.082 "dma_device_type": 2 00:13:20.082 } 00:13:20.082 ], 00:13:20.082 "driver_specific": {} 00:13:20.082 } 00:13:20.082 ] 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.339 15:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.597 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.597 "name": "Existed_Raid", 00:13:20.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.597 "strip_size_kb": 64, 00:13:20.597 "state": "configuring", 00:13:20.597 "raid_level": "raid0", 00:13:20.597 "superblock": false, 00:13:20.597 "num_base_bdevs": 4, 00:13:20.597 "num_base_bdevs_discovered": 1, 00:13:20.597 "num_base_bdevs_operational": 4, 00:13:20.597 "base_bdevs_list": [ 00:13:20.597 { 00:13:20.597 "name": "BaseBdev1", 00:13:20.597 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:20.597 "is_configured": true, 00:13:20.597 "data_offset": 0, 00:13:20.597 "data_size": 65536 00:13:20.597 }, 00:13:20.597 { 00:13:20.597 "name": "BaseBdev2", 00:13:20.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.597 "is_configured": false, 00:13:20.597 "data_offset": 0, 00:13:20.597 "data_size": 0 00:13:20.597 }, 00:13:20.597 { 00:13:20.597 "name": "BaseBdev3", 00:13:20.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.597 "is_configured": false, 00:13:20.597 "data_offset": 0, 00:13:20.597 "data_size": 0 00:13:20.597 }, 00:13:20.597 { 00:13:20.597 "name": "BaseBdev4", 00:13:20.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.597 "is_configured": false, 00:13:20.597 "data_offset": 0, 00:13:20.597 "data_size": 0 00:13:20.597 } 00:13:20.597 ] 00:13:20.597 }' 00:13:20.597 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.597 15:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.855 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:21.112 [2024-07-12 15:01:46.701027] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.112 [2024-07-12 15:01:46.701073] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5cbc1234500 name Existed_Raid, state configuring 00:13:21.112 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:13:21.369 [2024-07-12 15:01:46.945094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.369 [2024-07-12 15:01:46.946218] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.369 [2024-07-12 15:01:46.946265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.369 [2024-07-12 15:01:46.946270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.369 [2024-07-12 15:01:46.946279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.369 [2024-07-12 15:01:46.946282] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.369 [2024-07-12 15:01:46.946289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.369 15:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.626 15:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.626 "name": "Existed_Raid", 00:13:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.626 "strip_size_kb": 64, 00:13:21.626 "state": "configuring", 00:13:21.626 "raid_level": "raid0", 00:13:21.626 "superblock": false, 00:13:21.626 "num_base_bdevs": 4, 00:13:21.626 "num_base_bdevs_discovered": 1, 00:13:21.626 "num_base_bdevs_operational": 4, 00:13:21.626 "base_bdevs_list": [ 00:13:21.626 { 00:13:21.626 "name": "BaseBdev1", 00:13:21.626 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:21.626 "is_configured": true, 00:13:21.626 "data_offset": 0, 00:13:21.626 "data_size": 65536 00:13:21.626 }, 00:13:21.626 { 00:13:21.626 "name": "BaseBdev2", 00:13:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.626 "is_configured": false, 00:13:21.626 "data_offset": 0, 00:13:21.626 "data_size": 
0 00:13:21.626 }, 00:13:21.626 { 00:13:21.626 "name": "BaseBdev3", 00:13:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.626 "is_configured": false, 00:13:21.626 "data_offset": 0, 00:13:21.626 "data_size": 0 00:13:21.626 }, 00:13:21.626 { 00:13:21.626 "name": "BaseBdev4", 00:13:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.626 "is_configured": false, 00:13:21.626 "data_offset": 0, 00:13:21.626 "data_size": 0 00:13:21.626 } 00:13:21.626 ] 00:13:21.626 }' 00:13:21.626 15:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.626 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.884 15:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.141 [2024-07-12 15:01:47.793360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.141 BaseBdev2 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:22.141 15:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.399 15:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.656 [ 00:13:22.656 { 00:13:22.656 "name": "BaseBdev2", 00:13:22.656 "aliases": [ 00:13:22.656 "a94846f8-405f-11ef-b2a4-e9dca065e82e" 00:13:22.656 ], 00:13:22.656 "product_name": "Malloc disk", 00:13:22.656 "block_size": 512, 00:13:22.656 "num_blocks": 65536, 00:13:22.656 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:22.656 "assigned_rate_limits": { 00:13:22.656 "rw_ios_per_sec": 0, 00:13:22.656 "rw_mbytes_per_sec": 0, 00:13:22.656 "r_mbytes_per_sec": 0, 00:13:22.656 "w_mbytes_per_sec": 0 00:13:22.656 }, 00:13:22.656 "claimed": true, 00:13:22.656 "claim_type": "exclusive_write", 00:13:22.656 "zoned": false, 00:13:22.656 "supported_io_types": { 00:13:22.656 "read": true, 00:13:22.656 "write": true, 00:13:22.656 "unmap": true, 00:13:22.656 "flush": true, 00:13:22.656 "reset": true, 00:13:22.656 "nvme_admin": false, 00:13:22.656 "nvme_io": false, 00:13:22.656 "nvme_io_md": false, 00:13:22.656 "write_zeroes": true, 00:13:22.656 "zcopy": true, 00:13:22.656 "get_zone_info": false, 00:13:22.656 "zone_management": false, 00:13:22.656 "zone_append": false, 00:13:22.656 "compare": false, 00:13:22.656 "compare_and_write": false, 00:13:22.656 "abort": true, 00:13:22.656 "seek_hole": false, 00:13:22.656 "seek_data": false, 00:13:22.656 "copy": true, 00:13:22.656 "nvme_iov_md": false 00:13:22.656 }, 00:13:22.656 "memory_domains": [ 00:13:22.656 { 00:13:22.656 "dma_device_id": "system", 00:13:22.656 "dma_device_type": 1 
00:13:22.656 }, 00:13:22.656 { 00:13:22.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.656 "dma_device_type": 2 00:13:22.656 } 00:13:22.656 ], 00:13:22.656 "driver_specific": {} 00:13:22.656 } 00:13:22.656 ] 00:13:22.656 15:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:22.656 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:22.656 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:22.656 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.657 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.914 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.914 "name": "Existed_Raid", 00:13:22.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.914 "strip_size_kb": 64, 00:13:22.914 "state": "configuring", 00:13:22.914 "raid_level": "raid0", 00:13:22.914 "superblock": false, 00:13:22.914 "num_base_bdevs": 4, 00:13:22.914 "num_base_bdevs_discovered": 2, 00:13:22.914 "num_base_bdevs_operational": 4, 00:13:22.914 "base_bdevs_list": [ 00:13:22.914 { 00:13:22.914 "name": "BaseBdev1", 00:13:22.914 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:22.914 "is_configured": true, 00:13:22.914 "data_offset": 0, 00:13:22.914 "data_size": 65536 00:13:22.914 }, 00:13:22.914 { 00:13:22.914 "name": "BaseBdev2", 00:13:22.914 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:22.914 "is_configured": true, 00:13:22.914 "data_offset": 0, 00:13:22.914 "data_size": 65536 00:13:22.914 }, 00:13:22.914 { 00:13:22.914 "name": "BaseBdev3", 00:13:22.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.914 "is_configured": false, 00:13:22.914 "data_offset": 0, 00:13:22.914 "data_size": 0 00:13:22.914 }, 00:13:22.914 { 00:13:22.914 "name": "BaseBdev4", 00:13:22.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.914 "is_configured": false, 00:13:22.914 "data_offset": 0, 00:13:22.914 "data_size": 0 00:13:22.914 } 00:13:22.914 ] 00:13:22.914 }' 00:13:22.914 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.914 15:01:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.172 15:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.430 [2024-07-12 15:01:49.189551] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.430 BaseBdev3 00:13:23.430 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:23.430 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:23.430 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:23.431 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:23.431 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:23.431 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:23.431 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.709 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.279 [ 00:13:24.279 { 00:13:24.279 "name": "BaseBdev3", 00:13:24.279 "aliases": [ 00:13:24.279 "aa1d51e8-405f-11ef-b2a4-e9dca065e82e" 00:13:24.279 ], 00:13:24.279 "product_name": "Malloc disk", 00:13:24.279 "block_size": 512, 00:13:24.279 "num_blocks": 65536, 00:13:24.279 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:24.279 "assigned_rate_limits": { 00:13:24.279 "rw_ios_per_sec": 0, 00:13:24.279 "rw_mbytes_per_sec": 0, 00:13:24.279 "r_mbytes_per_sec": 0, 00:13:24.279 "w_mbytes_per_sec": 0 00:13:24.279 }, 00:13:24.279 "claimed": true, 00:13:24.279 "claim_type": "exclusive_write", 00:13:24.279 "zoned": false, 00:13:24.279 "supported_io_types": { 00:13:24.279 "read": true, 00:13:24.279 "write": true, 00:13:24.279 "unmap": true, 00:13:24.279 "flush": true, 00:13:24.279 "reset": true, 00:13:24.279 "nvme_admin": false, 00:13:24.279 "nvme_io": false, 00:13:24.279 "nvme_io_md": false, 00:13:24.279 "write_zeroes": true, 00:13:24.279 "zcopy": true, 00:13:24.279 "get_zone_info": false, 00:13:24.279 "zone_management": false, 00:13:24.279 "zone_append": false, 00:13:24.279 "compare": false, 00:13:24.279 "compare_and_write": false, 00:13:24.279 "abort": true, 00:13:24.279 "seek_hole": false, 00:13:24.279 "seek_data": false, 00:13:24.279 "copy": true, 00:13:24.279 "nvme_iov_md": false 00:13:24.279 }, 00:13:24.279 "memory_domains": [ 00:13:24.279 { 00:13:24.279 "dma_device_id": "system", 00:13:24.279 "dma_device_type": 1 00:13:24.279 }, 00:13:24.279 { 00:13:24.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.279 "dma_device_type": 2 00:13:24.279 } 00:13:24.279 ], 00:13:24.279 "driver_specific": {} 00:13:24.279 } 00:13:24.279 ] 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.279 15:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.537 15:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.537 "name": "Existed_Raid", 00:13:24.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.537 "strip_size_kb": 64, 00:13:24.537 "state": "configuring", 00:13:24.537 "raid_level": "raid0", 00:13:24.537 "superblock": false, 00:13:24.537 "num_base_bdevs": 4, 00:13:24.537 "num_base_bdevs_discovered": 3, 00:13:24.537 "num_base_bdevs_operational": 4, 00:13:24.537 "base_bdevs_list": [ 00:13:24.537 { 00:13:24.537 "name": "BaseBdev1", 00:13:24.537 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:24.537 "is_configured": true, 00:13:24.537 "data_offset": 0, 00:13:24.537 "data_size": 65536 00:13:24.537 }, 00:13:24.537 { 00:13:24.537 "name": "BaseBdev2", 00:13:24.537 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:24.537 "is_configured": true, 00:13:24.537 "data_offset": 0, 00:13:24.537 "data_size": 65536 00:13:24.537 }, 00:13:24.537 { 00:13:24.537 "name": "BaseBdev3", 00:13:24.537 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:24.537 "is_configured": true, 00:13:24.537 "data_offset": 0, 00:13:24.537 "data_size": 65536 00:13:24.537 }, 00:13:24.537 { 00:13:24.537 "name": "BaseBdev4", 00:13:24.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.537 "is_configured": false, 00:13:24.537 "data_offset": 0, 00:13:24.537 "data_size": 0 00:13:24.537 } 00:13:24.537 ] 00:13:24.537 }' 00:13:24.537 15:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.537 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.795 15:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.053 [2024-07-12 15:01:50.757535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.053 [2024-07-12 15:01:50.757570] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5cbc1234a00 00:13:25.053 [2024-07-12 15:01:50.757575] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:13:25.053 [2024-07-12 15:01:50.757599] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5cbc1297e20 00:13:25.053 [2024-07-12 15:01:50.757695] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5cbc1234a00 00:13:25.053 [2024-07-12 15:01:50.757699] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5cbc1234a00 00:13:25.053 [2024-07-12 15:01:50.757743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.053 BaseBdev4 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:25.053 15:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.311 15:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:25.568 [ 00:13:25.568 { 00:13:25.568 "name": "BaseBdev4", 00:13:25.568 "aliases": [ 00:13:25.568 "ab0c950b-405f-11ef-b2a4-e9dca065e82e" 00:13:25.568 ], 00:13:25.568 "product_name": "Malloc disk", 00:13:25.568 "block_size": 512, 00:13:25.568 "num_blocks": 65536, 00:13:25.568 "uuid": "ab0c950b-405f-11ef-b2a4-e9dca065e82e", 00:13:25.568 "assigned_rate_limits": { 00:13:25.568 "rw_ios_per_sec": 0, 00:13:25.568 "rw_mbytes_per_sec": 0, 00:13:25.568 "r_mbytes_per_sec": 0, 00:13:25.568 "w_mbytes_per_sec": 0 00:13:25.568 }, 00:13:25.568 "claimed": true, 00:13:25.568 "claim_type": "exclusive_write", 00:13:25.568 "zoned": false, 00:13:25.568 "supported_io_types": { 00:13:25.568 "read": true, 00:13:25.568 "write": true, 00:13:25.568 "unmap": true, 00:13:25.568 "flush": true, 00:13:25.568 "reset": true, 00:13:25.568 "nvme_admin": false, 00:13:25.568 "nvme_io": false, 00:13:25.568 "nvme_io_md": false, 00:13:25.568 "write_zeroes": true, 00:13:25.568 "zcopy": true, 00:13:25.568 "get_zone_info": false, 00:13:25.568 "zone_management": false, 00:13:25.568 "zone_append": false, 00:13:25.568 "compare": false, 00:13:25.568 "compare_and_write": false, 00:13:25.568 "abort": true, 00:13:25.568 "seek_hole": false, 00:13:25.568 "seek_data": false, 00:13:25.568 "copy": true, 00:13:25.568 "nvme_iov_md": false 00:13:25.568 }, 00:13:25.568 "memory_domains": [ 00:13:25.568 { 00:13:25.568 "dma_device_id": "system", 00:13:25.568 "dma_device_type": 1 00:13:25.568 }, 00:13:25.568 { 00:13:25.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.568 "dma_device_type": 2 00:13:25.568 } 00:13:25.568 ], 00:13:25.568 "driver_specific": {} 00:13:25.568 } 00:13:25.568 ] 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:25.568 15:01:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.568 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.569 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.569 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.569 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.825 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.825 "name": "Existed_Raid", 00:13:25.825 "uuid": "ab0c9c01-405f-11ef-b2a4-e9dca065e82e", 00:13:25.825 "strip_size_kb": 64, 00:13:25.825 "state": "online", 00:13:25.825 "raid_level": "raid0", 00:13:25.825 "superblock": false, 00:13:25.825 "num_base_bdevs": 4, 00:13:25.825 "num_base_bdevs_discovered": 4, 00:13:25.825 "num_base_bdevs_operational": 4, 00:13:25.825 "base_bdevs_list": [ 00:13:25.825 { 00:13:25.825 "name": "BaseBdev1", 00:13:25.825 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:25.825 "is_configured": true, 00:13:25.825 "data_offset": 0, 00:13:25.825 "data_size": 65536 00:13:25.825 }, 00:13:25.825 { 00:13:25.825 "name": "BaseBdev2", 00:13:25.825 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:25.825 "is_configured": true, 00:13:25.825 "data_offset": 0, 00:13:25.825 "data_size": 65536 00:13:25.825 }, 00:13:25.825 { 00:13:25.825 "name": "BaseBdev3", 00:13:25.825 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:25.825 "is_configured": true, 00:13:25.825 "data_offset": 0, 00:13:25.825 "data_size": 65536 00:13:25.825 }, 00:13:25.825 { 00:13:25.825 "name": "BaseBdev4", 00:13:25.825 "uuid": "ab0c950b-405f-11ef-b2a4-e9dca065e82e", 00:13:25.825 "is_configured": true, 00:13:25.825 "data_offset": 0, 00:13:25.825 "data_size": 65536 00:13:25.825 } 00:13:25.825 ] 00:13:25.825 }' 00:13:25.825 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.825 15:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:26.390 
15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:26.390 15:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:26.390 [2024-07-12 15:01:52.153567] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:26.390 "name": "Existed_Raid", 00:13:26.390 "aliases": [ 00:13:26.390 "ab0c9c01-405f-11ef-b2a4-e9dca065e82e" 00:13:26.390 ], 00:13:26.390 "product_name": "Raid Volume", 00:13:26.390 "block_size": 512, 00:13:26.390 "num_blocks": 262144, 00:13:26.390 "uuid": "ab0c9c01-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 "assigned_rate_limits": { 00:13:26.390 "rw_ios_per_sec": 0, 00:13:26.390 "rw_mbytes_per_sec": 0, 00:13:26.390 "r_mbytes_per_sec": 0, 00:13:26.390 "w_mbytes_per_sec": 0 00:13:26.390 }, 00:13:26.390 "claimed": false, 00:13:26.390 "zoned": false, 00:13:26.390 "supported_io_types": { 00:13:26.390 "read": true, 00:13:26.390 "write": true, 00:13:26.390 "unmap": true, 00:13:26.390 "flush": true, 00:13:26.390 "reset": true, 00:13:26.390 "nvme_admin": false, 00:13:26.390 "nvme_io": false, 00:13:26.390 "nvme_io_md": false, 00:13:26.390 "write_zeroes": true, 00:13:26.390 "zcopy": false, 00:13:26.390 "get_zone_info": false, 00:13:26.390 "zone_management": false, 00:13:26.390 "zone_append": false, 00:13:26.390 "compare": false, 00:13:26.390 "compare_and_write": false, 00:13:26.390 "abort": false, 00:13:26.390 "seek_hole": false, 00:13:26.390 "seek_data": false, 00:13:26.390 "copy": false, 00:13:26.390 "nvme_iov_md": false 00:13:26.390 }, 00:13:26.390 "memory_domains": [ 00:13:26.390 { 00:13:26.390 "dma_device_id": "system", 00:13:26.390 "dma_device_type": 1 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.390 "dma_device_type": 2 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "system", 00:13:26.390 "dma_device_type": 1 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.390 "dma_device_type": 2 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "system", 00:13:26.390 "dma_device_type": 1 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.390 "dma_device_type": 2 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "system", 00:13:26.390 "dma_device_type": 1 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.390 "dma_device_type": 2 00:13:26.390 } 00:13:26.390 ], 00:13:26.390 "driver_specific": { 00:13:26.390 "raid": { 00:13:26.390 "uuid": "ab0c9c01-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 "strip_size_kb": 64, 00:13:26.390 "state": "online", 00:13:26.390 "raid_level": "raid0", 00:13:26.390 "superblock": false, 00:13:26.390 "num_base_bdevs": 4, 00:13:26.390 "num_base_bdevs_discovered": 4, 00:13:26.390 "num_base_bdevs_operational": 4, 00:13:26.390 "base_bdevs_list": [ 00:13:26.390 { 00:13:26.390 "name": "BaseBdev1", 00:13:26.390 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 
"is_configured": true, 00:13:26.390 "data_offset": 0, 00:13:26.390 "data_size": 65536 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "name": "BaseBdev2", 00:13:26.390 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 "is_configured": true, 00:13:26.390 "data_offset": 0, 00:13:26.390 "data_size": 65536 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "name": "BaseBdev3", 00:13:26.390 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 "is_configured": true, 00:13:26.390 "data_offset": 0, 00:13:26.390 "data_size": 65536 00:13:26.390 }, 00:13:26.390 { 00:13:26.390 "name": "BaseBdev4", 00:13:26.390 "uuid": "ab0c950b-405f-11ef-b2a4-e9dca065e82e", 00:13:26.390 "is_configured": true, 00:13:26.390 "data_offset": 0, 00:13:26.390 "data_size": 65536 00:13:26.390 } 00:13:26.390 ] 00:13:26.390 } 00:13:26.390 } 00:13:26.390 }' 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:26.390 BaseBdev2 00:13:26.390 BaseBdev3 00:13:26.390 BaseBdev4' 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:26.390 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:26.648 "name": "BaseBdev1", 00:13:26.648 "aliases": [ 00:13:26.648 "a7dbdb57-405f-11ef-b2a4-e9dca065e82e" 00:13:26.648 ], 00:13:26.648 "product_name": "Malloc disk", 00:13:26.648 "block_size": 512, 00:13:26.648 "num_blocks": 65536, 00:13:26.648 "uuid": "a7dbdb57-405f-11ef-b2a4-e9dca065e82e", 00:13:26.648 "assigned_rate_limits": { 00:13:26.648 "rw_ios_per_sec": 0, 00:13:26.648 "rw_mbytes_per_sec": 0, 00:13:26.648 "r_mbytes_per_sec": 0, 00:13:26.648 "w_mbytes_per_sec": 0 00:13:26.648 }, 00:13:26.648 "claimed": true, 00:13:26.648 "claim_type": "exclusive_write", 00:13:26.648 "zoned": false, 00:13:26.648 "supported_io_types": { 00:13:26.648 "read": true, 00:13:26.648 "write": true, 00:13:26.648 "unmap": true, 00:13:26.648 "flush": true, 00:13:26.648 "reset": true, 00:13:26.648 "nvme_admin": false, 00:13:26.648 "nvme_io": false, 00:13:26.648 "nvme_io_md": false, 00:13:26.648 "write_zeroes": true, 00:13:26.648 "zcopy": true, 00:13:26.648 "get_zone_info": false, 00:13:26.648 "zone_management": false, 00:13:26.648 "zone_append": false, 00:13:26.648 "compare": false, 00:13:26.648 "compare_and_write": false, 00:13:26.648 "abort": true, 00:13:26.648 "seek_hole": false, 00:13:26.648 "seek_data": false, 00:13:26.648 "copy": true, 00:13:26.648 "nvme_iov_md": false 00:13:26.648 }, 00:13:26.648 "memory_domains": [ 00:13:26.648 { 00:13:26.648 "dma_device_id": "system", 00:13:26.648 "dma_device_type": 1 00:13:26.648 }, 00:13:26.648 { 00:13:26.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.648 "dma_device_type": 2 00:13:26.648 } 00:13:26.648 ], 00:13:26.648 "driver_specific": {} 00:13:26.648 }' 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:26.648 15:01:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:26.648 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:26.905 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.163 "name": "BaseBdev2", 00:13:27.163 "aliases": [ 00:13:27.163 "a94846f8-405f-11ef-b2a4-e9dca065e82e" 00:13:27.163 ], 00:13:27.163 "product_name": "Malloc disk", 00:13:27.163 "block_size": 512, 00:13:27.163 "num_blocks": 65536, 00:13:27.163 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:27.163 "assigned_rate_limits": { 00:13:27.163 "rw_ios_per_sec": 0, 00:13:27.163 "rw_mbytes_per_sec": 0, 00:13:27.163 "r_mbytes_per_sec": 0, 00:13:27.163 "w_mbytes_per_sec": 0 00:13:27.163 }, 00:13:27.163 "claimed": true, 00:13:27.163 "claim_type": "exclusive_write", 00:13:27.163 "zoned": false, 00:13:27.163 "supported_io_types": { 00:13:27.163 "read": true, 00:13:27.163 "write": true, 00:13:27.163 "unmap": true, 00:13:27.163 "flush": true, 00:13:27.163 "reset": true, 00:13:27.163 "nvme_admin": false, 00:13:27.163 "nvme_io": false, 00:13:27.163 "nvme_io_md": false, 00:13:27.163 "write_zeroes": true, 00:13:27.163 "zcopy": true, 00:13:27.163 "get_zone_info": false, 00:13:27.163 "zone_management": false, 00:13:27.163 "zone_append": false, 00:13:27.163 "compare": false, 00:13:27.163 "compare_and_write": false, 00:13:27.163 "abort": true, 00:13:27.163 "seek_hole": false, 00:13:27.163 "seek_data": false, 00:13:27.163 "copy": true, 00:13:27.163 "nvme_iov_md": false 00:13:27.163 }, 00:13:27.163 "memory_domains": [ 00:13:27.163 { 00:13:27.163 "dma_device_id": "system", 00:13:27.163 "dma_device_type": 1 00:13:27.163 }, 00:13:27.163 { 00:13:27.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.163 "dma_device_type": 2 00:13:27.163 } 00:13:27.163 ], 00:13:27.163 "driver_specific": {} 00:13:27.163 }' 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.163 
15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:27.163 15:01:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.420 "name": "BaseBdev3", 00:13:27.420 "aliases": [ 00:13:27.420 "aa1d51e8-405f-11ef-b2a4-e9dca065e82e" 00:13:27.420 ], 00:13:27.420 "product_name": "Malloc disk", 00:13:27.420 "block_size": 512, 00:13:27.420 "num_blocks": 65536, 00:13:27.420 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:27.420 "assigned_rate_limits": { 00:13:27.420 "rw_ios_per_sec": 0, 00:13:27.420 "rw_mbytes_per_sec": 0, 00:13:27.420 "r_mbytes_per_sec": 0, 00:13:27.420 "w_mbytes_per_sec": 0 00:13:27.420 }, 00:13:27.420 "claimed": true, 00:13:27.420 "claim_type": "exclusive_write", 00:13:27.420 "zoned": false, 00:13:27.420 "supported_io_types": { 00:13:27.420 "read": true, 00:13:27.420 "write": true, 00:13:27.420 "unmap": true, 00:13:27.420 "flush": true, 00:13:27.420 "reset": true, 00:13:27.420 "nvme_admin": false, 00:13:27.420 "nvme_io": false, 00:13:27.420 "nvme_io_md": false, 00:13:27.420 "write_zeroes": true, 00:13:27.420 "zcopy": true, 00:13:27.420 "get_zone_info": false, 00:13:27.420 "zone_management": false, 00:13:27.420 "zone_append": false, 00:13:27.420 "compare": false, 00:13:27.420 "compare_and_write": false, 00:13:27.420 "abort": true, 00:13:27.420 "seek_hole": false, 00:13:27.420 "seek_data": false, 00:13:27.420 "copy": true, 00:13:27.420 "nvme_iov_md": false 00:13:27.420 }, 00:13:27.420 "memory_domains": [ 00:13:27.420 { 00:13:27.420 "dma_device_id": "system", 00:13:27.420 "dma_device_type": 1 00:13:27.420 }, 00:13:27.420 { 00:13:27.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.420 "dma_device_type": 2 00:13:27.420 } 00:13:27.420 ], 00:13:27.420 "driver_specific": {} 00:13:27.420 }' 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
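(The jq checks running through this part of the trace, bdev_raid.sh@203-@208, compare the raid volume's reported properties against each configured base bdev: block_size 512 matches, while md_size, md_interleave and dif_type are all null. A sketch of that per-base-bdev comparison using the bdev_get_bdevs RPC shown above; the loop and variable names are illustrative, not the script's exact code:

    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # fetch the base bdev's descriptor, as bdev_raid.sh@204 does
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name")
        # the checks at @205-@208 compare these fields against the Existed_Raid volume
        jq '.[0].block_size, .[0].md_size, .[0].md_interleave, .[0].dif_type' <<< "$info"
    done
)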
00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.420 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:27.421 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.678 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.678 "name": "BaseBdev4", 00:13:27.678 "aliases": [ 00:13:27.678 "ab0c950b-405f-11ef-b2a4-e9dca065e82e" 00:13:27.678 ], 00:13:27.678 "product_name": "Malloc disk", 00:13:27.678 "block_size": 512, 00:13:27.678 "num_blocks": 65536, 00:13:27.678 "uuid": "ab0c950b-405f-11ef-b2a4-e9dca065e82e", 00:13:27.678 "assigned_rate_limits": { 00:13:27.678 "rw_ios_per_sec": 0, 00:13:27.678 "rw_mbytes_per_sec": 0, 00:13:27.678 "r_mbytes_per_sec": 0, 00:13:27.678 "w_mbytes_per_sec": 0 00:13:27.678 }, 00:13:27.678 "claimed": true, 00:13:27.678 "claim_type": "exclusive_write", 00:13:27.678 "zoned": false, 00:13:27.678 "supported_io_types": { 00:13:27.678 "read": true, 00:13:27.678 "write": true, 00:13:27.678 "unmap": true, 00:13:27.678 "flush": true, 00:13:27.678 "reset": true, 00:13:27.678 "nvme_admin": false, 00:13:27.678 "nvme_io": false, 00:13:27.678 "nvme_io_md": false, 00:13:27.678 "write_zeroes": true, 00:13:27.678 "zcopy": true, 00:13:27.678 "get_zone_info": false, 00:13:27.678 "zone_management": false, 00:13:27.678 "zone_append": false, 00:13:27.678 "compare": false, 00:13:27.678 "compare_and_write": false, 00:13:27.678 "abort": true, 00:13:27.678 "seek_hole": false, 00:13:27.678 "seek_data": false, 00:13:27.678 "copy": true, 00:13:27.678 "nvme_iov_md": false 00:13:27.678 }, 00:13:27.678 "memory_domains": [ 00:13:27.678 { 00:13:27.678 "dma_device_id": "system", 00:13:27.678 "dma_device_type": 1 00:13:27.678 }, 00:13:27.678 { 00:13:27.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.678 "dma_device_type": 2 00:13:27.678 } 00:13:27.678 ], 00:13:27.678 "driver_specific": {} 00:13:27.678 }' 00:13:27.678 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.936 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:28.212 [2024-07-12 15:01:53.785641] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.212 [2024-07-12 15:01:53.785675] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.212 [2024-07-12 15:01:53.785693] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.212 15:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.494 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.494 "name": "Existed_Raid", 00:13:28.494 "uuid": "ab0c9c01-405f-11ef-b2a4-e9dca065e82e", 00:13:28.494 "strip_size_kb": 64, 00:13:28.494 "state": "offline", 00:13:28.494 "raid_level": "raid0", 00:13:28.494 "superblock": false, 00:13:28.494 "num_base_bdevs": 4, 00:13:28.494 "num_base_bdevs_discovered": 3, 00:13:28.494 "num_base_bdevs_operational": 3, 00:13:28.494 "base_bdevs_list": [ 00:13:28.494 { 00:13:28.494 "name": null, 00:13:28.494 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:28.494 "is_configured": false, 00:13:28.494 "data_offset": 0, 00:13:28.494 "data_size": 65536 00:13:28.494 }, 00:13:28.494 { 00:13:28.494 "name": "BaseBdev2", 00:13:28.494 "uuid": "a94846f8-405f-11ef-b2a4-e9dca065e82e", 00:13:28.494 "is_configured": true, 00:13:28.494 "data_offset": 0, 00:13:28.494 "data_size": 65536 00:13:28.494 }, 00:13:28.494 { 00:13:28.494 "name": "BaseBdev3", 00:13:28.494 "uuid": "aa1d51e8-405f-11ef-b2a4-e9dca065e82e", 00:13:28.494 "is_configured": true, 00:13:28.494 "data_offset": 0, 00:13:28.494 "data_size": 65536 00:13:28.494 }, 00:13:28.494 { 00:13:28.494 "name": "BaseBdev4", 00:13:28.494 "uuid": "ab0c950b-405f-11ef-b2a4-e9dca065e82e", 00:13:28.494 "is_configured": true, 00:13:28.494 "data_offset": 0, 00:13:28.494 "data_size": 65536 00:13:28.494 } 00:13:28.494 ] 00:13:28.494 }' 00:13:28.494 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.494 15:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.752 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:28.752 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:28.752 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.752 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:29.008 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:29.008 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.008 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:29.266 [2024-07-12 15:01:54.878120] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.266 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:29.266 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:29.266 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.266 15:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:29.524 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:29.524 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.524 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:29.781 [2024-07-12 15:01:55.470481] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.781 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:29.781 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:29.781 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.781 15:01:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:30.039 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:30.039 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.039 15:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:30.297 [2024-07-12 15:01:56.090983] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:30.297 [2024-07-12 15:01:56.091033] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5cbc1234a00 name Existed_Raid, state offline 00:13:30.297 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:30.297 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:30.297 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.297 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.862 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:30.862 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:30.862 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:30.862 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.863 BaseBdev2 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:30.863 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:31.126 15:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.383 [ 00:13:31.383 { 00:13:31.383 "name": "BaseBdev2", 00:13:31.383 "aliases": [ 00:13:31.383 "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e" 00:13:31.383 ], 00:13:31.383 "product_name": "Malloc disk", 00:13:31.383 "block_size": 512, 00:13:31.383 "num_blocks": 65536, 00:13:31.383 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:31.383 "assigned_rate_limits": { 00:13:31.383 "rw_ios_per_sec": 0, 00:13:31.383 "rw_mbytes_per_sec": 0, 00:13:31.383 "r_mbytes_per_sec": 0, 00:13:31.383 "w_mbytes_per_sec": 0 
00:13:31.383 }, 00:13:31.383 "claimed": false, 00:13:31.383 "zoned": false, 00:13:31.383 "supported_io_types": { 00:13:31.383 "read": true, 00:13:31.383 "write": true, 00:13:31.383 "unmap": true, 00:13:31.383 "flush": true, 00:13:31.383 "reset": true, 00:13:31.383 "nvme_admin": false, 00:13:31.383 "nvme_io": false, 00:13:31.383 "nvme_io_md": false, 00:13:31.383 "write_zeroes": true, 00:13:31.383 "zcopy": true, 00:13:31.383 "get_zone_info": false, 00:13:31.383 "zone_management": false, 00:13:31.383 "zone_append": false, 00:13:31.383 "compare": false, 00:13:31.383 "compare_and_write": false, 00:13:31.383 "abort": true, 00:13:31.383 "seek_hole": false, 00:13:31.383 "seek_data": false, 00:13:31.383 "copy": true, 00:13:31.383 "nvme_iov_md": false 00:13:31.383 }, 00:13:31.383 "memory_domains": [ 00:13:31.383 { 00:13:31.383 "dma_device_id": "system", 00:13:31.383 "dma_device_type": 1 00:13:31.383 }, 00:13:31.383 { 00:13:31.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.383 "dma_device_type": 2 00:13:31.383 } 00:13:31.383 ], 00:13:31.383 "driver_specific": {} 00:13:31.383 } 00:13:31.383 ] 00:13:31.383 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:31.383 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:31.384 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:31.384 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:31.949 BaseBdev3 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:31.949 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.206 [ 00:13:32.206 { 00:13:32.206 "name": "BaseBdev3", 00:13:32.206 "aliases": [ 00:13:32.206 "af0bce8e-405f-11ef-b2a4-e9dca065e82e" 00:13:32.206 ], 00:13:32.206 "product_name": "Malloc disk", 00:13:32.206 "block_size": 512, 00:13:32.206 "num_blocks": 65536, 00:13:32.206 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:32.206 "assigned_rate_limits": { 00:13:32.206 "rw_ios_per_sec": 0, 00:13:32.206 "rw_mbytes_per_sec": 0, 00:13:32.206 "r_mbytes_per_sec": 0, 00:13:32.206 "w_mbytes_per_sec": 0 00:13:32.206 }, 00:13:32.206 "claimed": false, 00:13:32.206 "zoned": false, 00:13:32.206 "supported_io_types": { 00:13:32.206 "read": true, 00:13:32.206 "write": true, 00:13:32.206 "unmap": true, 00:13:32.206 "flush": true, 00:13:32.206 "reset": true, 00:13:32.206 "nvme_admin": false, 00:13:32.206 "nvme_io": false, 00:13:32.206 "nvme_io_md": 
false, 00:13:32.206 "write_zeroes": true, 00:13:32.206 "zcopy": true, 00:13:32.206 "get_zone_info": false, 00:13:32.206 "zone_management": false, 00:13:32.206 "zone_append": false, 00:13:32.206 "compare": false, 00:13:32.206 "compare_and_write": false, 00:13:32.206 "abort": true, 00:13:32.206 "seek_hole": false, 00:13:32.206 "seek_data": false, 00:13:32.206 "copy": true, 00:13:32.206 "nvme_iov_md": false 00:13:32.206 }, 00:13:32.206 "memory_domains": [ 00:13:32.206 { 00:13:32.206 "dma_device_id": "system", 00:13:32.206 "dma_device_type": 1 00:13:32.206 }, 00:13:32.206 { 00:13:32.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.206 "dma_device_type": 2 00:13:32.206 } 00:13:32.206 ], 00:13:32.206 "driver_specific": {} 00:13:32.206 } 00:13:32.206 ] 00:13:32.206 15:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:32.206 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:32.206 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:32.206 15:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:32.464 BaseBdev4 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:32.464 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.729 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:32.987 [ 00:13:32.987 { 00:13:32.987 "name": "BaseBdev4", 00:13:32.987 "aliases": [ 00:13:32.987 "af7df305-405f-11ef-b2a4-e9dca065e82e" 00:13:32.987 ], 00:13:32.987 "product_name": "Malloc disk", 00:13:32.987 "block_size": 512, 00:13:32.987 "num_blocks": 65536, 00:13:32.987 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:32.987 "assigned_rate_limits": { 00:13:32.987 "rw_ios_per_sec": 0, 00:13:32.987 "rw_mbytes_per_sec": 0, 00:13:32.988 "r_mbytes_per_sec": 0, 00:13:32.988 "w_mbytes_per_sec": 0 00:13:32.988 }, 00:13:32.988 "claimed": false, 00:13:32.988 "zoned": false, 00:13:32.988 "supported_io_types": { 00:13:32.988 "read": true, 00:13:32.988 "write": true, 00:13:32.988 "unmap": true, 00:13:32.988 "flush": true, 00:13:32.988 "reset": true, 00:13:32.988 "nvme_admin": false, 00:13:32.988 "nvme_io": false, 00:13:32.988 "nvme_io_md": false, 00:13:32.988 "write_zeroes": true, 00:13:32.988 "zcopy": true, 00:13:32.988 "get_zone_info": false, 00:13:32.988 "zone_management": false, 00:13:32.988 "zone_append": false, 00:13:32.988 "compare": false, 00:13:32.988 "compare_and_write": false, 00:13:32.988 "abort": true, 00:13:32.988 "seek_hole": false, 00:13:32.988 "seek_data": false, 
00:13:32.988 "copy": true, 00:13:32.988 "nvme_iov_md": false 00:13:32.988 }, 00:13:32.988 "memory_domains": [ 00:13:32.988 { 00:13:32.988 "dma_device_id": "system", 00:13:32.988 "dma_device_type": 1 00:13:32.988 }, 00:13:32.988 { 00:13:32.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.988 "dma_device_type": 2 00:13:32.988 } 00:13:32.988 ], 00:13:32.988 "driver_specific": {} 00:13:32.988 } 00:13:32.988 ] 00:13:32.988 15:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:32.988 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:32.988 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:32.988 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:33.245 [2024-07-12 15:01:58.967285] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.245 [2024-07-12 15:01:58.967357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.245 [2024-07-12 15:01:58.967368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.245 [2024-07-12 15:01:58.968138] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.245 [2024-07-12 15:01:58.968158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.245 15:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.502 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.502 "name": "Existed_Raid", 00:13:33.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.502 "strip_size_kb": 64, 00:13:33.502 "state": "configuring", 00:13:33.502 "raid_level": "raid0", 00:13:33.502 "superblock": false, 00:13:33.502 "num_base_bdevs": 4, 00:13:33.502 "num_base_bdevs_discovered": 3, 00:13:33.502 "num_base_bdevs_operational": 
4, 00:13:33.502 "base_bdevs_list": [ 00:13:33.502 { 00:13:33.502 "name": "BaseBdev1", 00:13:33.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.502 "is_configured": false, 00:13:33.503 "data_offset": 0, 00:13:33.503 "data_size": 0 00:13:33.503 }, 00:13:33.503 { 00:13:33.503 "name": "BaseBdev2", 00:13:33.503 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:33.503 "is_configured": true, 00:13:33.503 "data_offset": 0, 00:13:33.503 "data_size": 65536 00:13:33.503 }, 00:13:33.503 { 00:13:33.503 "name": "BaseBdev3", 00:13:33.503 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:33.503 "is_configured": true, 00:13:33.503 "data_offset": 0, 00:13:33.503 "data_size": 65536 00:13:33.503 }, 00:13:33.503 { 00:13:33.503 "name": "BaseBdev4", 00:13:33.503 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:33.503 "is_configured": true, 00:13:33.503 "data_offset": 0, 00:13:33.503 "data_size": 65536 00:13:33.503 } 00:13:33.503 ] 00:13:33.503 }' 00:13:33.503 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.503 15:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.760 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:34.325 [2024-07-12 15:01:59.851321] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.325 15:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.583 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.583 "name": "Existed_Raid", 00:13:34.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.583 "strip_size_kb": 64, 00:13:34.583 "state": "configuring", 00:13:34.583 "raid_level": "raid0", 00:13:34.583 "superblock": false, 00:13:34.583 "num_base_bdevs": 4, 00:13:34.583 "num_base_bdevs_discovered": 2, 00:13:34.583 "num_base_bdevs_operational": 4, 00:13:34.583 "base_bdevs_list": [ 00:13:34.583 { 00:13:34.583 "name": "BaseBdev1", 00:13:34.583 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:34.583 "is_configured": false, 00:13:34.583 "data_offset": 0, 00:13:34.583 "data_size": 0 00:13:34.583 }, 00:13:34.583 { 00:13:34.583 "name": null, 00:13:34.583 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:34.583 "is_configured": false, 00:13:34.583 "data_offset": 0, 00:13:34.583 "data_size": 65536 00:13:34.583 }, 00:13:34.583 { 00:13:34.583 "name": "BaseBdev3", 00:13:34.583 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:34.583 "is_configured": true, 00:13:34.583 "data_offset": 0, 00:13:34.583 "data_size": 65536 00:13:34.583 }, 00:13:34.583 { 00:13:34.583 "name": "BaseBdev4", 00:13:34.583 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:34.583 "is_configured": true, 00:13:34.583 "data_offset": 0, 00:13:34.583 "data_size": 65536 00:13:34.583 } 00:13:34.583 ] 00:13:34.583 }' 00:13:34.583 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.583 15:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.842 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.100 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:35.100 15:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.357 [2024-07-12 15:02:01.095565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.357 BaseBdev1 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:35.357 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.637 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.909 [ 00:13:35.909 { 00:13:35.909 "name": "BaseBdev1", 00:13:35.909 "aliases": [ 00:13:35.909 "b13608ce-405f-11ef-b2a4-e9dca065e82e" 00:13:35.909 ], 00:13:35.909 "product_name": "Malloc disk", 00:13:35.909 "block_size": 512, 00:13:35.909 "num_blocks": 65536, 00:13:35.909 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:35.909 "assigned_rate_limits": { 00:13:35.909 "rw_ios_per_sec": 0, 00:13:35.909 "rw_mbytes_per_sec": 0, 00:13:35.909 "r_mbytes_per_sec": 0, 00:13:35.909 "w_mbytes_per_sec": 0 00:13:35.909 }, 00:13:35.909 "claimed": true, 00:13:35.909 "claim_type": "exclusive_write", 00:13:35.909 "zoned": false, 00:13:35.909 "supported_io_types": { 00:13:35.909 "read": true, 00:13:35.909 
"write": true, 00:13:35.909 "unmap": true, 00:13:35.909 "flush": true, 00:13:35.909 "reset": true, 00:13:35.909 "nvme_admin": false, 00:13:35.909 "nvme_io": false, 00:13:35.909 "nvme_io_md": false, 00:13:35.909 "write_zeroes": true, 00:13:35.909 "zcopy": true, 00:13:35.909 "get_zone_info": false, 00:13:35.909 "zone_management": false, 00:13:35.909 "zone_append": false, 00:13:35.909 "compare": false, 00:13:35.909 "compare_and_write": false, 00:13:35.909 "abort": true, 00:13:35.909 "seek_hole": false, 00:13:35.909 "seek_data": false, 00:13:35.909 "copy": true, 00:13:35.909 "nvme_iov_md": false 00:13:35.909 }, 00:13:35.909 "memory_domains": [ 00:13:35.909 { 00:13:35.909 "dma_device_id": "system", 00:13:35.909 "dma_device_type": 1 00:13:35.909 }, 00:13:35.909 { 00:13:35.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.910 "dma_device_type": 2 00:13:35.910 } 00:13:35.910 ], 00:13:35.910 "driver_specific": {} 00:13:35.910 } 00:13:35.910 ] 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.910 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.167 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.167 "name": "Existed_Raid", 00:13:36.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.167 "strip_size_kb": 64, 00:13:36.167 "state": "configuring", 00:13:36.167 "raid_level": "raid0", 00:13:36.167 "superblock": false, 00:13:36.167 "num_base_bdevs": 4, 00:13:36.167 "num_base_bdevs_discovered": 3, 00:13:36.167 "num_base_bdevs_operational": 4, 00:13:36.167 "base_bdevs_list": [ 00:13:36.167 { 00:13:36.167 "name": "BaseBdev1", 00:13:36.167 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:36.167 "is_configured": true, 00:13:36.167 "data_offset": 0, 00:13:36.167 "data_size": 65536 00:13:36.167 }, 00:13:36.167 { 00:13:36.167 "name": null, 00:13:36.167 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:36.167 "is_configured": false, 00:13:36.167 "data_offset": 0, 00:13:36.167 "data_size": 65536 00:13:36.167 }, 00:13:36.167 { 00:13:36.167 "name": "BaseBdev3", 00:13:36.167 "uuid": 
"af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:36.167 "is_configured": true, 00:13:36.167 "data_offset": 0, 00:13:36.167 "data_size": 65536 00:13:36.167 }, 00:13:36.167 { 00:13:36.167 "name": "BaseBdev4", 00:13:36.167 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:36.167 "is_configured": true, 00:13:36.167 "data_offset": 0, 00:13:36.167 "data_size": 65536 00:13:36.167 } 00:13:36.167 ] 00:13:36.167 }' 00:13:36.167 15:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.167 15:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.425 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.425 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.683 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:36.683 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:36.941 [2024-07-12 15:02:02.643439] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.941 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.207 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.207 "name": "Existed_Raid", 00:13:37.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.207 "strip_size_kb": 64, 00:13:37.207 "state": "configuring", 00:13:37.207 "raid_level": "raid0", 00:13:37.207 "superblock": false, 00:13:37.207 "num_base_bdevs": 4, 00:13:37.207 "num_base_bdevs_discovered": 2, 00:13:37.207 "num_base_bdevs_operational": 4, 00:13:37.207 "base_bdevs_list": [ 00:13:37.207 { 00:13:37.207 "name": "BaseBdev1", 00:13:37.207 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:37.207 "is_configured": true, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 
00:13:37.207 "name": null, 00:13:37.207 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:37.207 "is_configured": false, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 00:13:37.207 "name": null, 00:13:37.207 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:37.207 "is_configured": false, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 }, 00:13:37.207 { 00:13:37.207 "name": "BaseBdev4", 00:13:37.207 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:37.207 "is_configured": true, 00:13:37.207 "data_offset": 0, 00:13:37.207 "data_size": 65536 00:13:37.207 } 00:13:37.207 ] 00:13:37.207 }' 00:13:37.207 15:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.207 15:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.464 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.464 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.721 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:37.721 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:37.980 [2024-07-12 15:02:03.695496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.980 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.238 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.238 "name": "Existed_Raid", 00:13:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.238 "strip_size_kb": 64, 00:13:38.238 "state": "configuring", 00:13:38.238 "raid_level": "raid0", 00:13:38.238 "superblock": false, 00:13:38.238 "num_base_bdevs": 4, 00:13:38.238 "num_base_bdevs_discovered": 3, 00:13:38.238 
"num_base_bdevs_operational": 4, 00:13:38.238 "base_bdevs_list": [ 00:13:38.238 { 00:13:38.238 "name": "BaseBdev1", 00:13:38.238 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:38.238 "is_configured": true, 00:13:38.238 "data_offset": 0, 00:13:38.238 "data_size": 65536 00:13:38.238 }, 00:13:38.238 { 00:13:38.238 "name": null, 00:13:38.238 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:38.238 "is_configured": false, 00:13:38.238 "data_offset": 0, 00:13:38.238 "data_size": 65536 00:13:38.238 }, 00:13:38.238 { 00:13:38.238 "name": "BaseBdev3", 00:13:38.238 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:38.238 "is_configured": true, 00:13:38.238 "data_offset": 0, 00:13:38.238 "data_size": 65536 00:13:38.238 }, 00:13:38.238 { 00:13:38.238 "name": "BaseBdev4", 00:13:38.238 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:38.238 "is_configured": true, 00:13:38.238 "data_offset": 0, 00:13:38.238 "data_size": 65536 00:13:38.238 } 00:13:38.238 ] 00:13:38.238 }' 00:13:38.238 15:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.238 15:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.495 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.495 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.752 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:38.752 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:39.010 [2024-07-12 15:02:04.747559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.010 15:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.268 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.268 "name": "Existed_Raid", 00:13:39.268 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:39.268 "strip_size_kb": 64, 00:13:39.268 "state": "configuring", 00:13:39.268 "raid_level": "raid0", 00:13:39.268 "superblock": false, 00:13:39.268 "num_base_bdevs": 4, 00:13:39.268 "num_base_bdevs_discovered": 2, 00:13:39.268 "num_base_bdevs_operational": 4, 00:13:39.268 "base_bdevs_list": [ 00:13:39.268 { 00:13:39.268 "name": null, 00:13:39.268 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:39.268 "is_configured": false, 00:13:39.268 "data_offset": 0, 00:13:39.268 "data_size": 65536 00:13:39.268 }, 00:13:39.268 { 00:13:39.268 "name": null, 00:13:39.268 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:39.268 "is_configured": false, 00:13:39.268 "data_offset": 0, 00:13:39.268 "data_size": 65536 00:13:39.268 }, 00:13:39.268 { 00:13:39.268 "name": "BaseBdev3", 00:13:39.268 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:39.268 "is_configured": true, 00:13:39.268 "data_offset": 0, 00:13:39.268 "data_size": 65536 00:13:39.268 }, 00:13:39.268 { 00:13:39.268 "name": "BaseBdev4", 00:13:39.268 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:39.268 "is_configured": true, 00:13:39.268 "data_offset": 0, 00:13:39.268 "data_size": 65536 00:13:39.268 } 00:13:39.268 ] 00:13:39.268 }' 00:13:39.268 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.268 15:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.529 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.529 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.800 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:39.800 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:40.059 [2024-07-12 15:02:05.820474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:40.059 15:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.317 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.317 "name": "Existed_Raid", 00:13:40.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.317 "strip_size_kb": 64, 00:13:40.317 "state": "configuring", 00:13:40.317 "raid_level": "raid0", 00:13:40.317 "superblock": false, 00:13:40.317 "num_base_bdevs": 4, 00:13:40.317 "num_base_bdevs_discovered": 3, 00:13:40.317 "num_base_bdevs_operational": 4, 00:13:40.317 "base_bdevs_list": [ 00:13:40.317 { 00:13:40.317 "name": null, 00:13:40.317 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:40.317 "is_configured": false, 00:13:40.317 "data_offset": 0, 00:13:40.317 "data_size": 65536 00:13:40.317 }, 00:13:40.317 { 00:13:40.317 "name": "BaseBdev2", 00:13:40.318 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:40.318 "is_configured": true, 00:13:40.318 "data_offset": 0, 00:13:40.318 "data_size": 65536 00:13:40.318 }, 00:13:40.318 { 00:13:40.318 "name": "BaseBdev3", 00:13:40.318 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:40.318 "is_configured": true, 00:13:40.318 "data_offset": 0, 00:13:40.318 "data_size": 65536 00:13:40.318 }, 00:13:40.318 { 00:13:40.318 "name": "BaseBdev4", 00:13:40.318 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:40.318 "is_configured": true, 00:13:40.318 "data_offset": 0, 00:13:40.318 "data_size": 65536 00:13:40.318 } 00:13:40.318 ] 00:13:40.318 }' 00:13:40.318 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.318 15:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.883 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.883 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:40.883 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:40.883 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.883 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:41.142 15:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b13608ce-405f-11ef-b2a4-e9dca065e82e 00:13:41.400 [2024-07-12 15:02:07.220710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.400 [2024-07-12 15:02:07.220747] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5cbc1234f00 00:13:41.400 [2024-07-12 15:02:07.220752] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:41.400 [2024-07-12 15:02:07.220779] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5cbc1297e20 00:13:41.400 [2024-07-12 15:02:07.220870] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5cbc1234f00 00:13:41.400 [2024-07-12 15:02:07.220875] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5cbc1234f00 00:13:41.400 [2024-07-12 
15:02:07.220913] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.657 NewBaseBdev 00:13:41.657 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:41.658 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.916 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:42.174 [ 00:13:42.174 { 00:13:42.174 "name": "NewBaseBdev", 00:13:42.174 "aliases": [ 00:13:42.174 "b13608ce-405f-11ef-b2a4-e9dca065e82e" 00:13:42.174 ], 00:13:42.174 "product_name": "Malloc disk", 00:13:42.174 "block_size": 512, 00:13:42.174 "num_blocks": 65536, 00:13:42.174 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:42.174 "assigned_rate_limits": { 00:13:42.174 "rw_ios_per_sec": 0, 00:13:42.174 "rw_mbytes_per_sec": 0, 00:13:42.174 "r_mbytes_per_sec": 0, 00:13:42.174 "w_mbytes_per_sec": 0 00:13:42.174 }, 00:13:42.174 "claimed": true, 00:13:42.174 "claim_type": "exclusive_write", 00:13:42.174 "zoned": false, 00:13:42.174 "supported_io_types": { 00:13:42.174 "read": true, 00:13:42.174 "write": true, 00:13:42.174 "unmap": true, 00:13:42.174 "flush": true, 00:13:42.174 "reset": true, 00:13:42.174 "nvme_admin": false, 00:13:42.174 "nvme_io": false, 00:13:42.174 "nvme_io_md": false, 00:13:42.174 "write_zeroes": true, 00:13:42.174 "zcopy": true, 00:13:42.174 "get_zone_info": false, 00:13:42.174 "zone_management": false, 00:13:42.174 "zone_append": false, 00:13:42.174 "compare": false, 00:13:42.174 "compare_and_write": false, 00:13:42.174 "abort": true, 00:13:42.174 "seek_hole": false, 00:13:42.174 "seek_data": false, 00:13:42.174 "copy": true, 00:13:42.174 "nvme_iov_md": false 00:13:42.174 }, 00:13:42.174 "memory_domains": [ 00:13:42.174 { 00:13:42.174 "dma_device_id": "system", 00:13:42.174 "dma_device_type": 1 00:13:42.174 }, 00:13:42.174 { 00:13:42.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.174 "dma_device_type": 2 00:13:42.174 } 00:13:42.174 ], 00:13:42.174 "driver_specific": {} 00:13:42.174 } 00:13:42.174 ] 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:42.174 
15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.174 15:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.432 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.432 "name": "Existed_Raid", 00:13:42.432 "uuid": "b4dcb0f2-405f-11ef-b2a4-e9dca065e82e", 00:13:42.432 "strip_size_kb": 64, 00:13:42.432 "state": "online", 00:13:42.432 "raid_level": "raid0", 00:13:42.432 "superblock": false, 00:13:42.432 "num_base_bdevs": 4, 00:13:42.432 "num_base_bdevs_discovered": 4, 00:13:42.432 "num_base_bdevs_operational": 4, 00:13:42.432 "base_bdevs_list": [ 00:13:42.432 { 00:13:42.432 "name": "NewBaseBdev", 00:13:42.432 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:42.432 "is_configured": true, 00:13:42.432 "data_offset": 0, 00:13:42.432 "data_size": 65536 00:13:42.432 }, 00:13:42.432 { 00:13:42.432 "name": "BaseBdev2", 00:13:42.432 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:42.432 "is_configured": true, 00:13:42.432 "data_offset": 0, 00:13:42.432 "data_size": 65536 00:13:42.432 }, 00:13:42.432 { 00:13:42.432 "name": "BaseBdev3", 00:13:42.432 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:42.432 "is_configured": true, 00:13:42.432 "data_offset": 0, 00:13:42.432 "data_size": 65536 00:13:42.432 }, 00:13:42.432 { 00:13:42.432 "name": "BaseBdev4", 00:13:42.432 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:42.432 "is_configured": true, 00:13:42.432 "data_offset": 0, 00:13:42.432 "data_size": 65536 00:13:42.432 } 00:13:42.432 ] 00:13:42.432 }' 00:13:42.432 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.432 15:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:42.690 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:42.949 [2024-07-12 15:02:08.696739] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:42.949 "name": "Existed_Raid", 00:13:42.949 "aliases": [ 00:13:42.949 "b4dcb0f2-405f-11ef-b2a4-e9dca065e82e" 00:13:42.949 ], 00:13:42.949 "product_name": "Raid Volume", 00:13:42.949 "block_size": 512, 00:13:42.949 "num_blocks": 262144, 00:13:42.949 "uuid": "b4dcb0f2-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 "assigned_rate_limits": { 00:13:42.949 "rw_ios_per_sec": 0, 00:13:42.949 "rw_mbytes_per_sec": 0, 00:13:42.949 "r_mbytes_per_sec": 0, 00:13:42.949 "w_mbytes_per_sec": 0 00:13:42.949 }, 00:13:42.949 "claimed": false, 00:13:42.949 "zoned": false, 00:13:42.949 "supported_io_types": { 00:13:42.949 "read": true, 00:13:42.949 "write": true, 00:13:42.949 "unmap": true, 00:13:42.949 "flush": true, 00:13:42.949 "reset": true, 00:13:42.949 "nvme_admin": false, 00:13:42.949 "nvme_io": false, 00:13:42.949 "nvme_io_md": false, 00:13:42.949 "write_zeroes": true, 00:13:42.949 "zcopy": false, 00:13:42.949 "get_zone_info": false, 00:13:42.949 "zone_management": false, 00:13:42.949 "zone_append": false, 00:13:42.949 "compare": false, 00:13:42.949 "compare_and_write": false, 00:13:42.949 "abort": false, 00:13:42.949 "seek_hole": false, 00:13:42.949 "seek_data": false, 00:13:42.949 "copy": false, 00:13:42.949 "nvme_iov_md": false 00:13:42.949 }, 00:13:42.949 "memory_domains": [ 00:13:42.949 { 00:13:42.949 "dma_device_id": "system", 00:13:42.949 "dma_device_type": 1 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.949 "dma_device_type": 2 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "system", 00:13:42.949 "dma_device_type": 1 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.949 "dma_device_type": 2 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "system", 00:13:42.949 "dma_device_type": 1 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.949 "dma_device_type": 2 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "system", 00:13:42.949 "dma_device_type": 1 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.949 "dma_device_type": 2 00:13:42.949 } 00:13:42.949 ], 00:13:42.949 "driver_specific": { 00:13:42.949 "raid": { 00:13:42.949 "uuid": "b4dcb0f2-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 "strip_size_kb": 64, 00:13:42.949 "state": "online", 00:13:42.949 "raid_level": "raid0", 00:13:42.949 "superblock": false, 00:13:42.949 "num_base_bdevs": 4, 00:13:42.949 "num_base_bdevs_discovered": 4, 00:13:42.949 "num_base_bdevs_operational": 4, 00:13:42.949 "base_bdevs_list": [ 00:13:42.949 { 00:13:42.949 "name": "NewBaseBdev", 00:13:42.949 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 "is_configured": true, 00:13:42.949 "data_offset": 0, 00:13:42.949 "data_size": 65536 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "name": "BaseBdev2", 00:13:42.949 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 "is_configured": true, 00:13:42.949 "data_offset": 0, 00:13:42.949 "data_size": 65536 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "name": "BaseBdev3", 00:13:42.949 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 "is_configured": true, 00:13:42.949 "data_offset": 0, 00:13:42.949 "data_size": 65536 00:13:42.949 }, 00:13:42.949 { 00:13:42.949 "name": "BaseBdev4", 00:13:42.949 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:42.949 
"is_configured": true, 00:13:42.949 "data_offset": 0, 00:13:42.949 "data_size": 65536 00:13:42.949 } 00:13:42.949 ] 00:13:42.949 } 00:13:42.949 } 00:13:42.949 }' 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:42.949 BaseBdev2 00:13:42.949 BaseBdev3 00:13:42.949 BaseBdev4' 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:42.949 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.216 "name": "NewBaseBdev", 00:13:43.216 "aliases": [ 00:13:43.216 "b13608ce-405f-11ef-b2a4-e9dca065e82e" 00:13:43.216 ], 00:13:43.216 "product_name": "Malloc disk", 00:13:43.216 "block_size": 512, 00:13:43.216 "num_blocks": 65536, 00:13:43.216 "uuid": "b13608ce-405f-11ef-b2a4-e9dca065e82e", 00:13:43.216 "assigned_rate_limits": { 00:13:43.216 "rw_ios_per_sec": 0, 00:13:43.216 "rw_mbytes_per_sec": 0, 00:13:43.216 "r_mbytes_per_sec": 0, 00:13:43.216 "w_mbytes_per_sec": 0 00:13:43.216 }, 00:13:43.216 "claimed": true, 00:13:43.216 "claim_type": "exclusive_write", 00:13:43.216 "zoned": false, 00:13:43.216 "supported_io_types": { 00:13:43.216 "read": true, 00:13:43.216 "write": true, 00:13:43.216 "unmap": true, 00:13:43.216 "flush": true, 00:13:43.216 "reset": true, 00:13:43.216 "nvme_admin": false, 00:13:43.216 "nvme_io": false, 00:13:43.216 "nvme_io_md": false, 00:13:43.216 "write_zeroes": true, 00:13:43.216 "zcopy": true, 00:13:43.216 "get_zone_info": false, 00:13:43.216 "zone_management": false, 00:13:43.216 "zone_append": false, 00:13:43.216 "compare": false, 00:13:43.216 "compare_and_write": false, 00:13:43.216 "abort": true, 00:13:43.216 "seek_hole": false, 00:13:43.216 "seek_data": false, 00:13:43.216 "copy": true, 00:13:43.216 "nvme_iov_md": false 00:13:43.216 }, 00:13:43.216 "memory_domains": [ 00:13:43.216 { 00:13:43.216 "dma_device_id": "system", 00:13:43.216 "dma_device_type": 1 00:13:43.216 }, 00:13:43.216 { 00:13:43.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.216 "dma_device_type": 2 00:13:43.216 } 00:13:43.216 ], 00:13:43.216 "driver_specific": {} 00:13:43.216 }' 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.216 15:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:43.216 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.473 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.473 "name": "BaseBdev2", 00:13:43.473 "aliases": [ 00:13:43.473 "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e" 00:13:43.473 ], 00:13:43.473 "product_name": "Malloc disk", 00:13:43.473 "block_size": 512, 00:13:43.473 "num_blocks": 65536, 00:13:43.473 "uuid": "ae8cd9a5-405f-11ef-b2a4-e9dca065e82e", 00:13:43.473 "assigned_rate_limits": { 00:13:43.473 "rw_ios_per_sec": 0, 00:13:43.473 "rw_mbytes_per_sec": 0, 00:13:43.473 "r_mbytes_per_sec": 0, 00:13:43.473 "w_mbytes_per_sec": 0 00:13:43.473 }, 00:13:43.473 "claimed": true, 00:13:43.473 "claim_type": "exclusive_write", 00:13:43.473 "zoned": false, 00:13:43.473 "supported_io_types": { 00:13:43.473 "read": true, 00:13:43.473 "write": true, 00:13:43.473 "unmap": true, 00:13:43.473 "flush": true, 00:13:43.473 "reset": true, 00:13:43.473 "nvme_admin": false, 00:13:43.473 "nvme_io": false, 00:13:43.473 "nvme_io_md": false, 00:13:43.474 "write_zeroes": true, 00:13:43.474 "zcopy": true, 00:13:43.474 "get_zone_info": false, 00:13:43.474 "zone_management": false, 00:13:43.474 "zone_append": false, 00:13:43.474 "compare": false, 00:13:43.474 "compare_and_write": false, 00:13:43.474 "abort": true, 00:13:43.474 "seek_hole": false, 00:13:43.474 "seek_data": false, 00:13:43.474 "copy": true, 00:13:43.474 "nvme_iov_md": false 00:13:43.474 }, 00:13:43.474 "memory_domains": [ 00:13:43.474 { 00:13:43.474 "dma_device_id": "system", 00:13:43.474 "dma_device_type": 1 00:13:43.474 }, 00:13:43.474 { 00:13:43.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.474 "dma_device_type": 2 00:13:43.474 } 00:13:43.474 ], 00:13:43.474 "driver_specific": {} 00:13:43.474 }' 00:13:43.474 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.474 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.474 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.474 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.474 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:43.732 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.989 "name": "BaseBdev3", 00:13:43.989 "aliases": [ 00:13:43.989 "af0bce8e-405f-11ef-b2a4-e9dca065e82e" 00:13:43.989 ], 00:13:43.989 "product_name": "Malloc disk", 00:13:43.989 "block_size": 512, 00:13:43.989 "num_blocks": 65536, 00:13:43.989 "uuid": "af0bce8e-405f-11ef-b2a4-e9dca065e82e", 00:13:43.989 "assigned_rate_limits": { 00:13:43.989 "rw_ios_per_sec": 0, 00:13:43.989 "rw_mbytes_per_sec": 0, 00:13:43.989 "r_mbytes_per_sec": 0, 00:13:43.989 "w_mbytes_per_sec": 0 00:13:43.989 }, 00:13:43.989 "claimed": true, 00:13:43.989 "claim_type": "exclusive_write", 00:13:43.989 "zoned": false, 00:13:43.989 "supported_io_types": { 00:13:43.989 "read": true, 00:13:43.989 "write": true, 00:13:43.989 "unmap": true, 00:13:43.989 "flush": true, 00:13:43.989 "reset": true, 00:13:43.989 "nvme_admin": false, 00:13:43.989 "nvme_io": false, 00:13:43.989 "nvme_io_md": false, 00:13:43.989 "write_zeroes": true, 00:13:43.989 "zcopy": true, 00:13:43.989 "get_zone_info": false, 00:13:43.989 "zone_management": false, 00:13:43.989 "zone_append": false, 00:13:43.989 "compare": false, 00:13:43.989 "compare_and_write": false, 00:13:43.989 "abort": true, 00:13:43.989 "seek_hole": false, 00:13:43.989 "seek_data": false, 00:13:43.989 "copy": true, 00:13:43.989 "nvme_iov_md": false 00:13:43.989 }, 00:13:43.989 "memory_domains": [ 00:13:43.989 { 00:13:43.989 "dma_device_id": "system", 00:13:43.989 "dma_device_type": 1 00:13:43.989 }, 00:13:43.989 { 00:13:43.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.989 "dma_device_type": 2 00:13:43.989 } 00:13:43.989 ], 00:13:43.989 "driver_specific": {} 00:13:43.989 }' 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.989 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:44.247 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.247 "name": "BaseBdev4", 00:13:44.247 "aliases": [ 00:13:44.247 "af7df305-405f-11ef-b2a4-e9dca065e82e" 00:13:44.247 ], 00:13:44.247 "product_name": "Malloc disk", 00:13:44.247 "block_size": 512, 00:13:44.247 "num_blocks": 65536, 00:13:44.247 "uuid": "af7df305-405f-11ef-b2a4-e9dca065e82e", 00:13:44.247 "assigned_rate_limits": { 00:13:44.247 "rw_ios_per_sec": 0, 00:13:44.247 "rw_mbytes_per_sec": 0, 00:13:44.247 "r_mbytes_per_sec": 0, 00:13:44.247 "w_mbytes_per_sec": 0 00:13:44.247 }, 00:13:44.247 "claimed": true, 00:13:44.247 "claim_type": "exclusive_write", 00:13:44.247 "zoned": false, 00:13:44.247 "supported_io_types": { 00:13:44.247 "read": true, 00:13:44.247 "write": true, 00:13:44.247 "unmap": true, 00:13:44.247 "flush": true, 00:13:44.247 "reset": true, 00:13:44.247 "nvme_admin": false, 00:13:44.247 "nvme_io": false, 00:13:44.247 "nvme_io_md": false, 00:13:44.247 "write_zeroes": true, 00:13:44.247 "zcopy": true, 00:13:44.247 "get_zone_info": false, 00:13:44.247 "zone_management": false, 00:13:44.247 "zone_append": false, 00:13:44.247 "compare": false, 00:13:44.247 "compare_and_write": false, 00:13:44.247 "abort": true, 00:13:44.247 "seek_hole": false, 00:13:44.247 "seek_data": false, 00:13:44.247 "copy": true, 00:13:44.247 "nvme_iov_md": false 00:13:44.248 }, 00:13:44.248 "memory_domains": [ 00:13:44.248 { 00:13:44.248 "dma_device_id": "system", 00:13:44.248 "dma_device_type": 1 00:13:44.248 }, 00:13:44.248 { 00:13:44.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.248 "dma_device_type": 2 00:13:44.248 } 00:13:44.248 ], 00:13:44.248 "driver_specific": {} 00:13:44.248 }' 00:13:44.248 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.248 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.248 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.248 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.248 15:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.248 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.506 [2024-07-12 15:02:10.252766] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.506 [2024-07-12 15:02:10.252804] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.506 [2024-07-12 15:02:10.252852] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.506 [2024-07-12 15:02:10.252871] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.506 [2024-07-12 15:02:10.252876] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5cbc1234f00 name Existed_Raid, state offline 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58382 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58382 ']' 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58382 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58382 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:44.506 killing process with pid 58382 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58382' 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58382 00:13:44.506 [2024-07-12 15:02:10.280213] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.506 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58382 00:13:44.506 [2024-07-12 15:02:10.313955] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.764 15:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:44.764 00:13:44.764 real 0m27.812s 00:13:44.764 user 0m50.896s 00:13:44.764 sys 0m3.735s 00:13:44.764 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.764 15:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.764 ************************************ 00:13:44.764 END TEST raid_state_function_test 00:13:44.764 ************************************ 00:13:45.023 15:02:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:45.024 15:02:10 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:45.024 15:02:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:45.024 15:02:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.024 15:02:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.024 ************************************ 00:13:45.024 START TEST raid_state_function_test_sb 00:13:45.024 ************************************ 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:13:45.024 
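The per-bdev checks traced above (bdev_raid.sh@203 through @208) follow one pattern that repeats throughout this log: fetch each claimed base bdev's descriptor over the test RPC socket, then assert block_size, md_size, md_interleave and dif_type with jq. Below is a minimal sketch of that pattern assembled only from the commands visible in the trace; the wrapper name verify_base_bdev_props is hypothetical, and the bdev list is hard-coded from this particular run (the real script derives it from the raid dump at bdev_raid.sh@201).

verify_base_bdev_props() {
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    local name info
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # bdev_get_bdevs returns a one-element array; jq '.[]' unwraps it
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # the malloc base bdevs are expected to use 512-byte blocks and carry
        # no metadata or DIF, so the last three fields should all be null
        [[ $(jq .block_size <<< "$info") == 512 ]]
        [[ $(jq .md_size <<< "$info") == null ]]
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]
    done
}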
15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59205 00:13:45.024 Process raid pid: 59205 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59205' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@246 -- # waitforlisten 59205 /var/tmp/spdk-raid.sock 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59205 ']' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.024 15:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.024 [2024-07-12 15:02:10.639198] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:13:45.024 [2024-07-12 15:02:10.639630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:45.591 EAL: TSC is not safe to use in SMP mode 00:13:45.591 EAL: TSC is not invariant 00:13:45.591 [2024-07-12 15:02:11.175237] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.591 [2024-07-12 15:02:11.288694] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:45.591 [2024-07-12 15:02:11.291321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.591 [2024-07-12 15:02:11.292188] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.591 [2024-07-12 15:02:11.292202] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:46.157 [2024-07-12 15:02:11.939788] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.157 [2024-07-12 15:02:11.939869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.157 [2024-07-12 15:02:11.939875] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.157 [2024-07-12 15:02:11.939884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.157 [2024-07-12 15:02:11.939887] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.157 [2024-07-12 15:02:11.939895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.157 [2024-07-12 15:02:11.939899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:46.157 [2024-07-12 15:02:11.939906] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.157 15:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.723 15:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.723 "name": "Existed_Raid", 00:13:46.723 "uuid": "b7acc17b-405f-11ef-b2a4-e9dca065e82e", 00:13:46.723 "strip_size_kb": 64, 00:13:46.723 "state": "configuring", 00:13:46.723 "raid_level": "raid0", 00:13:46.723 "superblock": true, 00:13:46.723 "num_base_bdevs": 4, 00:13:46.723 "num_base_bdevs_discovered": 0, 00:13:46.723 "num_base_bdevs_operational": 4, 00:13:46.723 "base_bdevs_list": [ 00:13:46.723 { 00:13:46.723 "name": "BaseBdev1", 00:13:46.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.723 "is_configured": false, 00:13:46.723 "data_offset": 0, 00:13:46.723 "data_size": 0 00:13:46.723 }, 00:13:46.723 { 00:13:46.723 "name": "BaseBdev2", 00:13:46.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.723 "is_configured": false, 00:13:46.723 "data_offset": 0, 00:13:46.723 "data_size": 0 00:13:46.723 }, 00:13:46.723 { 00:13:46.723 "name": "BaseBdev3", 00:13:46.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.723 "is_configured": false, 00:13:46.723 "data_offset": 0, 00:13:46.723 "data_size": 0 00:13:46.723 }, 00:13:46.723 { 00:13:46.723 "name": "BaseBdev4", 00:13:46.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.723 "is_configured": false, 00:13:46.723 "data_offset": 0, 00:13:46.723 "data_size": 0 00:13:46.723 } 00:13:46.723 ] 00:13:46.723 }' 00:13:46.723 15:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.723 15:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.992 15:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:47.273 [2024-07-12 15:02:12.855800] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.273 
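The state checks around bdev_raid.sh@126 above read the whole raid table with bdev_raid_get_bdevs all, select the Existed_Raid entry with jq, and compare its fields against the values the test expects at that point. The helper's internal comparisons are not spelled out in the xtrace, so the sketch below is an approximation built from the RPC call, jq filter and field names that do appear in this log; the function name is hypothetical.

check_existed_raid_state() {
    # expected_state is "configuring" or "online"; expected_discovered is 0..4
    local expected_state=$1 expected_discovered=$2
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    local info
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == "$expected_state" ]]
    [[ $(jq .num_base_bdevs_discovered <<< "$info") == "$expected_discovered" ]]
}

In this run the dumps show the progression explicitly: 0 discovered while only the raid shell exists, 1 after BaseBdev1 is claimed, and "online" with 4 of 4 once BaseBdev4 joins.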
[2024-07-12 15:02:12.855838] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e05c6634500 name Existed_Raid, state configuring 00:13:47.273 15:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:47.531 [2024-07-12 15:02:13.139826] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.531 [2024-07-12 15:02:13.139891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.531 [2024-07-12 15:02:13.139896] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.531 [2024-07-12 15:02:13.139905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.531 [2024-07-12 15:02:13.139909] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:47.531 [2024-07-12 15:02:13.139917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.531 [2024-07-12 15:02:13.139920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.531 [2024-07-12 15:02:13.139943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.531 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.789 [2024-07-12 15:02:13.425024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.789 BaseBdev1 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.789 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.048 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.306 [ 00:13:48.306 { 00:13:48.306 "name": "BaseBdev1", 00:13:48.306 "aliases": [ 00:13:48.306 "b88f34ba-405f-11ef-b2a4-e9dca065e82e" 00:13:48.306 ], 00:13:48.306 "product_name": "Malloc disk", 00:13:48.306 "block_size": 512, 00:13:48.306 "num_blocks": 65536, 00:13:48.306 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:48.306 "assigned_rate_limits": { 00:13:48.306 "rw_ios_per_sec": 0, 00:13:48.306 "rw_mbytes_per_sec": 0, 00:13:48.306 "r_mbytes_per_sec": 0, 00:13:48.306 "w_mbytes_per_sec": 0 00:13:48.306 }, 00:13:48.306 "claimed": true, 00:13:48.306 "claim_type": "exclusive_write", 00:13:48.306 "zoned": false, 00:13:48.306 "supported_io_types": { 
00:13:48.306 "read": true, 00:13:48.306 "write": true, 00:13:48.306 "unmap": true, 00:13:48.306 "flush": true, 00:13:48.306 "reset": true, 00:13:48.306 "nvme_admin": false, 00:13:48.306 "nvme_io": false, 00:13:48.306 "nvme_io_md": false, 00:13:48.306 "write_zeroes": true, 00:13:48.306 "zcopy": true, 00:13:48.306 "get_zone_info": false, 00:13:48.306 "zone_management": false, 00:13:48.306 "zone_append": false, 00:13:48.306 "compare": false, 00:13:48.306 "compare_and_write": false, 00:13:48.306 "abort": true, 00:13:48.306 "seek_hole": false, 00:13:48.306 "seek_data": false, 00:13:48.306 "copy": true, 00:13:48.306 "nvme_iov_md": false 00:13:48.306 }, 00:13:48.306 "memory_domains": [ 00:13:48.306 { 00:13:48.306 "dma_device_id": "system", 00:13:48.306 "dma_device_type": 1 00:13:48.306 }, 00:13:48.306 { 00:13:48.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.306 "dma_device_type": 2 00:13:48.306 } 00:13:48.306 ], 00:13:48.306 "driver_specific": {} 00:13:48.306 } 00:13:48.306 ] 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.306 15:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.564 15:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.564 "name": "Existed_Raid", 00:13:48.564 "uuid": "b863de10-405f-11ef-b2a4-e9dca065e82e", 00:13:48.564 "strip_size_kb": 64, 00:13:48.564 "state": "configuring", 00:13:48.564 "raid_level": "raid0", 00:13:48.564 "superblock": true, 00:13:48.564 "num_base_bdevs": 4, 00:13:48.564 "num_base_bdevs_discovered": 1, 00:13:48.564 "num_base_bdevs_operational": 4, 00:13:48.564 "base_bdevs_list": [ 00:13:48.564 { 00:13:48.564 "name": "BaseBdev1", 00:13:48.564 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:48.564 "is_configured": true, 00:13:48.564 "data_offset": 2048, 00:13:48.564 "data_size": 63488 00:13:48.564 }, 00:13:48.564 { 00:13:48.564 "name": "BaseBdev2", 00:13:48.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.564 "is_configured": false, 00:13:48.564 "data_offset": 0, 00:13:48.564 "data_size": 0 
00:13:48.564 }, 00:13:48.564 { 00:13:48.564 "name": "BaseBdev3", 00:13:48.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.564 "is_configured": false, 00:13:48.564 "data_offset": 0, 00:13:48.564 "data_size": 0 00:13:48.564 }, 00:13:48.564 { 00:13:48.564 "name": "BaseBdev4", 00:13:48.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.564 "is_configured": false, 00:13:48.564 "data_offset": 0, 00:13:48.564 "data_size": 0 00:13:48.564 } 00:13:48.564 ] 00:13:48.564 }' 00:13:48.564 15:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.564 15:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.822 15:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:49.095 [2024-07-12 15:02:14.780023] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.095 [2024-07-12 15:02:14.780067] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e05c6634500 name Existed_Raid, state configuring 00:13:49.095 15:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:49.357 [2024-07-12 15:02:15.052061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.357 [2024-07-12 15:02:15.053041] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.357 [2024-07-12 15:02:15.053087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.357 [2024-07-12 15:02:15.053092] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:49.357 [2024-07-12 15:02:15.053101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.357 [2024-07-12 15:02:15.053105] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:49.357 [2024-07-12 15:02:15.053113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.357 15:02:15 
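Every BaseBdevN in this run is created as a 32 MiB malloc bdev with 512-byte blocks, which is why each descriptor reports "num_blocks": 65536 (32 * 1024 * 1024 / 512). Because the superblock flag -s is passed to bdev_raid_create here, each claimed member appears with "data_offset": 2048 and "data_size": 63488, and across four raid0 members that adds up to the 4 * 63488 = 253952 blocks reported for Existed_Raid once it goes online. A sketch of the create-and-wait step the trace repeats for BaseBdev1 through BaseBdev4 follows; the wrapper name is hypothetical, while the RPC commands and arguments are copied from the log.

add_base_bdev() {
    local name=$1
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB backing store, 512-byte block size, i.e. 65536 blocks
    $rpc bdev_malloc_create 32 512 -b "$name"
    # waitforbdev in the trace first lets examine finish, then polls for the
    # bdev with a 2000 ms timeout
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null
}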
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.357 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.614 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.614 "name": "Existed_Raid", 00:13:49.614 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:49.614 "strip_size_kb": 64, 00:13:49.614 "state": "configuring", 00:13:49.614 "raid_level": "raid0", 00:13:49.614 "superblock": true, 00:13:49.614 "num_base_bdevs": 4, 00:13:49.614 "num_base_bdevs_discovered": 1, 00:13:49.614 "num_base_bdevs_operational": 4, 00:13:49.614 "base_bdevs_list": [ 00:13:49.614 { 00:13:49.614 "name": "BaseBdev1", 00:13:49.614 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:49.614 "is_configured": true, 00:13:49.614 "data_offset": 2048, 00:13:49.614 "data_size": 63488 00:13:49.614 }, 00:13:49.614 { 00:13:49.614 "name": "BaseBdev2", 00:13:49.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.614 "is_configured": false, 00:13:49.614 "data_offset": 0, 00:13:49.614 "data_size": 0 00:13:49.614 }, 00:13:49.614 { 00:13:49.614 "name": "BaseBdev3", 00:13:49.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.615 "is_configured": false, 00:13:49.615 "data_offset": 0, 00:13:49.615 "data_size": 0 00:13:49.615 }, 00:13:49.615 { 00:13:49.615 "name": "BaseBdev4", 00:13:49.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.615 "is_configured": false, 00:13:49.615 "data_offset": 0, 00:13:49.615 "data_size": 0 00:13:49.615 } 00:13:49.615 ] 00:13:49.615 }' 00:13:49.615 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.615 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.872 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.130 [2024-07-12 15:02:15.892272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.130 BaseBdev2 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.130 15:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.388 15:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.647 [ 00:13:50.647 { 00:13:50.647 "name": "BaseBdev2", 00:13:50.647 "aliases": [ 00:13:50.647 "ba07d4ea-405f-11ef-b2a4-e9dca065e82e" 00:13:50.647 ], 00:13:50.647 "product_name": "Malloc disk", 00:13:50.647 "block_size": 512, 00:13:50.647 "num_blocks": 65536, 00:13:50.647 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:50.647 "assigned_rate_limits": { 00:13:50.647 "rw_ios_per_sec": 0, 00:13:50.647 "rw_mbytes_per_sec": 0, 00:13:50.647 "r_mbytes_per_sec": 0, 00:13:50.647 "w_mbytes_per_sec": 0 00:13:50.647 }, 00:13:50.647 "claimed": true, 00:13:50.647 "claim_type": "exclusive_write", 00:13:50.647 "zoned": false, 00:13:50.647 "supported_io_types": { 00:13:50.647 "read": true, 00:13:50.647 "write": true, 00:13:50.647 "unmap": true, 00:13:50.647 "flush": true, 00:13:50.647 "reset": true, 00:13:50.647 "nvme_admin": false, 00:13:50.647 "nvme_io": false, 00:13:50.647 "nvme_io_md": false, 00:13:50.647 "write_zeroes": true, 00:13:50.647 "zcopy": true, 00:13:50.647 "get_zone_info": false, 00:13:50.647 "zone_management": false, 00:13:50.647 "zone_append": false, 00:13:50.647 "compare": false, 00:13:50.647 "compare_and_write": false, 00:13:50.647 "abort": true, 00:13:50.647 "seek_hole": false, 00:13:50.647 "seek_data": false, 00:13:50.647 "copy": true, 00:13:50.647 "nvme_iov_md": false 00:13:50.647 }, 00:13:50.647 "memory_domains": [ 00:13:50.647 { 00:13:50.647 "dma_device_id": "system", 00:13:50.647 "dma_device_type": 1 00:13:50.647 }, 00:13:50.647 { 00:13:50.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.647 "dma_device_type": 2 00:13:50.647 } 00:13:50.647 ], 00:13:50.647 "driver_specific": {} 00:13:50.647 } 00:13:50.647 ] 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.647 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.905 15:02:16 
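The BaseBdev2 dump above also shows what claiming looks like from the bdev layer: once bdev_raid_create takes the device, its descriptor reports "claimed": true with "claim_type": "exclusive_write", meaning the raid module holds the write claim on it. A one-liner in the same style as the trace can confirm that for any member; this specific check is illustrative and not part of bdev_raid.sh.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_get_bdevs -b BaseBdev2 | jq -e '.[0].claimed and .[0].claim_type == "exclusive_write"'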
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.905 "name": "Existed_Raid", 00:13:50.905 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:50.905 "strip_size_kb": 64, 00:13:50.905 "state": "configuring", 00:13:50.905 "raid_level": "raid0", 00:13:50.905 "superblock": true, 00:13:50.905 "num_base_bdevs": 4, 00:13:50.905 "num_base_bdevs_discovered": 2, 00:13:50.905 "num_base_bdevs_operational": 4, 00:13:50.905 "base_bdevs_list": [ 00:13:50.905 { 00:13:50.905 "name": "BaseBdev1", 00:13:50.905 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:50.905 "is_configured": true, 00:13:50.905 "data_offset": 2048, 00:13:50.905 "data_size": 63488 00:13:50.905 }, 00:13:50.905 { 00:13:50.905 "name": "BaseBdev2", 00:13:50.905 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:50.905 "is_configured": true, 00:13:50.905 "data_offset": 2048, 00:13:50.905 "data_size": 63488 00:13:50.905 }, 00:13:50.905 { 00:13:50.905 "name": "BaseBdev3", 00:13:50.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.905 "is_configured": false, 00:13:50.905 "data_offset": 0, 00:13:50.905 "data_size": 0 00:13:50.905 }, 00:13:50.905 { 00:13:50.905 "name": "BaseBdev4", 00:13:50.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.905 "is_configured": false, 00:13:50.905 "data_offset": 0, 00:13:50.905 "data_size": 0 00:13:50.905 } 00:13:50.905 ] 00:13:50.905 }' 00:13:50.905 15:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.905 15:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.487 [2024-07-12 15:02:17.224357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.487 BaseBdev3 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.487 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.745 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.004 [ 00:13:52.004 { 00:13:52.004 "name": "BaseBdev3", 00:13:52.004 "aliases": [ 00:13:52.004 "bad3187a-405f-11ef-b2a4-e9dca065e82e" 00:13:52.004 ], 00:13:52.004 "product_name": "Malloc disk", 00:13:52.004 "block_size": 512, 00:13:52.004 "num_blocks": 65536, 00:13:52.004 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:52.004 "assigned_rate_limits": { 00:13:52.004 "rw_ios_per_sec": 0, 00:13:52.004 "rw_mbytes_per_sec": 0, 00:13:52.004 "r_mbytes_per_sec": 0, 00:13:52.004 
"w_mbytes_per_sec": 0 00:13:52.004 }, 00:13:52.004 "claimed": true, 00:13:52.004 "claim_type": "exclusive_write", 00:13:52.004 "zoned": false, 00:13:52.004 "supported_io_types": { 00:13:52.004 "read": true, 00:13:52.004 "write": true, 00:13:52.004 "unmap": true, 00:13:52.004 "flush": true, 00:13:52.004 "reset": true, 00:13:52.004 "nvme_admin": false, 00:13:52.004 "nvme_io": false, 00:13:52.004 "nvme_io_md": false, 00:13:52.004 "write_zeroes": true, 00:13:52.004 "zcopy": true, 00:13:52.004 "get_zone_info": false, 00:13:52.004 "zone_management": false, 00:13:52.004 "zone_append": false, 00:13:52.004 "compare": false, 00:13:52.004 "compare_and_write": false, 00:13:52.004 "abort": true, 00:13:52.004 "seek_hole": false, 00:13:52.004 "seek_data": false, 00:13:52.004 "copy": true, 00:13:52.004 "nvme_iov_md": false 00:13:52.004 }, 00:13:52.004 "memory_domains": [ 00:13:52.004 { 00:13:52.004 "dma_device_id": "system", 00:13:52.004 "dma_device_type": 1 00:13:52.004 }, 00:13:52.004 { 00:13:52.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.004 "dma_device_type": 2 00:13:52.004 } 00:13:52.004 ], 00:13:52.004 "driver_specific": {} 00:13:52.004 } 00:13:52.004 ] 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.004 15:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.263 15:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.263 "name": "Existed_Raid", 00:13:52.263 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:52.263 "strip_size_kb": 64, 00:13:52.263 "state": "configuring", 00:13:52.263 "raid_level": "raid0", 00:13:52.263 "superblock": true, 00:13:52.263 "num_base_bdevs": 4, 00:13:52.263 "num_base_bdevs_discovered": 3, 00:13:52.263 "num_base_bdevs_operational": 4, 00:13:52.263 "base_bdevs_list": [ 00:13:52.263 { 00:13:52.263 "name": 
"BaseBdev1", 00:13:52.263 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:52.263 "is_configured": true, 00:13:52.263 "data_offset": 2048, 00:13:52.263 "data_size": 63488 00:13:52.263 }, 00:13:52.263 { 00:13:52.263 "name": "BaseBdev2", 00:13:52.263 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:52.263 "is_configured": true, 00:13:52.263 "data_offset": 2048, 00:13:52.263 "data_size": 63488 00:13:52.263 }, 00:13:52.263 { 00:13:52.263 "name": "BaseBdev3", 00:13:52.263 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:52.263 "is_configured": true, 00:13:52.263 "data_offset": 2048, 00:13:52.263 "data_size": 63488 00:13:52.263 }, 00:13:52.263 { 00:13:52.263 "name": "BaseBdev4", 00:13:52.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.263 "is_configured": false, 00:13:52.263 "data_offset": 0, 00:13:52.263 "data_size": 0 00:13:52.263 } 00:13:52.263 ] 00:13:52.263 }' 00:13:52.263 15:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.263 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.521 15:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.780 [2024-07-12 15:02:18.568457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.780 [2024-07-12 15:02:18.568544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e05c6634a00 00:13:52.780 [2024-07-12 15:02:18.568552] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:52.780 [2024-07-12 15:02:18.568575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e05c6697e20 00:13:52.780 [2024-07-12 15:02:18.568631] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e05c6634a00 00:13:52.780 [2024-07-12 15:02:18.568636] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e05c6634a00 00:13:52.780 [2024-07-12 15:02:18.568659] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.780 BaseBdev4 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:52.780 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.039 15:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:53.297 [ 00:13:53.298 { 00:13:53.298 "name": "BaseBdev4", 00:13:53.298 "aliases": [ 00:13:53.298 "bba02f9c-405f-11ef-b2a4-e9dca065e82e" 00:13:53.298 ], 00:13:53.298 "product_name": "Malloc disk", 00:13:53.298 "block_size": 512, 
00:13:53.298 "num_blocks": 65536, 00:13:53.298 "uuid": "bba02f9c-405f-11ef-b2a4-e9dca065e82e", 00:13:53.298 "assigned_rate_limits": { 00:13:53.298 "rw_ios_per_sec": 0, 00:13:53.298 "rw_mbytes_per_sec": 0, 00:13:53.298 "r_mbytes_per_sec": 0, 00:13:53.298 "w_mbytes_per_sec": 0 00:13:53.298 }, 00:13:53.298 "claimed": true, 00:13:53.298 "claim_type": "exclusive_write", 00:13:53.298 "zoned": false, 00:13:53.298 "supported_io_types": { 00:13:53.298 "read": true, 00:13:53.298 "write": true, 00:13:53.298 "unmap": true, 00:13:53.298 "flush": true, 00:13:53.298 "reset": true, 00:13:53.298 "nvme_admin": false, 00:13:53.298 "nvme_io": false, 00:13:53.298 "nvme_io_md": false, 00:13:53.298 "write_zeroes": true, 00:13:53.298 "zcopy": true, 00:13:53.298 "get_zone_info": false, 00:13:53.298 "zone_management": false, 00:13:53.298 "zone_append": false, 00:13:53.298 "compare": false, 00:13:53.298 "compare_and_write": false, 00:13:53.298 "abort": true, 00:13:53.298 "seek_hole": false, 00:13:53.298 "seek_data": false, 00:13:53.298 "copy": true, 00:13:53.298 "nvme_iov_md": false 00:13:53.298 }, 00:13:53.298 "memory_domains": [ 00:13:53.298 { 00:13:53.298 "dma_device_id": "system", 00:13:53.298 "dma_device_type": 1 00:13:53.298 }, 00:13:53.298 { 00:13:53.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.298 "dma_device_type": 2 00:13:53.298 } 00:13:53.298 ], 00:13:53.298 "driver_specific": {} 00:13:53.298 } 00:13:53.298 ] 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.298 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.556 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.556 "name": "Existed_Raid", 00:13:53.556 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:53.556 "strip_size_kb": 64, 00:13:53.556 "state": "online", 00:13:53.556 "raid_level": 
"raid0", 00:13:53.556 "superblock": true, 00:13:53.556 "num_base_bdevs": 4, 00:13:53.556 "num_base_bdevs_discovered": 4, 00:13:53.556 "num_base_bdevs_operational": 4, 00:13:53.556 "base_bdevs_list": [ 00:13:53.556 { 00:13:53.556 "name": "BaseBdev1", 00:13:53.556 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:53.556 "is_configured": true, 00:13:53.556 "data_offset": 2048, 00:13:53.556 "data_size": 63488 00:13:53.556 }, 00:13:53.556 { 00:13:53.556 "name": "BaseBdev2", 00:13:53.556 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:53.556 "is_configured": true, 00:13:53.556 "data_offset": 2048, 00:13:53.556 "data_size": 63488 00:13:53.556 }, 00:13:53.556 { 00:13:53.556 "name": "BaseBdev3", 00:13:53.556 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:53.556 "is_configured": true, 00:13:53.556 "data_offset": 2048, 00:13:53.556 "data_size": 63488 00:13:53.556 }, 00:13:53.556 { 00:13:53.556 "name": "BaseBdev4", 00:13:53.557 "uuid": "bba02f9c-405f-11ef-b2a4-e9dca065e82e", 00:13:53.557 "is_configured": true, 00:13:53.557 "data_offset": 2048, 00:13:53.557 "data_size": 63488 00:13:53.557 } 00:13:53.557 ] 00:13:53.557 }' 00:13:53.557 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.557 15:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:53.816 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:54.074 [2024-07-12 15:02:19.840407] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.074 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:54.074 "name": "Existed_Raid", 00:13:54.074 "aliases": [ 00:13:54.074 "b987a68d-405f-11ef-b2a4-e9dca065e82e" 00:13:54.074 ], 00:13:54.074 "product_name": "Raid Volume", 00:13:54.074 "block_size": 512, 00:13:54.074 "num_blocks": 253952, 00:13:54.074 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:54.074 "assigned_rate_limits": { 00:13:54.074 "rw_ios_per_sec": 0, 00:13:54.074 "rw_mbytes_per_sec": 0, 00:13:54.074 "r_mbytes_per_sec": 0, 00:13:54.074 "w_mbytes_per_sec": 0 00:13:54.074 }, 00:13:54.074 "claimed": false, 00:13:54.074 "zoned": false, 00:13:54.074 "supported_io_types": { 00:13:54.074 "read": true, 00:13:54.074 "write": true, 00:13:54.074 "unmap": true, 00:13:54.074 "flush": true, 00:13:54.074 "reset": true, 00:13:54.074 "nvme_admin": false, 00:13:54.074 "nvme_io": false, 00:13:54.074 "nvme_io_md": false, 00:13:54.074 "write_zeroes": true, 00:13:54.074 "zcopy": false, 00:13:54.074 "get_zone_info": false, 00:13:54.074 "zone_management": false, 00:13:54.074 
"zone_append": false, 00:13:54.074 "compare": false, 00:13:54.074 "compare_and_write": false, 00:13:54.074 "abort": false, 00:13:54.074 "seek_hole": false, 00:13:54.074 "seek_data": false, 00:13:54.074 "copy": false, 00:13:54.074 "nvme_iov_md": false 00:13:54.074 }, 00:13:54.074 "memory_domains": [ 00:13:54.074 { 00:13:54.074 "dma_device_id": "system", 00:13:54.074 "dma_device_type": 1 00:13:54.074 }, 00:13:54.074 { 00:13:54.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.074 "dma_device_type": 2 00:13:54.074 }, 00:13:54.074 { 00:13:54.074 "dma_device_id": "system", 00:13:54.074 "dma_device_type": 1 00:13:54.074 }, 00:13:54.074 { 00:13:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.075 "dma_device_type": 2 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "dma_device_id": "system", 00:13:54.075 "dma_device_type": 1 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.075 "dma_device_type": 2 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "dma_device_id": "system", 00:13:54.075 "dma_device_type": 1 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.075 "dma_device_type": 2 00:13:54.075 } 00:13:54.075 ], 00:13:54.075 "driver_specific": { 00:13:54.075 "raid": { 00:13:54.075 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:54.075 "strip_size_kb": 64, 00:13:54.075 "state": "online", 00:13:54.075 "raid_level": "raid0", 00:13:54.075 "superblock": true, 00:13:54.075 "num_base_bdevs": 4, 00:13:54.075 "num_base_bdevs_discovered": 4, 00:13:54.075 "num_base_bdevs_operational": 4, 00:13:54.075 "base_bdevs_list": [ 00:13:54.075 { 00:13:54.075 "name": "BaseBdev1", 00:13:54.075 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:54.075 "is_configured": true, 00:13:54.075 "data_offset": 2048, 00:13:54.075 "data_size": 63488 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "name": "BaseBdev2", 00:13:54.075 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:54.075 "is_configured": true, 00:13:54.075 "data_offset": 2048, 00:13:54.075 "data_size": 63488 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "name": "BaseBdev3", 00:13:54.075 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:54.075 "is_configured": true, 00:13:54.075 "data_offset": 2048, 00:13:54.075 "data_size": 63488 00:13:54.075 }, 00:13:54.075 { 00:13:54.075 "name": "BaseBdev4", 00:13:54.075 "uuid": "bba02f9c-405f-11ef-b2a4-e9dca065e82e", 00:13:54.075 "is_configured": true, 00:13:54.075 "data_offset": 2048, 00:13:54.075 "data_size": 63488 00:13:54.075 } 00:13:54.075 ] 00:13:54.075 } 00:13:54.075 } 00:13:54.075 }' 00:13:54.075 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.075 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:54.075 BaseBdev2 00:13:54.075 BaseBdev3 00:13:54.075 BaseBdev4' 00:13:54.075 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.075 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.075 15:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:54.333 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:54.333 "name": "BaseBdev1", 00:13:54.333 "aliases": [ 00:13:54.333 
"b88f34ba-405f-11ef-b2a4-e9dca065e82e" 00:13:54.333 ], 00:13:54.333 "product_name": "Malloc disk", 00:13:54.333 "block_size": 512, 00:13:54.333 "num_blocks": 65536, 00:13:54.333 "uuid": "b88f34ba-405f-11ef-b2a4-e9dca065e82e", 00:13:54.333 "assigned_rate_limits": { 00:13:54.333 "rw_ios_per_sec": 0, 00:13:54.333 "rw_mbytes_per_sec": 0, 00:13:54.333 "r_mbytes_per_sec": 0, 00:13:54.333 "w_mbytes_per_sec": 0 00:13:54.334 }, 00:13:54.334 "claimed": true, 00:13:54.334 "claim_type": "exclusive_write", 00:13:54.334 "zoned": false, 00:13:54.334 "supported_io_types": { 00:13:54.334 "read": true, 00:13:54.334 "write": true, 00:13:54.334 "unmap": true, 00:13:54.334 "flush": true, 00:13:54.334 "reset": true, 00:13:54.334 "nvme_admin": false, 00:13:54.334 "nvme_io": false, 00:13:54.334 "nvme_io_md": false, 00:13:54.334 "write_zeroes": true, 00:13:54.334 "zcopy": true, 00:13:54.334 "get_zone_info": false, 00:13:54.334 "zone_management": false, 00:13:54.334 "zone_append": false, 00:13:54.334 "compare": false, 00:13:54.334 "compare_and_write": false, 00:13:54.334 "abort": true, 00:13:54.334 "seek_hole": false, 00:13:54.334 "seek_data": false, 00:13:54.334 "copy": true, 00:13:54.334 "nvme_iov_md": false 00:13:54.334 }, 00:13:54.334 "memory_domains": [ 00:13:54.334 { 00:13:54.334 "dma_device_id": "system", 00:13:54.334 "dma_device_type": 1 00:13:54.334 }, 00:13:54.334 { 00:13:54.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.334 "dma_device_type": 2 00:13:54.334 } 00:13:54.334 ], 00:13:54.334 "driver_specific": {} 00:13:54.334 }' 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.334 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.592 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:54.592 "name": "BaseBdev2", 00:13:54.592 "aliases": [ 00:13:54.592 "ba07d4ea-405f-11ef-b2a4-e9dca065e82e" 00:13:54.592 ], 00:13:54.592 "product_name": "Malloc disk", 00:13:54.592 "block_size": 512, 00:13:54.592 
"num_blocks": 65536, 00:13:54.592 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:54.592 "assigned_rate_limits": { 00:13:54.592 "rw_ios_per_sec": 0, 00:13:54.592 "rw_mbytes_per_sec": 0, 00:13:54.592 "r_mbytes_per_sec": 0, 00:13:54.592 "w_mbytes_per_sec": 0 00:13:54.592 }, 00:13:54.592 "claimed": true, 00:13:54.592 "claim_type": "exclusive_write", 00:13:54.593 "zoned": false, 00:13:54.593 "supported_io_types": { 00:13:54.593 "read": true, 00:13:54.593 "write": true, 00:13:54.593 "unmap": true, 00:13:54.593 "flush": true, 00:13:54.593 "reset": true, 00:13:54.593 "nvme_admin": false, 00:13:54.593 "nvme_io": false, 00:13:54.593 "nvme_io_md": false, 00:13:54.593 "write_zeroes": true, 00:13:54.593 "zcopy": true, 00:13:54.593 "get_zone_info": false, 00:13:54.593 "zone_management": false, 00:13:54.593 "zone_append": false, 00:13:54.593 "compare": false, 00:13:54.593 "compare_and_write": false, 00:13:54.593 "abort": true, 00:13:54.593 "seek_hole": false, 00:13:54.593 "seek_data": false, 00:13:54.593 "copy": true, 00:13:54.593 "nvme_iov_md": false 00:13:54.593 }, 00:13:54.593 "memory_domains": [ 00:13:54.593 { 00:13:54.593 "dma_device_id": "system", 00:13:54.593 "dma_device_type": 1 00:13:54.593 }, 00:13:54.593 { 00:13:54.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.593 "dma_device_type": 2 00:13:54.593 } 00:13:54.593 ], 00:13:54.593 "driver_specific": {} 00:13:54.593 }' 00:13:54.593 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.852 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:55.110 "name": "BaseBdev3", 00:13:55.110 "aliases": [ 00:13:55.110 "bad3187a-405f-11ef-b2a4-e9dca065e82e" 00:13:55.110 ], 00:13:55.110 "product_name": "Malloc disk", 00:13:55.110 "block_size": 512, 00:13:55.110 "num_blocks": 65536, 00:13:55.110 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:55.110 "assigned_rate_limits": { 00:13:55.110 "rw_ios_per_sec": 
0, 00:13:55.110 "rw_mbytes_per_sec": 0, 00:13:55.110 "r_mbytes_per_sec": 0, 00:13:55.110 "w_mbytes_per_sec": 0 00:13:55.110 }, 00:13:55.110 "claimed": true, 00:13:55.110 "claim_type": "exclusive_write", 00:13:55.110 "zoned": false, 00:13:55.110 "supported_io_types": { 00:13:55.110 "read": true, 00:13:55.110 "write": true, 00:13:55.110 "unmap": true, 00:13:55.110 "flush": true, 00:13:55.110 "reset": true, 00:13:55.110 "nvme_admin": false, 00:13:55.110 "nvme_io": false, 00:13:55.110 "nvme_io_md": false, 00:13:55.110 "write_zeroes": true, 00:13:55.110 "zcopy": true, 00:13:55.110 "get_zone_info": false, 00:13:55.110 "zone_management": false, 00:13:55.110 "zone_append": false, 00:13:55.110 "compare": false, 00:13:55.110 "compare_and_write": false, 00:13:55.110 "abort": true, 00:13:55.110 "seek_hole": false, 00:13:55.110 "seek_data": false, 00:13:55.110 "copy": true, 00:13:55.110 "nvme_iov_md": false 00:13:55.110 }, 00:13:55.110 "memory_domains": [ 00:13:55.110 { 00:13:55.110 "dma_device_id": "system", 00:13:55.110 "dma_device_type": 1 00:13:55.110 }, 00:13:55.110 { 00:13:55.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.110 "dma_device_type": 2 00:13:55.110 } 00:13:55.110 ], 00:13:55.110 "driver_specific": {} 00:13:55.110 }' 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:55.110 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:55.368 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:55.368 "name": "BaseBdev4", 00:13:55.368 "aliases": [ 00:13:55.369 "bba02f9c-405f-11ef-b2a4-e9dca065e82e" 00:13:55.369 ], 00:13:55.369 "product_name": "Malloc disk", 00:13:55.369 "block_size": 512, 00:13:55.369 "num_blocks": 65536, 00:13:55.369 "uuid": "bba02f9c-405f-11ef-b2a4-e9dca065e82e", 00:13:55.369 "assigned_rate_limits": { 00:13:55.369 "rw_ios_per_sec": 0, 00:13:55.369 "rw_mbytes_per_sec": 0, 00:13:55.369 "r_mbytes_per_sec": 0, 00:13:55.369 "w_mbytes_per_sec": 0 00:13:55.369 }, 00:13:55.369 "claimed": 
true, 00:13:55.369 "claim_type": "exclusive_write", 00:13:55.369 "zoned": false, 00:13:55.369 "supported_io_types": { 00:13:55.369 "read": true, 00:13:55.369 "write": true, 00:13:55.369 "unmap": true, 00:13:55.369 "flush": true, 00:13:55.369 "reset": true, 00:13:55.369 "nvme_admin": false, 00:13:55.369 "nvme_io": false, 00:13:55.369 "nvme_io_md": false, 00:13:55.369 "write_zeroes": true, 00:13:55.369 "zcopy": true, 00:13:55.369 "get_zone_info": false, 00:13:55.369 "zone_management": false, 00:13:55.369 "zone_append": false, 00:13:55.369 "compare": false, 00:13:55.369 "compare_and_write": false, 00:13:55.369 "abort": true, 00:13:55.369 "seek_hole": false, 00:13:55.369 "seek_data": false, 00:13:55.369 "copy": true, 00:13:55.369 "nvme_iov_md": false 00:13:55.369 }, 00:13:55.369 "memory_domains": [ 00:13:55.369 { 00:13:55.369 "dma_device_id": "system", 00:13:55.369 "dma_device_type": 1 00:13:55.369 }, 00:13:55.369 { 00:13:55.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.369 "dma_device_type": 2 00:13:55.369 } 00:13:55.369 ], 00:13:55.369 "driver_specific": {} 00:13:55.369 }' 00:13:55.369 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.369 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.369 15:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:55.369 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:55.627 [2024-07-12 15:02:21.264444] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.627 [2024-07-12 15:02:21.264487] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.627 [2024-07-12 15:02:21.264529] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.627 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.887 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.887 "name": "Existed_Raid", 00:13:55.887 "uuid": "b987a68d-405f-11ef-b2a4-e9dca065e82e", 00:13:55.887 "strip_size_kb": 64, 00:13:55.887 "state": "offline", 00:13:55.887 "raid_level": "raid0", 00:13:55.887 "superblock": true, 00:13:55.887 "num_base_bdevs": 4, 00:13:55.887 "num_base_bdevs_discovered": 3, 00:13:55.887 "num_base_bdevs_operational": 3, 00:13:55.887 "base_bdevs_list": [ 00:13:55.887 { 00:13:55.887 "name": null, 00:13:55.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.887 "is_configured": false, 00:13:55.887 "data_offset": 2048, 00:13:55.887 "data_size": 63488 00:13:55.887 }, 00:13:55.887 { 00:13:55.887 "name": "BaseBdev2", 00:13:55.887 "uuid": "ba07d4ea-405f-11ef-b2a4-e9dca065e82e", 00:13:55.887 "is_configured": true, 00:13:55.887 "data_offset": 2048, 00:13:55.887 "data_size": 63488 00:13:55.887 }, 00:13:55.887 { 00:13:55.887 "name": "BaseBdev3", 00:13:55.887 "uuid": "bad3187a-405f-11ef-b2a4-e9dca065e82e", 00:13:55.887 "is_configured": true, 00:13:55.887 "data_offset": 2048, 00:13:55.887 "data_size": 63488 00:13:55.887 }, 00:13:55.887 { 00:13:55.887 "name": "BaseBdev4", 00:13:55.887 "uuid": "bba02f9c-405f-11ef-b2a4-e9dca065e82e", 00:13:55.887 "is_configured": true, 00:13:55.887 "data_offset": 2048, 00:13:55.887 "data_size": 63488 00:13:55.887 } 00:13:55.887 ] 00:13:55.887 }' 00:13:55.887 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.887 15:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.146 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:56.146 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:56.146 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.146 15:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:13:56.404 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:56.405 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.405 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:56.663 [2024-07-12 15:02:22.291131] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.663 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:56.663 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:56.663 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:56.663 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.922 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:56.922 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.922 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:57.185 [2024-07-12 15:02:22.827419] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.186 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:57.186 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:57.186 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.186 15:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:57.448 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:57.448 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.448 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:57.706 [2024-07-12 15:02:23.360457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:57.706 [2024-07-12 15:02:23.360495] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e05c6634a00 name Existed_Raid, state offline 00:13:57.706 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:57.706 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:57.706 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.706 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.965 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:57.965 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:57.965 
15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:57.965 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:57.965 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:57.965 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.223 BaseBdev2 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:58.223 15:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.509 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.768 [ 00:13:58.768 { 00:13:58.768 "name": "BaseBdev2", 00:13:58.768 "aliases": [ 00:13:58.768 "becb81fb-405f-11ef-b2a4-e9dca065e82e" 00:13:58.768 ], 00:13:58.768 "product_name": "Malloc disk", 00:13:58.768 "block_size": 512, 00:13:58.768 "num_blocks": 65536, 00:13:58.768 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:13:58.768 "assigned_rate_limits": { 00:13:58.768 "rw_ios_per_sec": 0, 00:13:58.768 "rw_mbytes_per_sec": 0, 00:13:58.768 "r_mbytes_per_sec": 0, 00:13:58.768 "w_mbytes_per_sec": 0 00:13:58.768 }, 00:13:58.768 "claimed": false, 00:13:58.768 "zoned": false, 00:13:58.768 "supported_io_types": { 00:13:58.768 "read": true, 00:13:58.768 "write": true, 00:13:58.768 "unmap": true, 00:13:58.768 "flush": true, 00:13:58.768 "reset": true, 00:13:58.768 "nvme_admin": false, 00:13:58.768 "nvme_io": false, 00:13:58.768 "nvme_io_md": false, 00:13:58.768 "write_zeroes": true, 00:13:58.768 "zcopy": true, 00:13:58.768 "get_zone_info": false, 00:13:58.768 "zone_management": false, 00:13:58.768 "zone_append": false, 00:13:58.768 "compare": false, 00:13:58.768 "compare_and_write": false, 00:13:58.768 "abort": true, 00:13:58.768 "seek_hole": false, 00:13:58.768 "seek_data": false, 00:13:58.768 "copy": true, 00:13:58.768 "nvme_iov_md": false 00:13:58.768 }, 00:13:58.768 "memory_domains": [ 00:13:58.768 { 00:13:58.768 "dma_device_id": "system", 00:13:58.768 "dma_device_type": 1 00:13:58.768 }, 00:13:58.768 { 00:13:58.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.768 "dma_device_type": 2 00:13:58.768 } 00:13:58.768 ], 00:13:58.768 "driver_specific": {} 00:13:58.768 } 00:13:58.768 ] 00:13:58.768 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:58.768 15:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:58.768 15:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
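The loop above re-creates each missing base bdev as a 32 MiB malloc disk with a 512-byte block size and then waits for the bdev layer to examine it. One iteration of that sequence, run by hand under the same assumptions about rpc.py and the RPC socket, would look like:

    # Re-create one base bdev and block until it has been examined and is visible.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000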
00:13:58.768 15:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.027 BaseBdev3 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.027 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.285 15:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:59.285 [ 00:13:59.285 { 00:13:59.285 "name": "BaseBdev3", 00:13:59.285 "aliases": [ 00:13:59.285 "bf395f76-405f-11ef-b2a4-e9dca065e82e" 00:13:59.285 ], 00:13:59.285 "product_name": "Malloc disk", 00:13:59.285 "block_size": 512, 00:13:59.285 "num_blocks": 65536, 00:13:59.285 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:13:59.285 "assigned_rate_limits": { 00:13:59.285 "rw_ios_per_sec": 0, 00:13:59.285 "rw_mbytes_per_sec": 0, 00:13:59.285 "r_mbytes_per_sec": 0, 00:13:59.285 "w_mbytes_per_sec": 0 00:13:59.285 }, 00:13:59.285 "claimed": false, 00:13:59.285 "zoned": false, 00:13:59.285 "supported_io_types": { 00:13:59.285 "read": true, 00:13:59.285 "write": true, 00:13:59.285 "unmap": true, 00:13:59.285 "flush": true, 00:13:59.285 "reset": true, 00:13:59.285 "nvme_admin": false, 00:13:59.285 "nvme_io": false, 00:13:59.285 "nvme_io_md": false, 00:13:59.285 "write_zeroes": true, 00:13:59.285 "zcopy": true, 00:13:59.285 "get_zone_info": false, 00:13:59.285 "zone_management": false, 00:13:59.285 "zone_append": false, 00:13:59.285 "compare": false, 00:13:59.285 "compare_and_write": false, 00:13:59.285 "abort": true, 00:13:59.285 "seek_hole": false, 00:13:59.285 "seek_data": false, 00:13:59.285 "copy": true, 00:13:59.285 "nvme_iov_md": false 00:13:59.285 }, 00:13:59.285 "memory_domains": [ 00:13:59.285 { 00:13:59.285 "dma_device_id": "system", 00:13:59.285 "dma_device_type": 1 00:13:59.285 }, 00:13:59.285 { 00:13:59.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.285 "dma_device_type": 2 00:13:59.285 } 00:13:59.285 ], 00:13:59.285 "driver_specific": {} 00:13:59.285 } 00:13:59.285 ] 00:13:59.544 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:59.544 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:59.544 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:59.544 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:59.802 BaseBdev4 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.802 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.061 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:00.061 [ 00:14:00.061 { 00:14:00.061 "name": "BaseBdev4", 00:14:00.061 "aliases": [ 00:14:00.061 "bfb2d719-405f-11ef-b2a4-e9dca065e82e" 00:14:00.061 ], 00:14:00.061 "product_name": "Malloc disk", 00:14:00.061 "block_size": 512, 00:14:00.061 "num_blocks": 65536, 00:14:00.061 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:00.061 "assigned_rate_limits": { 00:14:00.061 "rw_ios_per_sec": 0, 00:14:00.061 "rw_mbytes_per_sec": 0, 00:14:00.061 "r_mbytes_per_sec": 0, 00:14:00.061 "w_mbytes_per_sec": 0 00:14:00.061 }, 00:14:00.061 "claimed": false, 00:14:00.061 "zoned": false, 00:14:00.061 "supported_io_types": { 00:14:00.061 "read": true, 00:14:00.061 "write": true, 00:14:00.061 "unmap": true, 00:14:00.061 "flush": true, 00:14:00.061 "reset": true, 00:14:00.061 "nvme_admin": false, 00:14:00.061 "nvme_io": false, 00:14:00.061 "nvme_io_md": false, 00:14:00.061 "write_zeroes": true, 00:14:00.061 "zcopy": true, 00:14:00.061 "get_zone_info": false, 00:14:00.061 "zone_management": false, 00:14:00.061 "zone_append": false, 00:14:00.061 "compare": false, 00:14:00.061 "compare_and_write": false, 00:14:00.061 "abort": true, 00:14:00.061 "seek_hole": false, 00:14:00.061 "seek_data": false, 00:14:00.061 "copy": true, 00:14:00.061 "nvme_iov_md": false 00:14:00.061 }, 00:14:00.061 "memory_domains": [ 00:14:00.061 { 00:14:00.061 "dma_device_id": "system", 00:14:00.061 "dma_device_type": 1 00:14:00.061 }, 00:14:00.061 { 00:14:00.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.061 "dma_device_type": 2 00:14:00.061 } 00:14:00.061 ], 00:14:00.061 "driver_specific": {} 00:14:00.061 } 00:14:00.061 ] 00:14:00.061 15:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:00.061 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:00.061 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:00.061 15:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:00.319 [2024-07-12 15:02:26.097445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.320 [2024-07-12 15:02:26.097504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.320 [2024-07-12 15:02:26.097515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:14:00.320 [2024-07-12 15:02:26.098087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.320 [2024-07-12 15:02:26.098098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.320 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.578 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.578 "name": "Existed_Raid", 00:14:00.578 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:00.578 "strip_size_kb": 64, 00:14:00.578 "state": "configuring", 00:14:00.578 "raid_level": "raid0", 00:14:00.578 "superblock": true, 00:14:00.578 "num_base_bdevs": 4, 00:14:00.578 "num_base_bdevs_discovered": 3, 00:14:00.578 "num_base_bdevs_operational": 4, 00:14:00.578 "base_bdevs_list": [ 00:14:00.578 { 00:14:00.578 "name": "BaseBdev1", 00:14:00.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.578 "is_configured": false, 00:14:00.578 "data_offset": 0, 00:14:00.578 "data_size": 0 00:14:00.578 }, 00:14:00.578 { 00:14:00.578 "name": "BaseBdev2", 00:14:00.578 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:00.578 "is_configured": true, 00:14:00.578 "data_offset": 2048, 00:14:00.578 "data_size": 63488 00:14:00.578 }, 00:14:00.578 { 00:14:00.578 "name": "BaseBdev3", 00:14:00.578 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:00.578 "is_configured": true, 00:14:00.578 "data_offset": 2048, 00:14:00.578 "data_size": 63488 00:14:00.578 }, 00:14:00.578 { 00:14:00.578 "name": "BaseBdev4", 00:14:00.578 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:00.578 "is_configured": true, 00:14:00.578 "data_offset": 2048, 00:14:00.578 "data_size": 63488 00:14:00.578 } 00:14:00.578 ] 00:14:00.578 }' 00:14:00.578 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.578 15:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:01.146 [2024-07-12 15:02:26.937470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.146 15:02:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.404 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.404 "name": "Existed_Raid", 00:14:01.404 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:01.405 "strip_size_kb": 64, 00:14:01.405 "state": "configuring", 00:14:01.405 "raid_level": "raid0", 00:14:01.405 "superblock": true, 00:14:01.405 "num_base_bdevs": 4, 00:14:01.405 "num_base_bdevs_discovered": 2, 00:14:01.405 "num_base_bdevs_operational": 4, 00:14:01.405 "base_bdevs_list": [ 00:14:01.405 { 00:14:01.405 "name": "BaseBdev1", 00:14:01.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.405 "is_configured": false, 00:14:01.405 "data_offset": 0, 00:14:01.405 "data_size": 0 00:14:01.405 }, 00:14:01.405 { 00:14:01.405 "name": null, 00:14:01.405 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:01.405 "is_configured": false, 00:14:01.405 "data_offset": 2048, 00:14:01.405 "data_size": 63488 00:14:01.405 }, 00:14:01.405 { 00:14:01.405 "name": "BaseBdev3", 00:14:01.405 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:01.405 "is_configured": true, 00:14:01.405 "data_offset": 2048, 00:14:01.405 "data_size": 63488 00:14:01.405 }, 00:14:01.405 { 00:14:01.405 "name": "BaseBdev4", 00:14:01.405 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:01.405 "is_configured": true, 00:14:01.405 "data_offset": 2048, 00:14:01.405 "data_size": 63488 00:14:01.405 } 00:14:01.405 ] 00:14:01.405 }' 00:14:01.405 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.405 15:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.664 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
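After bdev_raid_remove_base_bdev the raid bdev stays in the configuring state with one fewer base bdev discovered, and the emptied slot keeps a null name until something is added back. The same RPC and jq filter the script uses can re-check this by hand, assuming the environment of this run:

    # Print the state and discovered/operational counts for Existed_Raid.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'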
00:14:01.664 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.923 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:01.923 15:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.181 [2024-07-12 15:02:27.989672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.181 BaseBdev1 00:14:02.181 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:02.181 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.439 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:02.698 [ 00:14:02.698 { 00:14:02.698 "name": "BaseBdev1", 00:14:02.698 "aliases": [ 00:14:02.698 "c13dc12b-405f-11ef-b2a4-e9dca065e82e" 00:14:02.698 ], 00:14:02.698 "product_name": "Malloc disk", 00:14:02.698 "block_size": 512, 00:14:02.698 "num_blocks": 65536, 00:14:02.698 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:02.698 "assigned_rate_limits": { 00:14:02.698 "rw_ios_per_sec": 0, 00:14:02.698 "rw_mbytes_per_sec": 0, 00:14:02.698 "r_mbytes_per_sec": 0, 00:14:02.698 "w_mbytes_per_sec": 0 00:14:02.698 }, 00:14:02.698 "claimed": true, 00:14:02.698 "claim_type": "exclusive_write", 00:14:02.698 "zoned": false, 00:14:02.698 "supported_io_types": { 00:14:02.698 "read": true, 00:14:02.698 "write": true, 00:14:02.698 "unmap": true, 00:14:02.698 "flush": true, 00:14:02.698 "reset": true, 00:14:02.698 "nvme_admin": false, 00:14:02.698 "nvme_io": false, 00:14:02.698 "nvme_io_md": false, 00:14:02.698 "write_zeroes": true, 00:14:02.699 "zcopy": true, 00:14:02.699 "get_zone_info": false, 00:14:02.699 "zone_management": false, 00:14:02.699 "zone_append": false, 00:14:02.699 "compare": false, 00:14:02.699 "compare_and_write": false, 00:14:02.699 "abort": true, 00:14:02.699 "seek_hole": false, 00:14:02.699 "seek_data": false, 00:14:02.699 "copy": true, 00:14:02.699 "nvme_iov_md": false 00:14:02.699 }, 00:14:02.699 "memory_domains": [ 00:14:02.699 { 00:14:02.699 "dma_device_id": "system", 00:14:02.699 "dma_device_type": 1 00:14:02.699 }, 00:14:02.699 { 00:14:02.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.699 "dma_device_type": 2 00:14:02.699 } 00:14:02.699 ], 00:14:02.699 "driver_specific": {} 00:14:02.699 } 00:14:02.699 ] 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.699 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.958 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.958 "name": "Existed_Raid", 00:14:02.958 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:02.958 "strip_size_kb": 64, 00:14:02.958 "state": "configuring", 00:14:02.958 "raid_level": "raid0", 00:14:02.958 "superblock": true, 00:14:02.958 "num_base_bdevs": 4, 00:14:02.958 "num_base_bdevs_discovered": 3, 00:14:02.958 "num_base_bdevs_operational": 4, 00:14:02.958 "base_bdevs_list": [ 00:14:02.958 { 00:14:02.958 "name": "BaseBdev1", 00:14:02.958 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:02.958 "is_configured": true, 00:14:02.958 "data_offset": 2048, 00:14:02.958 "data_size": 63488 00:14:02.958 }, 00:14:02.958 { 00:14:02.958 "name": null, 00:14:02.958 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:02.958 "is_configured": false, 00:14:02.958 "data_offset": 2048, 00:14:02.958 "data_size": 63488 00:14:02.958 }, 00:14:02.958 { 00:14:02.958 "name": "BaseBdev3", 00:14:02.958 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:02.958 "is_configured": true, 00:14:02.958 "data_offset": 2048, 00:14:02.958 "data_size": 63488 00:14:02.958 }, 00:14:02.958 { 00:14:02.958 "name": "BaseBdev4", 00:14:02.958 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:02.958 "is_configured": true, 00:14:02.958 "data_offset": 2048, 00:14:02.958 "data_size": 63488 00:14:02.958 } 00:14:02.958 ] 00:14:02.958 }' 00:14:02.958 15:02:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.958 15:02:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.525 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.525 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:03.525 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:03.525 15:02:29 
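Because the raid bdev still tracks BaseBdev1 by name, simply re-creating the malloc disk is enough for it to be claimed again, which flips slot 0 back to configured and brings the discovered count back up. A quick hand check, under the same rpc.py and socket assumptions as above:

    # Slot 0 should read true again once BaseBdev1 has been re-created and claimed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq '.[0].base_bdevs_list[0].is_configured'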
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:04.094 [2024-07-12 15:02:29.621783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.094 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.352 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.352 "name": "Existed_Raid", 00:14:04.352 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:04.352 "strip_size_kb": 64, 00:14:04.352 "state": "configuring", 00:14:04.352 "raid_level": "raid0", 00:14:04.352 "superblock": true, 00:14:04.352 "num_base_bdevs": 4, 00:14:04.352 "num_base_bdevs_discovered": 2, 00:14:04.352 "num_base_bdevs_operational": 4, 00:14:04.352 "base_bdevs_list": [ 00:14:04.352 { 00:14:04.352 "name": "BaseBdev1", 00:14:04.352 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:04.352 "is_configured": true, 00:14:04.352 "data_offset": 2048, 00:14:04.352 "data_size": 63488 00:14:04.352 }, 00:14:04.352 { 00:14:04.352 "name": null, 00:14:04.352 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:04.352 "is_configured": false, 00:14:04.352 "data_offset": 2048, 00:14:04.352 "data_size": 63488 00:14:04.352 }, 00:14:04.352 { 00:14:04.353 "name": null, 00:14:04.353 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:04.353 "is_configured": false, 00:14:04.353 "data_offset": 2048, 00:14:04.353 "data_size": 63488 00:14:04.353 }, 00:14:04.353 { 00:14:04.353 "name": "BaseBdev4", 00:14:04.353 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:04.353 "is_configured": true, 00:14:04.353 "data_offset": 2048, 00:14:04.353 "data_size": 63488 00:14:04.353 } 00:14:04.353 ] 00:14:04.353 }' 00:14:04.353 15:02:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.353 15:02:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.611 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.611 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:04.869 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:04.869 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:05.127 [2024-07-12 15:02:30.729847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.127 15:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.385 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.385 "name": "Existed_Raid", 00:14:05.385 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:05.385 "strip_size_kb": 64, 00:14:05.385 "state": "configuring", 00:14:05.385 "raid_level": "raid0", 00:14:05.385 "superblock": true, 00:14:05.385 "num_base_bdevs": 4, 00:14:05.385 "num_base_bdevs_discovered": 3, 00:14:05.385 "num_base_bdevs_operational": 4, 00:14:05.385 "base_bdevs_list": [ 00:14:05.385 { 00:14:05.385 "name": "BaseBdev1", 00:14:05.385 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:05.385 "is_configured": true, 00:14:05.385 "data_offset": 2048, 00:14:05.385 "data_size": 63488 00:14:05.385 }, 00:14:05.385 { 00:14:05.385 "name": null, 00:14:05.385 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:05.385 "is_configured": false, 00:14:05.385 "data_offset": 2048, 00:14:05.385 "data_size": 63488 00:14:05.385 }, 00:14:05.385 { 00:14:05.385 "name": "BaseBdev3", 00:14:05.385 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:05.385 "is_configured": true, 00:14:05.385 "data_offset": 2048, 00:14:05.385 "data_size": 63488 00:14:05.385 }, 00:14:05.385 { 00:14:05.385 "name": "BaseBdev4", 00:14:05.385 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:05.385 "is_configured": true, 00:14:05.385 "data_offset": 2048, 
00:14:05.385 "data_size": 63488 00:14:05.385 } 00:14:05.385 ] 00:14:05.385 }' 00:14:05.385 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.385 15:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.642 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.642 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.901 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:05.901 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:06.159 [2024-07-12 15:02:31.965885] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.417 15:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.675 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.675 "name": "Existed_Raid", 00:14:06.675 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:06.675 "strip_size_kb": 64, 00:14:06.675 "state": "configuring", 00:14:06.675 "raid_level": "raid0", 00:14:06.675 "superblock": true, 00:14:06.675 "num_base_bdevs": 4, 00:14:06.675 "num_base_bdevs_discovered": 2, 00:14:06.675 "num_base_bdevs_operational": 4, 00:14:06.675 "base_bdevs_list": [ 00:14:06.675 { 00:14:06.675 "name": null, 00:14:06.675 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:06.675 "is_configured": false, 00:14:06.675 "data_offset": 2048, 00:14:06.675 "data_size": 63488 00:14:06.675 }, 00:14:06.675 { 00:14:06.675 "name": null, 00:14:06.675 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:06.675 "is_configured": false, 00:14:06.675 "data_offset": 2048, 00:14:06.675 "data_size": 63488 00:14:06.675 }, 00:14:06.675 { 00:14:06.675 "name": "BaseBdev3", 00:14:06.675 "uuid": 
"bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:06.675 "is_configured": true, 00:14:06.675 "data_offset": 2048, 00:14:06.675 "data_size": 63488 00:14:06.675 }, 00:14:06.675 { 00:14:06.675 "name": "BaseBdev4", 00:14:06.675 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:06.675 "is_configured": true, 00:14:06.675 "data_offset": 2048, 00:14:06.675 "data_size": 63488 00:14:06.675 } 00:14:06.675 ] 00:14:06.675 }' 00:14:06.675 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.675 15:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.934 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.934 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.211 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:07.211 15:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:07.474 [2024-07-12 15:02:33.127885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.474 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.733 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.733 "name": "Existed_Raid", 00:14:07.733 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:07.733 "strip_size_kb": 64, 00:14:07.733 "state": "configuring", 00:14:07.733 "raid_level": "raid0", 00:14:07.733 "superblock": true, 00:14:07.733 "num_base_bdevs": 4, 00:14:07.733 "num_base_bdevs_discovered": 3, 00:14:07.733 "num_base_bdevs_operational": 4, 00:14:07.733 "base_bdevs_list": [ 00:14:07.733 { 00:14:07.733 "name": null, 00:14:07.733 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:07.733 "is_configured": false, 
00:14:07.733 "data_offset": 2048, 00:14:07.733 "data_size": 63488 00:14:07.733 }, 00:14:07.733 { 00:14:07.733 "name": "BaseBdev2", 00:14:07.733 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:07.733 "is_configured": true, 00:14:07.733 "data_offset": 2048, 00:14:07.733 "data_size": 63488 00:14:07.733 }, 00:14:07.733 { 00:14:07.733 "name": "BaseBdev3", 00:14:07.733 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:07.733 "is_configured": true, 00:14:07.733 "data_offset": 2048, 00:14:07.733 "data_size": 63488 00:14:07.733 }, 00:14:07.733 { 00:14:07.733 "name": "BaseBdev4", 00:14:07.733 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:07.733 "is_configured": true, 00:14:07.733 "data_offset": 2048, 00:14:07.733 "data_size": 63488 00:14:07.733 } 00:14:07.733 ] 00:14:07.733 }' 00:14:07.733 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.733 15:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.991 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.991 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.249 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:08.249 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.249 15:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:08.507 15:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c13dc12b-405f-11ef-b2a4-e9dca065e82e 00:14:08.765 [2024-07-12 15:02:34.488081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:08.765 [2024-07-12 15:02:34.488138] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e05c6634f00 00:14:08.765 [2024-07-12 15:02:34.488152] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:08.765 [2024-07-12 15:02:34.488174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e05c6697e20 00:14:08.765 [2024-07-12 15:02:34.488222] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e05c6634f00 00:14:08.765 [2024-07-12 15:02:34.488227] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e05c6634f00 00:14:08.765 [2024-07-12 15:02:34.488258] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.765 NewBaseBdev 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:08.765 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.023 15:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:09.281 [ 00:14:09.281 { 00:14:09.281 "name": "NewBaseBdev", 00:14:09.281 "aliases": [ 00:14:09.281 "c13dc12b-405f-11ef-b2a4-e9dca065e82e" 00:14:09.281 ], 00:14:09.281 "product_name": "Malloc disk", 00:14:09.281 "block_size": 512, 00:14:09.281 "num_blocks": 65536, 00:14:09.281 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:09.281 "assigned_rate_limits": { 00:14:09.281 "rw_ios_per_sec": 0, 00:14:09.281 "rw_mbytes_per_sec": 0, 00:14:09.281 "r_mbytes_per_sec": 0, 00:14:09.281 "w_mbytes_per_sec": 0 00:14:09.281 }, 00:14:09.281 "claimed": true, 00:14:09.281 "claim_type": "exclusive_write", 00:14:09.281 "zoned": false, 00:14:09.281 "supported_io_types": { 00:14:09.281 "read": true, 00:14:09.281 "write": true, 00:14:09.281 "unmap": true, 00:14:09.281 "flush": true, 00:14:09.281 "reset": true, 00:14:09.281 "nvme_admin": false, 00:14:09.281 "nvme_io": false, 00:14:09.281 "nvme_io_md": false, 00:14:09.281 "write_zeroes": true, 00:14:09.281 "zcopy": true, 00:14:09.281 "get_zone_info": false, 00:14:09.281 "zone_management": false, 00:14:09.281 "zone_append": false, 00:14:09.281 "compare": false, 00:14:09.281 "compare_and_write": false, 00:14:09.281 "abort": true, 00:14:09.281 "seek_hole": false, 00:14:09.281 "seek_data": false, 00:14:09.281 "copy": true, 00:14:09.281 "nvme_iov_md": false 00:14:09.281 }, 00:14:09.281 "memory_domains": [ 00:14:09.281 { 00:14:09.281 "dma_device_id": "system", 00:14:09.281 "dma_device_type": 1 00:14:09.281 }, 00:14:09.281 { 00:14:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.281 "dma_device_type": 2 00:14:09.281 } 00:14:09.281 ], 00:14:09.281 "driver_specific": {} 00:14:09.281 } 00:14:09.281 ] 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.281 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.538 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.538 "name": "Existed_Raid", 00:14:09.538 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:09.538 "strip_size_kb": 64, 00:14:09.538 "state": "online", 00:14:09.538 "raid_level": "raid0", 00:14:09.538 "superblock": true, 00:14:09.538 "num_base_bdevs": 4, 00:14:09.538 "num_base_bdevs_discovered": 4, 00:14:09.538 "num_base_bdevs_operational": 4, 00:14:09.538 "base_bdevs_list": [ 00:14:09.538 { 00:14:09.538 "name": "NewBaseBdev", 00:14:09.538 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:09.538 "is_configured": true, 00:14:09.538 "data_offset": 2048, 00:14:09.538 "data_size": 63488 00:14:09.538 }, 00:14:09.538 { 00:14:09.538 "name": "BaseBdev2", 00:14:09.538 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:09.538 "is_configured": true, 00:14:09.538 "data_offset": 2048, 00:14:09.538 "data_size": 63488 00:14:09.538 }, 00:14:09.538 { 00:14:09.538 "name": "BaseBdev3", 00:14:09.538 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:09.538 "is_configured": true, 00:14:09.538 "data_offset": 2048, 00:14:09.538 "data_size": 63488 00:14:09.538 }, 00:14:09.538 { 00:14:09.538 "name": "BaseBdev4", 00:14:09.538 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:09.538 "is_configured": true, 00:14:09.538 "data_offset": 2048, 00:14:09.538 "data_size": 63488 00:14:09.538 } 00:14:09.538 ] 00:14:09.538 }' 00:14:09.538 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.538 15:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:09.795 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:10.053 [2024-07-12 15:02:35.792043] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:10.053 "name": "Existed_Raid", 00:14:10.053 "aliases": [ 00:14:10.053 "c01d0b29-405f-11ef-b2a4-e9dca065e82e" 00:14:10.053 ], 00:14:10.053 "product_name": "Raid Volume", 00:14:10.053 "block_size": 512, 00:14:10.053 "num_blocks": 253952, 00:14:10.053 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "assigned_rate_limits": { 00:14:10.053 "rw_ios_per_sec": 0, 00:14:10.053 "rw_mbytes_per_sec": 0, 00:14:10.053 "r_mbytes_per_sec": 0, 00:14:10.053 "w_mbytes_per_sec": 0 00:14:10.053 }, 
00:14:10.053 "claimed": false, 00:14:10.053 "zoned": false, 00:14:10.053 "supported_io_types": { 00:14:10.053 "read": true, 00:14:10.053 "write": true, 00:14:10.053 "unmap": true, 00:14:10.053 "flush": true, 00:14:10.053 "reset": true, 00:14:10.053 "nvme_admin": false, 00:14:10.053 "nvme_io": false, 00:14:10.053 "nvme_io_md": false, 00:14:10.053 "write_zeroes": true, 00:14:10.053 "zcopy": false, 00:14:10.053 "get_zone_info": false, 00:14:10.053 "zone_management": false, 00:14:10.053 "zone_append": false, 00:14:10.053 "compare": false, 00:14:10.053 "compare_and_write": false, 00:14:10.053 "abort": false, 00:14:10.053 "seek_hole": false, 00:14:10.053 "seek_data": false, 00:14:10.053 "copy": false, 00:14:10.053 "nvme_iov_md": false 00:14:10.053 }, 00:14:10.053 "memory_domains": [ 00:14:10.053 { 00:14:10.053 "dma_device_id": "system", 00:14:10.053 "dma_device_type": 1 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.053 "dma_device_type": 2 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "system", 00:14:10.053 "dma_device_type": 1 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.053 "dma_device_type": 2 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "system", 00:14:10.053 "dma_device_type": 1 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.053 "dma_device_type": 2 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "system", 00:14:10.053 "dma_device_type": 1 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.053 "dma_device_type": 2 00:14:10.053 } 00:14:10.053 ], 00:14:10.053 "driver_specific": { 00:14:10.053 "raid": { 00:14:10.053 "uuid": "c01d0b29-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "strip_size_kb": 64, 00:14:10.053 "state": "online", 00:14:10.053 "raid_level": "raid0", 00:14:10.053 "superblock": true, 00:14:10.053 "num_base_bdevs": 4, 00:14:10.053 "num_base_bdevs_discovered": 4, 00:14:10.053 "num_base_bdevs_operational": 4, 00:14:10.053 "base_bdevs_list": [ 00:14:10.053 { 00:14:10.053 "name": "NewBaseBdev", 00:14:10.053 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "is_configured": true, 00:14:10.053 "data_offset": 2048, 00:14:10.053 "data_size": 63488 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "name": "BaseBdev2", 00:14:10.053 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "is_configured": true, 00:14:10.053 "data_offset": 2048, 00:14:10.053 "data_size": 63488 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "name": "BaseBdev3", 00:14:10.053 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "is_configured": true, 00:14:10.053 "data_offset": 2048, 00:14:10.053 "data_size": 63488 00:14:10.053 }, 00:14:10.053 { 00:14:10.053 "name": "BaseBdev4", 00:14:10.053 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:10.053 "is_configured": true, 00:14:10.053 "data_offset": 2048, 00:14:10.053 "data_size": 63488 00:14:10.053 } 00:14:10.053 ] 00:14:10.053 } 00:14:10.053 } 00:14:10.053 }' 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:10.053 BaseBdev2 00:14:10.053 BaseBdev3 00:14:10.053 BaseBdev4' 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:10.053 15:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.309 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.309 "name": "NewBaseBdev", 00:14:10.309 "aliases": [ 00:14:10.309 "c13dc12b-405f-11ef-b2a4-e9dca065e82e" 00:14:10.309 ], 00:14:10.309 "product_name": "Malloc disk", 00:14:10.309 "block_size": 512, 00:14:10.309 "num_blocks": 65536, 00:14:10.309 "uuid": "c13dc12b-405f-11ef-b2a4-e9dca065e82e", 00:14:10.309 "assigned_rate_limits": { 00:14:10.309 "rw_ios_per_sec": 0, 00:14:10.309 "rw_mbytes_per_sec": 0, 00:14:10.309 "r_mbytes_per_sec": 0, 00:14:10.309 "w_mbytes_per_sec": 0 00:14:10.309 }, 00:14:10.309 "claimed": true, 00:14:10.309 "claim_type": "exclusive_write", 00:14:10.309 "zoned": false, 00:14:10.309 "supported_io_types": { 00:14:10.309 "read": true, 00:14:10.309 "write": true, 00:14:10.309 "unmap": true, 00:14:10.309 "flush": true, 00:14:10.309 "reset": true, 00:14:10.309 "nvme_admin": false, 00:14:10.309 "nvme_io": false, 00:14:10.309 "nvme_io_md": false, 00:14:10.309 "write_zeroes": true, 00:14:10.309 "zcopy": true, 00:14:10.309 "get_zone_info": false, 00:14:10.309 "zone_management": false, 00:14:10.309 "zone_append": false, 00:14:10.309 "compare": false, 00:14:10.309 "compare_and_write": false, 00:14:10.309 "abort": true, 00:14:10.309 "seek_hole": false, 00:14:10.309 "seek_data": false, 00:14:10.309 "copy": true, 00:14:10.309 "nvme_iov_md": false 00:14:10.309 }, 00:14:10.309 "memory_domains": [ 00:14:10.309 { 00:14:10.309 "dma_device_id": "system", 00:14:10.309 "dma_device_type": 1 00:14:10.309 }, 00:14:10.309 { 00:14:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.309 "dma_device_type": 2 00:14:10.310 } 00:14:10.310 ], 00:14:10.310 "driver_specific": {} 00:14:10.310 }' 00:14:10.310 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.310 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.310 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.310 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.310 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:10.566 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.822 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.823 "name": "BaseBdev2", 00:14:10.823 "aliases": [ 00:14:10.823 "becb81fb-405f-11ef-b2a4-e9dca065e82e" 00:14:10.823 ], 00:14:10.823 "product_name": "Malloc disk", 00:14:10.823 "block_size": 512, 00:14:10.823 "num_blocks": 65536, 00:14:10.823 "uuid": "becb81fb-405f-11ef-b2a4-e9dca065e82e", 00:14:10.823 "assigned_rate_limits": { 00:14:10.823 "rw_ios_per_sec": 0, 00:14:10.823 "rw_mbytes_per_sec": 0, 00:14:10.823 "r_mbytes_per_sec": 0, 00:14:10.823 "w_mbytes_per_sec": 0 00:14:10.823 }, 00:14:10.823 "claimed": true, 00:14:10.823 "claim_type": "exclusive_write", 00:14:10.823 "zoned": false, 00:14:10.823 "supported_io_types": { 00:14:10.823 "read": true, 00:14:10.823 "write": true, 00:14:10.823 "unmap": true, 00:14:10.823 "flush": true, 00:14:10.823 "reset": true, 00:14:10.823 "nvme_admin": false, 00:14:10.823 "nvme_io": false, 00:14:10.823 "nvme_io_md": false, 00:14:10.823 "write_zeroes": true, 00:14:10.823 "zcopy": true, 00:14:10.823 "get_zone_info": false, 00:14:10.823 "zone_management": false, 00:14:10.823 "zone_append": false, 00:14:10.823 "compare": false, 00:14:10.823 "compare_and_write": false, 00:14:10.823 "abort": true, 00:14:10.823 "seek_hole": false, 00:14:10.823 "seek_data": false, 00:14:10.823 "copy": true, 00:14:10.823 "nvme_iov_md": false 00:14:10.823 }, 00:14:10.823 "memory_domains": [ 00:14:10.823 { 00:14:10.823 "dma_device_id": "system", 00:14:10.823 "dma_device_type": 1 00:14:10.823 }, 00:14:10.823 { 00:14:10.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.823 "dma_device_type": 2 00:14:10.823 } 00:14:10.823 ], 00:14:10.823 "driver_specific": {} 00:14:10.823 }' 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.823 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:10.823 15:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.081 "name": "BaseBdev3", 00:14:11.081 "aliases": [ 00:14:11.081 "bf395f76-405f-11ef-b2a4-e9dca065e82e" 00:14:11.081 ], 00:14:11.081 "product_name": "Malloc disk", 00:14:11.081 "block_size": 512, 00:14:11.081 "num_blocks": 65536, 00:14:11.081 "uuid": "bf395f76-405f-11ef-b2a4-e9dca065e82e", 00:14:11.081 "assigned_rate_limits": { 00:14:11.081 "rw_ios_per_sec": 0, 00:14:11.081 "rw_mbytes_per_sec": 0, 00:14:11.081 "r_mbytes_per_sec": 0, 00:14:11.081 "w_mbytes_per_sec": 0 00:14:11.081 }, 00:14:11.081 "claimed": true, 00:14:11.081 "claim_type": "exclusive_write", 00:14:11.081 "zoned": false, 00:14:11.081 "supported_io_types": { 00:14:11.081 "read": true, 00:14:11.081 "write": true, 00:14:11.081 "unmap": true, 00:14:11.081 "flush": true, 00:14:11.081 "reset": true, 00:14:11.081 "nvme_admin": false, 00:14:11.081 "nvme_io": false, 00:14:11.081 "nvme_io_md": false, 00:14:11.081 "write_zeroes": true, 00:14:11.081 "zcopy": true, 00:14:11.081 "get_zone_info": false, 00:14:11.081 "zone_management": false, 00:14:11.081 "zone_append": false, 00:14:11.081 "compare": false, 00:14:11.081 "compare_and_write": false, 00:14:11.081 "abort": true, 00:14:11.081 "seek_hole": false, 00:14:11.081 "seek_data": false, 00:14:11.081 "copy": true, 00:14:11.081 "nvme_iov_md": false 00:14:11.081 }, 00:14:11.081 "memory_domains": [ 00:14:11.081 { 00:14:11.081 "dma_device_id": "system", 00:14:11.081 "dma_device_type": 1 00:14:11.081 }, 00:14:11.081 { 00:14:11.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.081 "dma_device_type": 2 00:14:11.081 } 00:14:11.081 ], 00:14:11.081 "driver_specific": {} 00:14:11.081 }' 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:11.081 15:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.338 "name": "BaseBdev4", 00:14:11.338 "aliases": [ 00:14:11.338 "bfb2d719-405f-11ef-b2a4-e9dca065e82e" 00:14:11.338 ], 00:14:11.338 "product_name": "Malloc disk", 00:14:11.338 "block_size": 512, 00:14:11.338 "num_blocks": 65536, 00:14:11.338 "uuid": "bfb2d719-405f-11ef-b2a4-e9dca065e82e", 00:14:11.338 "assigned_rate_limits": { 00:14:11.338 "rw_ios_per_sec": 0, 00:14:11.338 "rw_mbytes_per_sec": 0, 00:14:11.338 "r_mbytes_per_sec": 0, 00:14:11.338 "w_mbytes_per_sec": 0 00:14:11.338 }, 00:14:11.338 "claimed": true, 00:14:11.338 "claim_type": "exclusive_write", 00:14:11.338 "zoned": false, 00:14:11.338 "supported_io_types": { 00:14:11.338 "read": true, 00:14:11.338 "write": true, 00:14:11.338 "unmap": true, 00:14:11.338 "flush": true, 00:14:11.338 "reset": true, 00:14:11.338 "nvme_admin": false, 00:14:11.338 "nvme_io": false, 00:14:11.338 "nvme_io_md": false, 00:14:11.338 "write_zeroes": true, 00:14:11.338 "zcopy": true, 00:14:11.338 "get_zone_info": false, 00:14:11.338 "zone_management": false, 00:14:11.338 "zone_append": false, 00:14:11.338 "compare": false, 00:14:11.338 "compare_and_write": false, 00:14:11.338 "abort": true, 00:14:11.338 "seek_hole": false, 00:14:11.338 "seek_data": false, 00:14:11.338 "copy": true, 00:14:11.338 "nvme_iov_md": false 00:14:11.338 }, 00:14:11.338 "memory_domains": [ 00:14:11.338 { 00:14:11.338 "dma_device_id": "system", 00:14:11.338 "dma_device_type": 1 00:14:11.338 }, 00:14:11.338 { 00:14:11.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.338 "dma_device_type": 2 00:14:11.338 } 00:14:11.338 ], 00:14:11.338 "driver_specific": {} 00:14:11.338 }' 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.338 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.596 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.596 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.853 [2024-07-12 15:02:37.424071] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.853 [2024-07-12 15:02:37.424093] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.853 [2024-07-12 15:02:37.424123] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.853 [2024-07-12 15:02:37.424138] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.853 [2024-07-12 15:02:37.424143] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e05c6634f00 name Existed_Raid, state offline 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59205 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59205 ']' 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59205 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59205 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:11.853 killing process with pid 59205 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59205' 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59205 00:14:11.853 [2024-07-12 15:02:37.451702] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59205 00:14:11.853 [2024-07-12 15:02:37.474498] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:11.853 00:14:11.853 real 0m27.028s 00:14:11.853 user 0m49.551s 00:14:11.853 sys 0m3.620s 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.853 15:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.853 ************************************ 00:14:11.853 END TEST raid_state_function_test_sb 00:14:11.853 ************************************ 00:14:12.111 15:02:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:12.111 15:02:37 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:12.111 15:02:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:12.111 15:02:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.111 15:02:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.111 ************************************ 00:14:12.111 START TEST raid_superblock_test 00:14:12.111 ************************************ 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:12.111 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=60019 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 60019 /var/tmp/spdk-raid.sock 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 60019 ']' 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:12.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.112 15:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.112 [2024-07-12 15:02:37.712388] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:14:12.112 [2024-07-12 15:02:37.712532] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:12.678 EAL: TSC is not safe to use in SMP mode 00:14:12.678 EAL: TSC is not invariant 00:14:12.678 [2024-07-12 15:02:38.228350] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.678 [2024-07-12 15:02:38.314571] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
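A minimal sketch of the launch pattern the raid_superblock_test setup above relies on, assuming the same repo layout and RPC socket path shown in the log (the polling loop is only a stand-in for the harness's waitforlisten helper):

  # start a standalone bdev service with raid debug logging on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # wait until the UNIX domain socket exists before issuing any rpc.py calls against it
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done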
00:14:12.678 [2024-07-12 15:02:38.316697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.678 [2024-07-12 15:02:38.317466] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.678 [2024-07-12 15:02:38.317486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.243 15:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:13.243 malloc1 00:14:13.515 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.775 [2024-07-12 15:02:39.345234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.775 [2024-07-12 15:02:39.345297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.775 [2024-07-12 15:02:39.345309] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88634780 00:14:13.775 [2024-07-12 15:02:39.345318] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.775 [2024-07-12 15:02:39.346194] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.775 [2024-07-12 15:02:39.346222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:13.775 pt1 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.775 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.775 15:02:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:14.034 malloc2 00:14:14.034 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.292 [2024-07-12 15:02:39.897275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.292 [2024-07-12 15:02:39.897328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.292 [2024-07-12 15:02:39.897340] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88634c80 00:14:14.292 [2024-07-12 15:02:39.897348] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.292 [2024-07-12 15:02:39.897981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.292 [2024-07-12 15:02:39.898006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.292 pt2 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.292 15:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:14.549 malloc3 00:14:14.549 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.806 [2024-07-12 15:02:40.437330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.806 [2024-07-12 15:02:40.437386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.806 [2024-07-12 15:02:40.437398] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88635180 00:14:14.806 [2024-07-12 15:02:40.437405] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.806 [2024-07-12 15:02:40.438072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.806 [2024-07-12 15:02:40.438098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.806 pt3 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 
00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.806 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:15.063 malloc4 00:14:15.063 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:15.321 [2024-07-12 15:02:40.961348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:15.321 [2024-07-12 15:02:40.961403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.321 [2024-07-12 15:02:40.961414] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88635680 00:14:15.321 [2024-07-12 15:02:40.961422] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.321 [2024-07-12 15:02:40.962050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.321 [2024-07-12 15:02:40.962074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:15.321 pt4 00:14:15.321 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:15.321 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:15.321 15:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:15.579 [2024-07-12 15:02:41.273376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.579 [2024-07-12 15:02:41.273943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.579 [2024-07-12 15:02:41.273965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:15.579 [2024-07-12 15:02:41.273977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:15.579 [2024-07-12 15:02:41.274030] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x50c88635900 00:14:15.579 [2024-07-12 15:02:41.274036] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:15.579 [2024-07-12 15:02:41.274070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50c88697e20 00:14:15.579 [2024-07-12 15:02:41.274146] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x50c88635900 00:14:15.579 [2024-07-12 15:02:41.274151] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x50c88635900 00:14:15.579 [2024-07-12 15:02:41.274179] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.579 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.836 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.836 "name": "raid_bdev1", 00:14:15.836 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:15.837 "strip_size_kb": 64, 00:14:15.837 "state": "online", 00:14:15.837 "raid_level": "raid0", 00:14:15.837 "superblock": true, 00:14:15.837 "num_base_bdevs": 4, 00:14:15.837 "num_base_bdevs_discovered": 4, 00:14:15.837 "num_base_bdevs_operational": 4, 00:14:15.837 "base_bdevs_list": [ 00:14:15.837 { 00:14:15.837 "name": "pt1", 00:14:15.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.837 "is_configured": true, 00:14:15.837 "data_offset": 2048, 00:14:15.837 "data_size": 63488 00:14:15.837 }, 00:14:15.837 { 00:14:15.837 "name": "pt2", 00:14:15.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.837 "is_configured": true, 00:14:15.837 "data_offset": 2048, 00:14:15.837 "data_size": 63488 00:14:15.837 }, 00:14:15.837 { 00:14:15.837 "name": "pt3", 00:14:15.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.837 "is_configured": true, 00:14:15.837 "data_offset": 2048, 00:14:15.837 "data_size": 63488 00:14:15.837 }, 00:14:15.837 { 00:14:15.837 "name": "pt4", 00:14:15.837 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.837 "is_configured": true, 00:14:15.837 "data_offset": 2048, 00:14:15.837 "data_size": 63488 00:14:15.837 } 00:14:15.837 ] 00:14:15.837 }' 00:14:15.837 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.837 15:02:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
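Condensed, the raid_bdev1 assembly traced above amounts to the following rpc.py sequence (commands as they appear in the log; the RPC shell variable is shorthand for this sketch only):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  for i in 1 2 3 4; do
      # 32 MiB malloc bdev with 512-byte blocks (65536 blocks), wrapped in a passthru bdev with a fixed test UUID
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # assemble the four passthru bdevs into a raid0 volume with a 64 KiB strip and an on-disk superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # the state check that follows boils down to selecting the named raid and reading its state field
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'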
00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:16.095 15:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:16.353 [2024-07-12 15:02:42.113445] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:16.353 "name": "raid_bdev1", 00:14:16.353 "aliases": [ 00:14:16.353 "c928b4d1-405f-11ef-b2a4-e9dca065e82e" 00:14:16.353 ], 00:14:16.353 "product_name": "Raid Volume", 00:14:16.353 "block_size": 512, 00:14:16.353 "num_blocks": 253952, 00:14:16.353 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:16.353 "assigned_rate_limits": { 00:14:16.353 "rw_ios_per_sec": 0, 00:14:16.353 "rw_mbytes_per_sec": 0, 00:14:16.353 "r_mbytes_per_sec": 0, 00:14:16.353 "w_mbytes_per_sec": 0 00:14:16.353 }, 00:14:16.353 "claimed": false, 00:14:16.353 "zoned": false, 00:14:16.353 "supported_io_types": { 00:14:16.353 "read": true, 00:14:16.353 "write": true, 00:14:16.353 "unmap": true, 00:14:16.353 "flush": true, 00:14:16.353 "reset": true, 00:14:16.353 "nvme_admin": false, 00:14:16.353 "nvme_io": false, 00:14:16.353 "nvme_io_md": false, 00:14:16.353 "write_zeroes": true, 00:14:16.353 "zcopy": false, 00:14:16.353 "get_zone_info": false, 00:14:16.353 "zone_management": false, 00:14:16.353 "zone_append": false, 00:14:16.353 "compare": false, 00:14:16.353 "compare_and_write": false, 00:14:16.353 "abort": false, 00:14:16.353 "seek_hole": false, 00:14:16.353 "seek_data": false, 00:14:16.353 "copy": false, 00:14:16.353 "nvme_iov_md": false 00:14:16.353 }, 00:14:16.353 "memory_domains": [ 00:14:16.353 { 00:14:16.353 "dma_device_id": "system", 00:14:16.353 "dma_device_type": 1 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.353 "dma_device_type": 2 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "system", 00:14:16.353 "dma_device_type": 1 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.353 "dma_device_type": 2 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "system", 00:14:16.353 "dma_device_type": 1 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.353 "dma_device_type": 2 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "system", 00:14:16.353 "dma_device_type": 1 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.353 "dma_device_type": 2 00:14:16.353 } 00:14:16.353 ], 00:14:16.353 "driver_specific": { 00:14:16.353 "raid": { 00:14:16.353 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:16.353 "strip_size_kb": 64, 00:14:16.353 "state": "online", 00:14:16.353 "raid_level": "raid0", 00:14:16.353 "superblock": true, 00:14:16.353 "num_base_bdevs": 4, 00:14:16.353 "num_base_bdevs_discovered": 4, 00:14:16.353 "num_base_bdevs_operational": 4, 00:14:16.353 "base_bdevs_list": [ 00:14:16.353 { 00:14:16.353 "name": "pt1", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.353 "is_configured": true, 00:14:16.353 "data_offset": 2048, 00:14:16.353 "data_size": 63488 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "name": "pt2", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.353 "is_configured": true, 00:14:16.353 "data_offset": 2048, 00:14:16.353 "data_size": 63488 00:14:16.353 }, 
00:14:16.353 { 00:14:16.353 "name": "pt3", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.353 "is_configured": true, 00:14:16.353 "data_offset": 2048, 00:14:16.353 "data_size": 63488 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "name": "pt4", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.353 "is_configured": true, 00:14:16.353 "data_offset": 2048, 00:14:16.353 "data_size": 63488 00:14:16.353 } 00:14:16.353 ] 00:14:16.353 } 00:14:16.353 } 00:14:16.353 }' 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:16.353 pt2 00:14:16.353 pt3 00:14:16.353 pt4' 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:16.353 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.611 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.611 "name": "pt1", 00:14:16.611 "aliases": [ 00:14:16.611 "00000000-0000-0000-0000-000000000001" 00:14:16.611 ], 00:14:16.611 "product_name": "passthru", 00:14:16.611 "block_size": 512, 00:14:16.611 "num_blocks": 65536, 00:14:16.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.611 "assigned_rate_limits": { 00:14:16.611 "rw_ios_per_sec": 0, 00:14:16.611 "rw_mbytes_per_sec": 0, 00:14:16.611 "r_mbytes_per_sec": 0, 00:14:16.611 "w_mbytes_per_sec": 0 00:14:16.611 }, 00:14:16.611 "claimed": true, 00:14:16.611 "claim_type": "exclusive_write", 00:14:16.611 "zoned": false, 00:14:16.611 "supported_io_types": { 00:14:16.611 "read": true, 00:14:16.611 "write": true, 00:14:16.611 "unmap": true, 00:14:16.611 "flush": true, 00:14:16.611 "reset": true, 00:14:16.611 "nvme_admin": false, 00:14:16.611 "nvme_io": false, 00:14:16.611 "nvme_io_md": false, 00:14:16.611 "write_zeroes": true, 00:14:16.611 "zcopy": true, 00:14:16.611 "get_zone_info": false, 00:14:16.611 "zone_management": false, 00:14:16.611 "zone_append": false, 00:14:16.611 "compare": false, 00:14:16.611 "compare_and_write": false, 00:14:16.612 "abort": true, 00:14:16.612 "seek_hole": false, 00:14:16.612 "seek_data": false, 00:14:16.612 "copy": true, 00:14:16.612 "nvme_iov_md": false 00:14:16.612 }, 00:14:16.612 "memory_domains": [ 00:14:16.612 { 00:14:16.612 "dma_device_id": "system", 00:14:16.612 "dma_device_type": 1 00:14:16.612 }, 00:14:16.612 { 00:14:16.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.612 "dma_device_type": 2 00:14:16.612 } 00:14:16.612 ], 00:14:16.612 "driver_specific": { 00:14:16.612 "passthru": { 00:14:16.612 "name": "pt1", 00:14:16.612 "base_bdev_name": "malloc1" 00:14:16.612 } 00:14:16.612 } 00:14:16.612 }' 00:14:16.612 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.612 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.612 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.612 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.870 15:02:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:16.870 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.128 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.128 "name": "pt2", 00:14:17.128 "aliases": [ 00:14:17.128 "00000000-0000-0000-0000-000000000002" 00:14:17.128 ], 00:14:17.128 "product_name": "passthru", 00:14:17.128 "block_size": 512, 00:14:17.128 "num_blocks": 65536, 00:14:17.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.128 "assigned_rate_limits": { 00:14:17.128 "rw_ios_per_sec": 0, 00:14:17.128 "rw_mbytes_per_sec": 0, 00:14:17.128 "r_mbytes_per_sec": 0, 00:14:17.128 "w_mbytes_per_sec": 0 00:14:17.128 }, 00:14:17.128 "claimed": true, 00:14:17.128 "claim_type": "exclusive_write", 00:14:17.128 "zoned": false, 00:14:17.128 "supported_io_types": { 00:14:17.128 "read": true, 00:14:17.128 "write": true, 00:14:17.128 "unmap": true, 00:14:17.128 "flush": true, 00:14:17.128 "reset": true, 00:14:17.128 "nvme_admin": false, 00:14:17.128 "nvme_io": false, 00:14:17.128 "nvme_io_md": false, 00:14:17.128 "write_zeroes": true, 00:14:17.128 "zcopy": true, 00:14:17.128 "get_zone_info": false, 00:14:17.128 "zone_management": false, 00:14:17.128 "zone_append": false, 00:14:17.128 "compare": false, 00:14:17.128 "compare_and_write": false, 00:14:17.128 "abort": true, 00:14:17.129 "seek_hole": false, 00:14:17.129 "seek_data": false, 00:14:17.129 "copy": true, 00:14:17.129 "nvme_iov_md": false 00:14:17.129 }, 00:14:17.129 "memory_domains": [ 00:14:17.129 { 00:14:17.129 "dma_device_id": "system", 00:14:17.129 "dma_device_type": 1 00:14:17.129 }, 00:14:17.129 { 00:14:17.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.129 "dma_device_type": 2 00:14:17.129 } 00:14:17.129 ], 00:14:17.129 "driver_specific": { 00:14:17.129 "passthru": { 00:14:17.129 "name": "pt2", 00:14:17.129 "base_bdev_name": "malloc2" 00:14:17.129 } 00:14:17.129 } 00:14:17.129 }' 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.129 15:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:17.387 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.387 "name": "pt3", 00:14:17.387 "aliases": [ 00:14:17.387 "00000000-0000-0000-0000-000000000003" 00:14:17.387 ], 00:14:17.387 "product_name": "passthru", 00:14:17.387 "block_size": 512, 00:14:17.387 "num_blocks": 65536, 00:14:17.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.387 "assigned_rate_limits": { 00:14:17.387 "rw_ios_per_sec": 0, 00:14:17.387 "rw_mbytes_per_sec": 0, 00:14:17.387 "r_mbytes_per_sec": 0, 00:14:17.387 "w_mbytes_per_sec": 0 00:14:17.387 }, 00:14:17.387 "claimed": true, 00:14:17.387 "claim_type": "exclusive_write", 00:14:17.387 "zoned": false, 00:14:17.387 "supported_io_types": { 00:14:17.387 "read": true, 00:14:17.387 "write": true, 00:14:17.387 "unmap": true, 00:14:17.387 "flush": true, 00:14:17.387 "reset": true, 00:14:17.387 "nvme_admin": false, 00:14:17.387 "nvme_io": false, 00:14:17.387 "nvme_io_md": false, 00:14:17.387 "write_zeroes": true, 00:14:17.387 "zcopy": true, 00:14:17.387 "get_zone_info": false, 00:14:17.387 "zone_management": false, 00:14:17.387 "zone_append": false, 00:14:17.387 "compare": false, 00:14:17.387 "compare_and_write": false, 00:14:17.387 "abort": true, 00:14:17.387 "seek_hole": false, 00:14:17.387 "seek_data": false, 00:14:17.387 "copy": true, 00:14:17.387 "nvme_iov_md": false 00:14:17.387 }, 00:14:17.387 "memory_domains": [ 00:14:17.387 { 00:14:17.387 "dma_device_id": "system", 00:14:17.387 "dma_device_type": 1 00:14:17.387 }, 00:14:17.387 { 00:14:17.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.387 "dma_device_type": 2 00:14:17.387 } 00:14:17.387 ], 00:14:17.387 "driver_specific": { 00:14:17.387 "passthru": { 00:14:17.387 "name": "pt3", 00:14:17.388 "base_bdev_name": "malloc3" 00:14:17.388 } 00:14:17.388 } 00:14:17.388 }' 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
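For reference, the @204-@208 checks traced above boil down to a short RPC-plus-jq loop over the base bdevs. The sketch below is illustrative rather than a copy of bdev_raid.sh: it assumes a running SPDK target on /var/tmp/spdk-raid.sock with the pt1..pt4 passthru bdevs present, and the check_pt_props helper name is invented for the example.

  # Sketch of the per-base-bdev property checks (block_size / md_size / md_interleave / dif_type).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  check_pt_props() {   # illustrative helper, not part of bdev_raid.sh
      local info
      info=$(rpc bdev_get_bdevs -b "$1" | jq '.[]')
      [[ $(jq .block_size <<< "$info") == 512 ]]       # passthru inherits the malloc bdev's 512-byte blocks
      [[ $(jq .md_size <<< "$info") == null ]]         # no separate metadata area
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type <<< "$info") == null ]]        # no DIF protection configured
  }

  for name in pt1 pt2 pt3 pt4; do check_pt_props "$name"; done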
00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:17.388 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.646 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.646 "name": "pt4", 00:14:17.646 "aliases": [ 00:14:17.646 "00000000-0000-0000-0000-000000000004" 00:14:17.646 ], 00:14:17.646 "product_name": "passthru", 00:14:17.646 "block_size": 512, 00:14:17.646 "num_blocks": 65536, 00:14:17.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.646 "assigned_rate_limits": { 00:14:17.646 "rw_ios_per_sec": 0, 00:14:17.646 "rw_mbytes_per_sec": 0, 00:14:17.646 "r_mbytes_per_sec": 0, 00:14:17.646 "w_mbytes_per_sec": 0 00:14:17.646 }, 00:14:17.646 "claimed": true, 00:14:17.646 "claim_type": "exclusive_write", 00:14:17.646 "zoned": false, 00:14:17.646 "supported_io_types": { 00:14:17.646 "read": true, 00:14:17.646 "write": true, 00:14:17.646 "unmap": true, 00:14:17.646 "flush": true, 00:14:17.646 "reset": true, 00:14:17.646 "nvme_admin": false, 00:14:17.646 "nvme_io": false, 00:14:17.646 "nvme_io_md": false, 00:14:17.646 "write_zeroes": true, 00:14:17.646 "zcopy": true, 00:14:17.646 "get_zone_info": false, 00:14:17.646 "zone_management": false, 00:14:17.646 "zone_append": false, 00:14:17.646 "compare": false, 00:14:17.646 "compare_and_write": false, 00:14:17.646 "abort": true, 00:14:17.646 "seek_hole": false, 00:14:17.646 "seek_data": false, 00:14:17.646 "copy": true, 00:14:17.646 "nvme_iov_md": false 00:14:17.646 }, 00:14:17.646 "memory_domains": [ 00:14:17.646 { 00:14:17.646 "dma_device_id": "system", 00:14:17.646 "dma_device_type": 1 00:14:17.646 }, 00:14:17.646 { 00:14:17.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.646 "dma_device_type": 2 00:14:17.646 } 00:14:17.646 ], 00:14:17.646 "driver_specific": { 00:14:17.646 "passthru": { 00:14:17.646 "name": "pt4", 00:14:17.646 "base_bdev_name": "malloc4" 00:14:17.646 } 00:14:17.646 } 00:14:17.646 }' 00:14:17.646 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.646 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.646 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.646 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:17.929 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:18.187 [2024-07-12 15:02:43.761521] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.187 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c928b4d1-405f-11ef-b2a4-e9dca065e82e 00:14:18.187 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c928b4d1-405f-11ef-b2a4-e9dca065e82e ']' 00:14:18.187 15:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:18.446 [2024-07-12 15:02:44.057489] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.446 [2024-07-12 15:02:44.057518] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.446 [2024-07-12 15:02:44.057542] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.446 [2024-07-12 15:02:44.057557] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.446 [2024-07-12 15:02:44.057561] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x50c88635900 name raid_bdev1, state offline 00:14:18.446 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.446 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:18.705 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:18.705 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:18.705 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:18.705 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:18.964 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:18.964 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:19.223 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.223 15:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:19.485 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.485 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:19.748 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:19.748 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:20.014 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:20.281 [2024-07-12 15:02:45.945594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:20.281 [2024-07-12 15:02:45.946161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:20.281 [2024-07-12 15:02:45.946180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:20.281 [2024-07-12 15:02:45.946189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:20.281 [2024-07-12 15:02:45.946204] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:20.281 [2024-07-12 15:02:45.946240] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:20.281 [2024-07-12 15:02:45.946252] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:20.281 [2024-07-12 15:02:45.946261] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:20.281 [2024-07-12 15:02:45.946270] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.281 [2024-07-12 15:02:45.946274] bdev_raid.c: 
367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x50c88635680 name raid_bdev1, state configuring 00:14:20.281 request: 00:14:20.281 { 00:14:20.281 "name": "raid_bdev1", 00:14:20.281 "raid_level": "raid0", 00:14:20.281 "base_bdevs": [ 00:14:20.281 "malloc1", 00:14:20.281 "malloc2", 00:14:20.281 "malloc3", 00:14:20.281 "malloc4" 00:14:20.281 ], 00:14:20.281 "strip_size_kb": 64, 00:14:20.281 "superblock": false, 00:14:20.281 "method": "bdev_raid_create", 00:14:20.281 "req_id": 1 00:14:20.281 } 00:14:20.281 Got JSON-RPC error response 00:14:20.281 response: 00:14:20.281 { 00:14:20.281 "code": -17, 00:14:20.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:20.281 } 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.281 15:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:20.552 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:20.552 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:20.552 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:20.825 [2024-07-12 15:02:46.469606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:20.825 [2024-07-12 15:02:46.469662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.825 [2024-07-12 15:02:46.469674] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88635180 00:14:20.825 [2024-07-12 15:02:46.469682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.825 [2024-07-12 15:02:46.470332] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.825 [2024-07-12 15:02:46.470364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:20.825 [2024-07-12 15:02:46.470391] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:20.825 [2024-07-12 15:02:46.470403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:20.825 pt1 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.825 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.100 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.100 "name": "raid_bdev1", 00:14:21.100 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:21.100 "strip_size_kb": 64, 00:14:21.100 "state": "configuring", 00:14:21.100 "raid_level": "raid0", 00:14:21.100 "superblock": true, 00:14:21.100 "num_base_bdevs": 4, 00:14:21.100 "num_base_bdevs_discovered": 1, 00:14:21.100 "num_base_bdevs_operational": 4, 00:14:21.100 "base_bdevs_list": [ 00:14:21.100 { 00:14:21.100 "name": "pt1", 00:14:21.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.100 "is_configured": true, 00:14:21.100 "data_offset": 2048, 00:14:21.100 "data_size": 63488 00:14:21.100 }, 00:14:21.100 { 00:14:21.100 "name": null, 00:14:21.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.100 "is_configured": false, 00:14:21.100 "data_offset": 2048, 00:14:21.100 "data_size": 63488 00:14:21.100 }, 00:14:21.100 { 00:14:21.100 "name": null, 00:14:21.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.100 "is_configured": false, 00:14:21.100 "data_offset": 2048, 00:14:21.100 "data_size": 63488 00:14:21.100 }, 00:14:21.100 { 00:14:21.100 "name": null, 00:14:21.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.100 "is_configured": false, 00:14:21.100 "data_offset": 2048, 00:14:21.100 "data_size": 63488 00:14:21.100 } 00:14:21.100 ] 00:14:21.100 }' 00:14:21.100 15:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.100 15:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.362 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:14:21.362 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.620 [2024-07-12 15:02:47.441659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.620 [2024-07-12 15:02:47.441714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.620 [2024-07-12 15:02:47.441725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88634780 00:14:21.620 [2024-07-12 15:02:47.441733] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.620 [2024-07-12 15:02:47.441858] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.620 [2024-07-12 15:02:47.441869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.620 [2024-07-12 15:02:47.441894] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:21.620 [2024-07-12 15:02:47.441902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:14:21.880 pt2 00:14:21.880 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:22.139 [2024-07-12 15:02:47.705674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.139 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.398 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.398 "name": "raid_bdev1", 00:14:22.398 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:22.398 "strip_size_kb": 64, 00:14:22.398 "state": "configuring", 00:14:22.398 "raid_level": "raid0", 00:14:22.398 "superblock": true, 00:14:22.398 "num_base_bdevs": 4, 00:14:22.398 "num_base_bdevs_discovered": 1, 00:14:22.398 "num_base_bdevs_operational": 4, 00:14:22.398 "base_bdevs_list": [ 00:14:22.398 { 00:14:22.398 "name": "pt1", 00:14:22.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.398 "is_configured": true, 00:14:22.398 "data_offset": 2048, 00:14:22.398 "data_size": 63488 00:14:22.398 }, 00:14:22.398 { 00:14:22.398 "name": null, 00:14:22.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.398 "is_configured": false, 00:14:22.398 "data_offset": 2048, 00:14:22.398 "data_size": 63488 00:14:22.398 }, 00:14:22.398 { 00:14:22.398 "name": null, 00:14:22.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.398 "is_configured": false, 00:14:22.398 "data_offset": 2048, 00:14:22.398 "data_size": 63488 00:14:22.398 }, 00:14:22.398 { 00:14:22.398 "name": null, 00:14:22.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:22.398 "is_configured": false, 00:14:22.398 "data_offset": 2048, 00:14:22.398 "data_size": 63488 00:14:22.398 } 00:14:22.398 ] 00:14:22.398 }' 00:14:22.398 15:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.398 15:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.657 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:22.657 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 
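The re-assembly loop that follows (@477-@478) recreates the remaining passthru bdevs one at a time; each bdev_passthru_create is claimed back by raid_bdev1 through the superblock found on the underlying malloc bdev, and the array flips from "configuring" to "online" once the fourth base bdev is discovered. A minimal sketch of that sequence, under the same assumptions as above (running target on /var/tmp/spdk-raid.sock, UUIDs as used in this test):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # pt1 already exists; recreate the other base bdevs and let the superblock re-claim them.
  for i in 2 3 4; do
      rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done

  # The raid bdev should now report all four base bdevs and state "online".
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'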
00:14:22.657 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.916 [2024-07-12 15:02:48.485703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.916 [2024-07-12 15:02:48.485751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.916 [2024-07-12 15:02:48.485762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88634780 00:14:22.916 [2024-07-12 15:02:48.485770] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.916 [2024-07-12 15:02:48.485882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.916 [2024-07-12 15:02:48.485894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.916 [2024-07-12 15:02:48.485918] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:22.916 [2024-07-12 15:02:48.485926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.916 pt2 00:14:22.916 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:22.916 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:22.916 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:23.174 [2024-07-12 15:02:48.757719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:23.174 [2024-07-12 15:02:48.757767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.174 [2024-07-12 15:02:48.757778] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88635b80 00:14:23.174 [2024-07-12 15:02:48.757786] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.174 [2024-07-12 15:02:48.757901] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.174 [2024-07-12 15:02:48.757912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:23.174 [2024-07-12 15:02:48.757935] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:23.174 [2024-07-12 15:02:48.757944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:23.174 pt3 00:14:23.174 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:23.174 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:23.174 15:02:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:23.174 [2024-07-12 15:02:48.989729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:23.174 [2024-07-12 15:02:48.989777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.174 [2024-07-12 15:02:48.989789] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x50c88635900 00:14:23.174 [2024-07-12 15:02:48.989797] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:23.174 [2024-07-12 15:02:48.989905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.174 [2024-07-12 15:02:48.989917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:23.174 [2024-07-12 15:02:48.989949] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:23.174 [2024-07-12 15:02:48.989958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:23.174 [2024-07-12 15:02:48.989990] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x50c88634c80 00:14:23.174 [2024-07-12 15:02:48.989995] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:23.174 [2024-07-12 15:02:48.990016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50c88697e20 00:14:23.174 [2024-07-12 15:02:48.990070] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x50c88634c80 00:14:23.174 [2024-07-12 15:02:48.990075] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x50c88634c80 00:14:23.174 [2024-07-12 15:02:48.990096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.174 pt4 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.445 "name": "raid_bdev1", 00:14:23.445 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:23.445 "strip_size_kb": 64, 00:14:23.445 "state": "online", 00:14:23.445 "raid_level": "raid0", 00:14:23.445 "superblock": true, 00:14:23.445 "num_base_bdevs": 4, 00:14:23.445 "num_base_bdevs_discovered": 4, 00:14:23.445 "num_base_bdevs_operational": 4, 00:14:23.445 "base_bdevs_list": [ 00:14:23.445 { 00:14:23.445 "name": "pt1", 00:14:23.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.445 "is_configured": true, 00:14:23.445 "data_offset": 2048, 
00:14:23.445 "data_size": 63488 00:14:23.445 }, 00:14:23.445 { 00:14:23.445 "name": "pt2", 00:14:23.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.445 "is_configured": true, 00:14:23.445 "data_offset": 2048, 00:14:23.445 "data_size": 63488 00:14:23.445 }, 00:14:23.445 { 00:14:23.445 "name": "pt3", 00:14:23.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.445 "is_configured": true, 00:14:23.445 "data_offset": 2048, 00:14:23.445 "data_size": 63488 00:14:23.445 }, 00:14:23.445 { 00:14:23.445 "name": "pt4", 00:14:23.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:23.445 "is_configured": true, 00:14:23.445 "data_offset": 2048, 00:14:23.445 "data_size": 63488 00:14:23.445 } 00:14:23.445 ] 00:14:23.445 }' 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.445 15:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.723 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:24.290 [2024-07-12 15:02:49.849816] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:24.290 "name": "raid_bdev1", 00:14:24.290 "aliases": [ 00:14:24.290 "c928b4d1-405f-11ef-b2a4-e9dca065e82e" 00:14:24.290 ], 00:14:24.290 "product_name": "Raid Volume", 00:14:24.290 "block_size": 512, 00:14:24.290 "num_blocks": 253952, 00:14:24.290 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:24.290 "assigned_rate_limits": { 00:14:24.290 "rw_ios_per_sec": 0, 00:14:24.290 "rw_mbytes_per_sec": 0, 00:14:24.290 "r_mbytes_per_sec": 0, 00:14:24.290 "w_mbytes_per_sec": 0 00:14:24.290 }, 00:14:24.290 "claimed": false, 00:14:24.290 "zoned": false, 00:14:24.290 "supported_io_types": { 00:14:24.290 "read": true, 00:14:24.290 "write": true, 00:14:24.290 "unmap": true, 00:14:24.290 "flush": true, 00:14:24.290 "reset": true, 00:14:24.290 "nvme_admin": false, 00:14:24.290 "nvme_io": false, 00:14:24.290 "nvme_io_md": false, 00:14:24.290 "write_zeroes": true, 00:14:24.290 "zcopy": false, 00:14:24.290 "get_zone_info": false, 00:14:24.290 "zone_management": false, 00:14:24.290 "zone_append": false, 00:14:24.290 "compare": false, 00:14:24.290 "compare_and_write": false, 00:14:24.290 "abort": false, 00:14:24.290 "seek_hole": false, 00:14:24.290 "seek_data": false, 00:14:24.290 "copy": false, 00:14:24.290 "nvme_iov_md": false 00:14:24.290 }, 00:14:24.290 "memory_domains": [ 00:14:24.290 { 00:14:24.290 "dma_device_id": "system", 00:14:24.290 "dma_device_type": 1 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.290 
"dma_device_type": 2 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "system", 00:14:24.290 "dma_device_type": 1 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.290 "dma_device_type": 2 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "system", 00:14:24.290 "dma_device_type": 1 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.290 "dma_device_type": 2 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "system", 00:14:24.290 "dma_device_type": 1 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.290 "dma_device_type": 2 00:14:24.290 } 00:14:24.290 ], 00:14:24.290 "driver_specific": { 00:14:24.290 "raid": { 00:14:24.290 "uuid": "c928b4d1-405f-11ef-b2a4-e9dca065e82e", 00:14:24.290 "strip_size_kb": 64, 00:14:24.290 "state": "online", 00:14:24.290 "raid_level": "raid0", 00:14:24.290 "superblock": true, 00:14:24.290 "num_base_bdevs": 4, 00:14:24.290 "num_base_bdevs_discovered": 4, 00:14:24.290 "num_base_bdevs_operational": 4, 00:14:24.290 "base_bdevs_list": [ 00:14:24.290 { 00:14:24.290 "name": "pt1", 00:14:24.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.290 "is_configured": true, 00:14:24.290 "data_offset": 2048, 00:14:24.290 "data_size": 63488 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "name": "pt2", 00:14:24.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.290 "is_configured": true, 00:14:24.290 "data_offset": 2048, 00:14:24.290 "data_size": 63488 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "name": "pt3", 00:14:24.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.290 "is_configured": true, 00:14:24.290 "data_offset": 2048, 00:14:24.290 "data_size": 63488 00:14:24.290 }, 00:14:24.290 { 00:14:24.290 "name": "pt4", 00:14:24.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.290 "is_configured": true, 00:14:24.290 "data_offset": 2048, 00:14:24.290 "data_size": 63488 00:14:24.290 } 00:14:24.290 ] 00:14:24.290 } 00:14:24.290 } 00:14:24.290 }' 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:24.290 pt2 00:14:24.290 pt3 00:14:24.290 pt4' 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.290 15:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.549 "name": "pt1", 00:14:24.549 "aliases": [ 00:14:24.549 "00000000-0000-0000-0000-000000000001" 00:14:24.549 ], 00:14:24.549 "product_name": "passthru", 00:14:24.549 "block_size": 512, 00:14:24.549 "num_blocks": 65536, 00:14:24.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.549 "assigned_rate_limits": { 00:14:24.549 "rw_ios_per_sec": 0, 00:14:24.549 "rw_mbytes_per_sec": 0, 00:14:24.549 "r_mbytes_per_sec": 0, 00:14:24.549 "w_mbytes_per_sec": 0 00:14:24.549 }, 00:14:24.549 "claimed": true, 00:14:24.549 "claim_type": "exclusive_write", 00:14:24.549 "zoned": false, 00:14:24.549 "supported_io_types": { 00:14:24.549 "read": true, 
00:14:24.549 "write": true, 00:14:24.549 "unmap": true, 00:14:24.549 "flush": true, 00:14:24.549 "reset": true, 00:14:24.549 "nvme_admin": false, 00:14:24.549 "nvme_io": false, 00:14:24.549 "nvme_io_md": false, 00:14:24.549 "write_zeroes": true, 00:14:24.549 "zcopy": true, 00:14:24.549 "get_zone_info": false, 00:14:24.549 "zone_management": false, 00:14:24.549 "zone_append": false, 00:14:24.549 "compare": false, 00:14:24.549 "compare_and_write": false, 00:14:24.549 "abort": true, 00:14:24.549 "seek_hole": false, 00:14:24.549 "seek_data": false, 00:14:24.549 "copy": true, 00:14:24.549 "nvme_iov_md": false 00:14:24.549 }, 00:14:24.549 "memory_domains": [ 00:14:24.549 { 00:14:24.549 "dma_device_id": "system", 00:14:24.549 "dma_device_type": 1 00:14:24.549 }, 00:14:24.549 { 00:14:24.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.549 "dma_device_type": 2 00:14:24.549 } 00:14:24.549 ], 00:14:24.549 "driver_specific": { 00:14:24.549 "passthru": { 00:14:24.549 "name": "pt1", 00:14:24.549 "base_bdev_name": "malloc1" 00:14:24.549 } 00:14:24.549 } 00:14:24.549 }' 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.549 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.550 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:24.550 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.809 "name": "pt2", 00:14:24.809 "aliases": [ 00:14:24.809 "00000000-0000-0000-0000-000000000002" 00:14:24.809 ], 00:14:24.809 "product_name": "passthru", 00:14:24.809 "block_size": 512, 00:14:24.809 "num_blocks": 65536, 00:14:24.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.809 "assigned_rate_limits": { 00:14:24.809 "rw_ios_per_sec": 0, 00:14:24.809 "rw_mbytes_per_sec": 0, 00:14:24.809 "r_mbytes_per_sec": 0, 00:14:24.809 "w_mbytes_per_sec": 0 00:14:24.809 }, 00:14:24.809 "claimed": true, 00:14:24.809 "claim_type": "exclusive_write", 00:14:24.809 "zoned": false, 00:14:24.809 "supported_io_types": { 00:14:24.809 "read": true, 00:14:24.809 "write": true, 00:14:24.809 "unmap": true, 00:14:24.809 "flush": true, 00:14:24.809 "reset": true, 00:14:24.809 "nvme_admin": false, 
00:14:24.809 "nvme_io": false, 00:14:24.809 "nvme_io_md": false, 00:14:24.809 "write_zeroes": true, 00:14:24.809 "zcopy": true, 00:14:24.809 "get_zone_info": false, 00:14:24.809 "zone_management": false, 00:14:24.809 "zone_append": false, 00:14:24.809 "compare": false, 00:14:24.809 "compare_and_write": false, 00:14:24.809 "abort": true, 00:14:24.809 "seek_hole": false, 00:14:24.809 "seek_data": false, 00:14:24.809 "copy": true, 00:14:24.809 "nvme_iov_md": false 00:14:24.809 }, 00:14:24.809 "memory_domains": [ 00:14:24.809 { 00:14:24.809 "dma_device_id": "system", 00:14:24.809 "dma_device_type": 1 00:14:24.809 }, 00:14:24.809 { 00:14:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.809 "dma_device_type": 2 00:14:24.809 } 00:14:24.809 ], 00:14:24.809 "driver_specific": { 00:14:24.809 "passthru": { 00:14:24.809 "name": "pt2", 00:14:24.809 "base_bdev_name": "malloc2" 00:14:24.809 } 00:14:24.809 } 00:14:24.809 }' 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:24.809 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:25.068 "name": "pt3", 00:14:25.068 "aliases": [ 00:14:25.068 "00000000-0000-0000-0000-000000000003" 00:14:25.068 ], 00:14:25.068 "product_name": "passthru", 00:14:25.068 "block_size": 512, 00:14:25.068 "num_blocks": 65536, 00:14:25.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.068 "assigned_rate_limits": { 00:14:25.068 "rw_ios_per_sec": 0, 00:14:25.068 "rw_mbytes_per_sec": 0, 00:14:25.068 "r_mbytes_per_sec": 0, 00:14:25.068 "w_mbytes_per_sec": 0 00:14:25.068 }, 00:14:25.068 "claimed": true, 00:14:25.068 "claim_type": "exclusive_write", 00:14:25.068 "zoned": false, 00:14:25.068 "supported_io_types": { 00:14:25.068 "read": true, 00:14:25.068 "write": true, 00:14:25.068 "unmap": true, 00:14:25.068 "flush": true, 00:14:25.068 "reset": true, 00:14:25.068 "nvme_admin": false, 00:14:25.068 "nvme_io": false, 00:14:25.068 "nvme_io_md": false, 00:14:25.068 "write_zeroes": true, 00:14:25.068 "zcopy": true, 00:14:25.068 
"get_zone_info": false, 00:14:25.068 "zone_management": false, 00:14:25.068 "zone_append": false, 00:14:25.068 "compare": false, 00:14:25.068 "compare_and_write": false, 00:14:25.068 "abort": true, 00:14:25.068 "seek_hole": false, 00:14:25.068 "seek_data": false, 00:14:25.068 "copy": true, 00:14:25.068 "nvme_iov_md": false 00:14:25.068 }, 00:14:25.068 "memory_domains": [ 00:14:25.068 { 00:14:25.068 "dma_device_id": "system", 00:14:25.068 "dma_device_type": 1 00:14:25.068 }, 00:14:25.068 { 00:14:25.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.068 "dma_device_type": 2 00:14:25.068 } 00:14:25.068 ], 00:14:25.068 "driver_specific": { 00:14:25.068 "passthru": { 00:14:25.068 "name": "pt3", 00:14:25.068 "base_bdev_name": "malloc3" 00:14:25.068 } 00:14:25.068 } 00:14:25.068 }' 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:25.068 15:02:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:25.327 "name": "pt4", 00:14:25.327 "aliases": [ 00:14:25.327 "00000000-0000-0000-0000-000000000004" 00:14:25.327 ], 00:14:25.327 "product_name": "passthru", 00:14:25.327 "block_size": 512, 00:14:25.327 "num_blocks": 65536, 00:14:25.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.327 "assigned_rate_limits": { 00:14:25.327 "rw_ios_per_sec": 0, 00:14:25.327 "rw_mbytes_per_sec": 0, 00:14:25.327 "r_mbytes_per_sec": 0, 00:14:25.327 "w_mbytes_per_sec": 0 00:14:25.327 }, 00:14:25.327 "claimed": true, 00:14:25.327 "claim_type": "exclusive_write", 00:14:25.327 "zoned": false, 00:14:25.327 "supported_io_types": { 00:14:25.327 "read": true, 00:14:25.327 "write": true, 00:14:25.327 "unmap": true, 00:14:25.327 "flush": true, 00:14:25.327 "reset": true, 00:14:25.327 "nvme_admin": false, 00:14:25.327 "nvme_io": false, 00:14:25.327 "nvme_io_md": false, 00:14:25.327 "write_zeroes": true, 00:14:25.327 "zcopy": true, 00:14:25.327 "get_zone_info": false, 00:14:25.327 "zone_management": false, 00:14:25.327 "zone_append": false, 00:14:25.327 "compare": false, 00:14:25.327 
"compare_and_write": false, 00:14:25.327 "abort": true, 00:14:25.327 "seek_hole": false, 00:14:25.327 "seek_data": false, 00:14:25.327 "copy": true, 00:14:25.327 "nvme_iov_md": false 00:14:25.327 }, 00:14:25.327 "memory_domains": [ 00:14:25.327 { 00:14:25.327 "dma_device_id": "system", 00:14:25.327 "dma_device_type": 1 00:14:25.327 }, 00:14:25.327 { 00:14:25.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.327 "dma_device_type": 2 00:14:25.327 } 00:14:25.327 ], 00:14:25.327 "driver_specific": { 00:14:25.327 "passthru": { 00:14:25.327 "name": "pt4", 00:14:25.327 "base_bdev_name": "malloc4" 00:14:25.327 } 00:14:25.327 } 00:14:25.327 }' 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.327 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.600 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.600 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.600 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.600 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:25.600 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:25.861 [2024-07-12 15:02:51.441887] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c928b4d1-405f-11ef-b2a4-e9dca065e82e '!=' c928b4d1-405f-11ef-b2a4-e9dca065e82e ']' 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 60019 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 60019 ']' 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 60019 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 60019 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 
00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60019' 00:14:25.861 killing process with pid 60019 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 60019 00:14:25.861 [2024-07-12 15:02:51.471510] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.861 [2024-07-12 15:02:51.471532] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.861 [2024-07-12 15:02:51.471547] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.861 [2024-07-12 15:02:51.471552] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x50c88634c80 name raid_bdev1, state offline 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 60019 00:14:25.861 [2024-07-12 15:02:51.494054] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:25.861 00:14:25.861 real 0m13.962s 00:14:25.861 user 0m25.175s 00:14:25.861 sys 0m1.919s 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.861 15:02:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.861 ************************************ 00:14:25.861 END TEST raid_superblock_test 00:14:25.861 ************************************ 00:14:26.120 15:02:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:26.120 15:02:51 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:26.120 15:02:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:26.120 15:02:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.120 15:02:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.120 ************************************ 00:14:26.120 START TEST raid_read_error_test 00:14:26.120 ************************************ 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qA0KBh9HUt 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60424 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60424 /var/tmp/spdk-raid.sock 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60424 ']' 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.120 15:02:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.120 [2024-07-12 15:02:51.724996] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:14:26.120 [2024-07-12 15:02:51.725253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:26.735 EAL: TSC is not safe to use in SMP mode 00:14:26.735 EAL: TSC is not invariant 00:14:26.735 [2024-07-12 15:02:52.288432] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.735 [2024-07-12 15:02:52.373295] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:26.735 [2024-07-12 15:02:52.375368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.735 [2024-07-12 15:02:52.376130] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.735 [2024-07-12 15:02:52.376144] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.993 15:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.993 15:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:26.993 15:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:26.993 15:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:27.252 BaseBdev1_malloc 00:14:27.252 15:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:27.510 true 00:14:27.510 15:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:27.768 [2024-07-12 15:02:53.564271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:27.768 [2024-07-12 15:02:53.564341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.768 [2024-07-12 15:02:53.564370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22cb7a34780 00:14:27.768 [2024-07-12 15:02:53.564379] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.768 [2024-07-12 15:02:53.565065] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.768 [2024-07-12 15:02:53.565090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.768 BaseBdev1 00:14:27.768 15:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:27.768 15:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.336 BaseBdev2_malloc 00:14:28.336 15:02:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:28.336 true 00:14:28.594 15:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:28.853 [2024-07-12 15:02:54.432324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:28.853 [2024-07-12 15:02:54.432409] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.853 [2024-07-12 15:02:54.432438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22cb7a34c80 00:14:28.853 [2024-07-12 15:02:54.432447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.853 [2024-07-12 15:02:54.433122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.853 [2024-07-12 15:02:54.433151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.853 BaseBdev2 00:14:28.853 15:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:28.853 15:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.853 BaseBdev3_malloc 00:14:29.112 15:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:29.112 true 00:14:29.112 15:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:29.371 [2024-07-12 15:02:55.188344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:29.371 [2024-07-12 15:02:55.188399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.371 [2024-07-12 15:02:55.188425] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22cb7a35180 00:14:29.371 [2024-07-12 15:02:55.188435] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.371 [2024-07-12 15:02:55.189097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.371 [2024-07-12 15:02:55.189125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.371 BaseBdev3 00:14:29.630 15:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:29.630 15:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:29.888 BaseBdev4_malloc 00:14:29.888 15:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:30.148 true 00:14:30.148 15:02:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.407 [2024-07-12 15:02:56.080410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.407 [2024-07-12 15:02:56.080471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.407 [2024-07-12 15:02:56.080499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22cb7a35680 00:14:30.407 [2024-07-12 15:02:56.080508] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.407 [2024-07-12 15:02:56.081179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.407 [2024-07-12 15:02:56.081205] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.407 BaseBdev4 00:14:30.407 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:30.665 [2024-07-12 15:02:56.332427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.665 [2024-07-12 15:02:56.333023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.665 [2024-07-12 15:02:56.333051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.665 [2024-07-12 15:02:56.333066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.665 [2024-07-12 15:02:56.333132] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22cb7a35900 00:14:30.665 [2024-07-12 15:02:56.333138] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.665 [2024-07-12 15:02:56.333173] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22cb7aa0e20 00:14:30.665 [2024-07-12 15:02:56.333248] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22cb7a35900 00:14:30.665 [2024-07-12 15:02:56.333252] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22cb7a35900 00:14:30.665 [2024-07-12 15:02:56.333280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.665 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.923 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.923 "name": "raid_bdev1", 00:14:30.923 "uuid": "d222891e-405f-11ef-b2a4-e9dca065e82e", 00:14:30.923 "strip_size_kb": 64, 00:14:30.923 "state": "online", 00:14:30.923 "raid_level": "raid0", 00:14:30.923 "superblock": true, 00:14:30.923 "num_base_bdevs": 4, 00:14:30.923 "num_base_bdevs_discovered": 4, 00:14:30.923 "num_base_bdevs_operational": 4, 00:14:30.923 "base_bdevs_list": [ 00:14:30.923 { 00:14:30.923 "name": 
"BaseBdev1", 00:14:30.923 "uuid": "129e9003-dbd7-7054-924b-66b5b851bbfb", 00:14:30.923 "is_configured": true, 00:14:30.923 "data_offset": 2048, 00:14:30.923 "data_size": 63488 00:14:30.923 }, 00:14:30.923 { 00:14:30.923 "name": "BaseBdev2", 00:14:30.923 "uuid": "c11e3451-a7ab-e45d-81ad-48ea24144d8b", 00:14:30.923 "is_configured": true, 00:14:30.923 "data_offset": 2048, 00:14:30.923 "data_size": 63488 00:14:30.923 }, 00:14:30.923 { 00:14:30.923 "name": "BaseBdev3", 00:14:30.923 "uuid": "25b2f970-1780-8f55-9965-db17593b5529", 00:14:30.923 "is_configured": true, 00:14:30.923 "data_offset": 2048, 00:14:30.923 "data_size": 63488 00:14:30.923 }, 00:14:30.923 { 00:14:30.923 "name": "BaseBdev4", 00:14:30.923 "uuid": "cb98d0a3-bc2d-b851-9383-5c3f29f4bc0f", 00:14:30.923 "is_configured": true, 00:14:30.923 "data_offset": 2048, 00:14:30.923 "data_size": 63488 00:14:30.923 } 00:14:30.923 ] 00:14:30.923 }' 00:14:30.923 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.923 15:02:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.181 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:31.181 15:02:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:31.440 [2024-07-12 15:02:57.048629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22cb7aa0ec0 00:14:32.378 15:02:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.637 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.896 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:14:32.896 "name": "raid_bdev1", 00:14:32.896 "uuid": "d222891e-405f-11ef-b2a4-e9dca065e82e", 00:14:32.896 "strip_size_kb": 64, 00:14:32.896 "state": "online", 00:14:32.896 "raid_level": "raid0", 00:14:32.896 "superblock": true, 00:14:32.896 "num_base_bdevs": 4, 00:14:32.896 "num_base_bdevs_discovered": 4, 00:14:32.896 "num_base_bdevs_operational": 4, 00:14:32.896 "base_bdevs_list": [ 00:14:32.896 { 00:14:32.896 "name": "BaseBdev1", 00:14:32.896 "uuid": "129e9003-dbd7-7054-924b-66b5b851bbfb", 00:14:32.896 "is_configured": true, 00:14:32.896 "data_offset": 2048, 00:14:32.896 "data_size": 63488 00:14:32.896 }, 00:14:32.896 { 00:14:32.896 "name": "BaseBdev2", 00:14:32.896 "uuid": "c11e3451-a7ab-e45d-81ad-48ea24144d8b", 00:14:32.896 "is_configured": true, 00:14:32.896 "data_offset": 2048, 00:14:32.896 "data_size": 63488 00:14:32.896 }, 00:14:32.896 { 00:14:32.896 "name": "BaseBdev3", 00:14:32.896 "uuid": "25b2f970-1780-8f55-9965-db17593b5529", 00:14:32.896 "is_configured": true, 00:14:32.896 "data_offset": 2048, 00:14:32.896 "data_size": 63488 00:14:32.896 }, 00:14:32.896 { 00:14:32.896 "name": "BaseBdev4", 00:14:32.896 "uuid": "cb98d0a3-bc2d-b851-9383-5c3f29f4bc0f", 00:14:32.896 "is_configured": true, 00:14:32.896 "data_offset": 2048, 00:14:32.896 "data_size": 63488 00:14:32.896 } 00:14:32.896 ] 00:14:32.896 }' 00:14:32.896 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.897 15:02:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.156 15:02:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:33.414 [2024-07-12 15:02:59.055396] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.414 [2024-07-12 15:02:59.055435] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.414 [2024-07-12 15:02:59.055862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.414 [2024-07-12 15:02:59.055883] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.414 [2024-07-12 15:02:59.055894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.414 [2024-07-12 15:02:59.055899] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22cb7a35900 name raid_bdev1, state offline 00:14:33.414 0 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60424 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60424 ']' 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60424 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60424 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:33.414 killing process with pid 60424 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:33.414 15:02:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60424' 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60424 00:14:33.414 [2024-07-12 15:02:59.084556] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.414 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60424 00:14:33.414 [2024-07-12 15:02:59.118096] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qA0KBh9HUt 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:14:33.673 00:14:33.673 real 0m7.664s 00:14:33.673 user 0m12.238s 00:14:33.673 sys 0m1.281s 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.673 ************************************ 00:14:33.673 END TEST raid_read_error_test 00:14:33.673 ************************************ 00:14:33.673 15:02:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.673 15:02:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:33.673 15:02:59 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:33.673 15:02:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:33.673 15:02:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.673 15:02:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.673 ************************************ 00:14:33.673 START TEST raid_write_error_test 00:14:33.673 ************************************ 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:33.673 15:02:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.g8Dxxz37IB 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60562 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60562 /var/tmp/spdk-raid.sock 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60562 ']' 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.673 15:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.673 [2024-07-12 15:02:59.434822] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:14:33.673 [2024-07-12 15:02:59.435046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:34.609 EAL: TSC is not safe to use in SMP mode 00:14:34.609 EAL: TSC is not invariant 00:14:34.609 [2024-07-12 15:03:00.131470] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.609 [2024-07-12 15:03:00.237209] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:34.609 [2024-07-12 15:03:00.239622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.609 [2024-07-12 15:03:00.240458] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.609 [2024-07-12 15:03:00.240472] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.867 15:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.867 15:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:34.867 15:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:34.867 15:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.127 BaseBdev1_malloc 00:14:35.127 15:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:35.386 true 00:14:35.386 15:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.646 [2024-07-12 15:03:01.275302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.646 [2024-07-12 15:03:01.275400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.646 [2024-07-12 15:03:01.275444] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14053b834780 00:14:35.646 [2024-07-12 15:03:01.275476] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.646 [2024-07-12 15:03:01.276340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.646 [2024-07-12 15:03:01.276366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.646 BaseBdev1 00:14:35.646 15:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.646 15:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.905 BaseBdev2_malloc 00:14:35.905 15:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:36.165 true 00:14:36.165 15:03:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:36.423 [2024-07-12 15:03:02.031364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:36.423 [2024-07-12 15:03:02.031442] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.423 [2024-07-12 15:03:02.031478] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14053b834c80 00:14:36.423 [2024-07-12 15:03:02.031487] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.423 [2024-07-12 15:03:02.032322] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.423 [2024-07-12 15:03:02.032346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.423 BaseBdev2 00:14:36.423 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:36.423 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.681 BaseBdev3_malloc 00:14:36.681 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:36.941 true 00:14:36.941 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:36.941 [2024-07-12 15:03:02.763412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:36.941 [2024-07-12 15:03:02.763489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.941 [2024-07-12 15:03:02.763524] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14053b835180 00:14:36.941 [2024-07-12 15:03:02.763534] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.941 [2024-07-12 15:03:02.764344] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.941 [2024-07-12 15:03:02.764369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.313 BaseBdev3 00:14:37.313 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:37.313 15:03:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:37.313 BaseBdev4_malloc 00:14:37.313 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:37.570 true 00:14:37.570 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:37.827 [2024-07-12 15:03:03.571442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:37.827 [2024-07-12 15:03:03.571510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.827 [2024-07-12 15:03:03.571546] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14053b835680 00:14:37.827 [2024-07-12 15:03:03.571555] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.827 [2024-07-12 15:03:03.572387] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.827 [2024-07-12 15:03:03.572414] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.827 BaseBdev4 00:14:37.827 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:38.085 [2024-07-12 15:03:03.811468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.085 [2024-07-12 15:03:03.812215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.085 [2024-07-12 15:03:03.812244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.085 [2024-07-12 15:03:03.812262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.085 [2024-07-12 15:03:03.812342] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14053b835900 00:14:38.085 [2024-07-12 15:03:03.812349] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:38.085 [2024-07-12 15:03:03.812393] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14053b8a0e20 00:14:38.085 [2024-07-12 15:03:03.812482] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14053b835900 00:14:38.085 [2024-07-12 15:03:03.812486] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14053b835900 00:14:38.085 [2024-07-12 15:03:03.812518] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.085 15:03:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.343 15:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.343 "name": "raid_bdev1", 00:14:38.343 "uuid": "d697bee4-405f-11ef-b2a4-e9dca065e82e", 00:14:38.343 "strip_size_kb": 64, 00:14:38.343 "state": "online", 00:14:38.343 "raid_level": "raid0", 00:14:38.343 "superblock": true, 00:14:38.343 "num_base_bdevs": 4, 00:14:38.343 "num_base_bdevs_discovered": 4, 00:14:38.343 "num_base_bdevs_operational": 4, 00:14:38.343 "base_bdevs_list": [ 
00:14:38.343 { 00:14:38.343 "name": "BaseBdev1", 00:14:38.343 "uuid": "9ae43187-f21a-4359-bc41-f12383ab79f0", 00:14:38.343 "is_configured": true, 00:14:38.343 "data_offset": 2048, 00:14:38.343 "data_size": 63488 00:14:38.343 }, 00:14:38.343 { 00:14:38.343 "name": "BaseBdev2", 00:14:38.343 "uuid": "9bef18c2-cf1c-8258-8122-96ce912cb39e", 00:14:38.343 "is_configured": true, 00:14:38.343 "data_offset": 2048, 00:14:38.343 "data_size": 63488 00:14:38.343 }, 00:14:38.343 { 00:14:38.343 "name": "BaseBdev3", 00:14:38.343 "uuid": "8b1e89f4-ebc5-1751-8c3d-02ce1f8b9853", 00:14:38.343 "is_configured": true, 00:14:38.343 "data_offset": 2048, 00:14:38.343 "data_size": 63488 00:14:38.343 }, 00:14:38.343 { 00:14:38.343 "name": "BaseBdev4", 00:14:38.343 "uuid": "8f132319-76ac-b253-aaf4-330c26442aae", 00:14:38.343 "is_configured": true, 00:14:38.343 "data_offset": 2048, 00:14:38.343 "data_size": 63488 00:14:38.343 } 00:14:38.343 ] 00:14:38.343 }' 00:14:38.343 15:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.343 15:03:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.600 15:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:38.857 15:03:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:38.857 [2024-07-12 15:03:04.527715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14053b8a0ec0 00:14:39.789 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.046 15:03:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.303 15:03:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.303 "name": "raid_bdev1", 00:14:40.303 "uuid": "d697bee4-405f-11ef-b2a4-e9dca065e82e", 00:14:40.303 "strip_size_kb": 64, 00:14:40.303 "state": "online", 00:14:40.303 "raid_level": "raid0", 00:14:40.303 "superblock": true, 00:14:40.303 "num_base_bdevs": 4, 00:14:40.303 "num_base_bdevs_discovered": 4, 00:14:40.303 "num_base_bdevs_operational": 4, 00:14:40.303 "base_bdevs_list": [ 00:14:40.303 { 00:14:40.303 "name": "BaseBdev1", 00:14:40.303 "uuid": "9ae43187-f21a-4359-bc41-f12383ab79f0", 00:14:40.303 "is_configured": true, 00:14:40.303 "data_offset": 2048, 00:14:40.303 "data_size": 63488 00:14:40.303 }, 00:14:40.303 { 00:14:40.303 "name": "BaseBdev2", 00:14:40.303 "uuid": "9bef18c2-cf1c-8258-8122-96ce912cb39e", 00:14:40.303 "is_configured": true, 00:14:40.303 "data_offset": 2048, 00:14:40.303 "data_size": 63488 00:14:40.303 }, 00:14:40.303 { 00:14:40.303 "name": "BaseBdev3", 00:14:40.303 "uuid": "8b1e89f4-ebc5-1751-8c3d-02ce1f8b9853", 00:14:40.303 "is_configured": true, 00:14:40.303 "data_offset": 2048, 00:14:40.303 "data_size": 63488 00:14:40.303 }, 00:14:40.303 { 00:14:40.303 "name": "BaseBdev4", 00:14:40.303 "uuid": "8f132319-76ac-b253-aaf4-330c26442aae", 00:14:40.303 "is_configured": true, 00:14:40.303 "data_offset": 2048, 00:14:40.303 "data_size": 63488 00:14:40.303 } 00:14:40.303 ] 00:14:40.303 }' 00:14:40.303 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.303 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:40.869 [2024-07-12 15:03:06.656499] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.869 [2024-07-12 15:03:06.656535] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.869 [2024-07-12 15:03:06.656934] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.869 [2024-07-12 15:03:06.656953] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.869 [2024-07-12 15:03:06.656964] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.869 [2024-07-12 15:03:06.656969] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14053b835900 name raid_bdev1, state offline 00:14:40.869 0 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60562 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60562 ']' 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60562 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60562 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' 
bdevperf = sudo ']' 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60562' 00:14:40.869 killing process with pid 60562 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60562 00:14:40.869 [2024-07-12 15:03:06.683574] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.869 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60562 00:14:41.128 [2024-07-12 15:03:06.717018] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.g8Dxxz37IB 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:41.390 00:14:41.390 real 0m7.559s 00:14:41.390 user 0m12.012s 00:14:41.390 sys 0m1.235s 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.390 15:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.390 ************************************ 00:14:41.390 END TEST raid_write_error_test 00:14:41.390 ************************************ 00:14:41.390 15:03:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:41.390 15:03:07 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:41.390 15:03:07 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:41.390 15:03:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:41.390 15:03:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.390 15:03:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.390 ************************************ 00:14:41.390 START TEST raid_state_function_test 00:14:41.390 ************************************ 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:41.390 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60698 00:14:41.391 Process raid pid: 60698 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60698' 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60698 /var/tmp/spdk-raid.sock 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60698 ']' 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.391 15:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.391 [2024-07-12 15:03:07.035099] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:14:41.391 [2024-07-12 15:03:07.035277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:41.980 EAL: TSC is not safe to use in SMP mode 00:14:41.980 EAL: TSC is not invariant 00:14:41.980 [2024-07-12 15:03:07.578286] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.980 [2024-07-12 15:03:07.692696] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:41.980 [2024-07-12 15:03:07.695432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.980 [2024-07-12 15:03:07.696398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.980 [2024-07-12 15:03:07.696414] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.598 15:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.598 15:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:42.599 [2024-07-12 15:03:08.344849] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.599 [2024-07-12 15:03:08.344924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.599 [2024-07-12 15:03:08.344930] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.599 [2024-07-12 15:03:08.344939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.599 [2024-07-12 15:03:08.344943] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.599 [2024-07-12 15:03:08.344951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.599 [2024-07-12 15:03:08.344954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.599 [2024-07-12 15:03:08.344962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.599 15:03:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.599 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.856 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.856 "name": "Existed_Raid", 00:14:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.856 "strip_size_kb": 64, 00:14:42.856 "state": "configuring", 00:14:42.856 "raid_level": "concat", 00:14:42.856 "superblock": false, 00:14:42.856 "num_base_bdevs": 4, 00:14:42.856 "num_base_bdevs_discovered": 0, 00:14:42.856 "num_base_bdevs_operational": 4, 00:14:42.856 "base_bdevs_list": [ 00:14:42.856 { 00:14:42.856 "name": "BaseBdev1", 00:14:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.856 "is_configured": false, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 0 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev2", 00:14:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.856 "is_configured": false, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 0 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev3", 00:14:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.856 "is_configured": false, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 0 00:14:42.856 }, 00:14:42.856 { 00:14:42.856 "name": "BaseBdev4", 00:14:42.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.856 "is_configured": false, 00:14:42.856 "data_offset": 0, 00:14:42.856 "data_size": 0 00:14:42.856 } 00:14:42.856 ] 00:14:42.856 }' 00:14:42.856 15:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.856 15:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.420 15:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:43.680 [2024-07-12 15:03:09.284868] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.680 [2024-07-12 15:03:09.284904] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ec2d6834500 name Existed_Raid, state configuring 00:14:43.680 15:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:43.938 [2024-07-12 15:03:09.552914] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.938 [2024-07-12 15:03:09.552996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.938 [2024-07-12 15:03:09.553002] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.938 [2024-07-12 15:03:09.553011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.938 [2024-07-12 15:03:09.553014] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.938 [2024-07-12 15:03:09.553022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.938 [2024-07-12 15:03:09.553026] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:43.938 [2024-07-12 15:03:09.553033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:43.938 15:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.197 [2024-07-12 15:03:09.790160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.197 BaseBdev1 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.197 15:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:44.455 15:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.713 [ 00:14:44.713 { 00:14:44.713 "name": "BaseBdev1", 00:14:44.713 "aliases": [ 00:14:44.713 "da27d6b2-405f-11ef-b2a4-e9dca065e82e" 00:14:44.713 ], 00:14:44.713 "product_name": "Malloc disk", 00:14:44.713 "block_size": 512, 00:14:44.713 "num_blocks": 65536, 00:14:44.713 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:44.713 "assigned_rate_limits": { 00:14:44.713 "rw_ios_per_sec": 0, 00:14:44.713 "rw_mbytes_per_sec": 0, 00:14:44.713 "r_mbytes_per_sec": 0, 00:14:44.713 "w_mbytes_per_sec": 0 00:14:44.713 }, 00:14:44.713 "claimed": true, 00:14:44.713 "claim_type": "exclusive_write", 00:14:44.713 "zoned": false, 00:14:44.713 "supported_io_types": { 00:14:44.713 "read": true, 00:14:44.713 "write": true, 00:14:44.713 "unmap": true, 00:14:44.713 "flush": true, 00:14:44.713 "reset": true, 00:14:44.713 "nvme_admin": false, 00:14:44.713 "nvme_io": false, 00:14:44.713 "nvme_io_md": false, 00:14:44.713 "write_zeroes": true, 00:14:44.713 "zcopy": true, 00:14:44.713 "get_zone_info": false, 00:14:44.713 "zone_management": false, 00:14:44.713 "zone_append": false, 00:14:44.713 "compare": false, 00:14:44.713 "compare_and_write": false, 00:14:44.713 "abort": true, 00:14:44.713 "seek_hole": false, 00:14:44.713 "seek_data": false, 00:14:44.714 "copy": true, 00:14:44.714 "nvme_iov_md": false 00:14:44.714 }, 00:14:44.714 "memory_domains": [ 00:14:44.714 { 00:14:44.714 "dma_device_id": "system", 00:14:44.714 "dma_device_type": 1 00:14:44.714 }, 00:14:44.714 { 00:14:44.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.714 "dma_device_type": 2 00:14:44.714 } 00:14:44.714 ], 00:14:44.714 "driver_specific": {} 00:14:44.714 } 00:14:44.714 ] 00:14:44.714 15:03:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.714 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.973 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.973 "name": "Existed_Raid", 00:14:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.973 "strip_size_kb": 64, 00:14:44.973 "state": "configuring", 00:14:44.973 "raid_level": "concat", 00:14:44.973 "superblock": false, 00:14:44.973 "num_base_bdevs": 4, 00:14:44.973 "num_base_bdevs_discovered": 1, 00:14:44.973 "num_base_bdevs_operational": 4, 00:14:44.973 "base_bdevs_list": [ 00:14:44.973 { 00:14:44.973 "name": "BaseBdev1", 00:14:44.973 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:44.973 "is_configured": true, 00:14:44.973 "data_offset": 0, 00:14:44.973 "data_size": 65536 00:14:44.973 }, 00:14:44.973 { 00:14:44.973 "name": "BaseBdev2", 00:14:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.973 "is_configured": false, 00:14:44.973 "data_offset": 0, 00:14:44.973 "data_size": 0 00:14:44.973 }, 00:14:44.973 { 00:14:44.973 "name": "BaseBdev3", 00:14:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.973 "is_configured": false, 00:14:44.973 "data_offset": 0, 00:14:44.973 "data_size": 0 00:14:44.973 }, 00:14:44.973 { 00:14:44.973 "name": "BaseBdev4", 00:14:44.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.973 "is_configured": false, 00:14:44.973 "data_offset": 0, 00:14:44.973 "data_size": 0 00:14:44.973 } 00:14:44.973 ] 00:14:44.973 }' 00:14:44.973 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.973 15:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.232 15:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.490 [2024-07-12 15:03:11.092988] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.490 [2024-07-12 15:03:11.093030] bdev_raid.c: 
367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ec2d6834500 name Existed_Raid, state configuring 00:14:45.490 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:45.748 [2024-07-12 15:03:11.405045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.748 [2024-07-12 15:03:11.406116] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.748 [2024-07-12 15:03:11.406176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.748 [2024-07-12 15:03:11.406196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.748 [2024-07-12 15:03:11.406204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.748 [2024-07-12 15:03:11.406207] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.748 [2024-07-12 15:03:11.406214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.748 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.749 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.007 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.007 "name": "Existed_Raid", 00:14:46.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.007 "strip_size_kb": 64, 00:14:46.007 "state": "configuring", 00:14:46.007 "raid_level": "concat", 00:14:46.007 "superblock": false, 00:14:46.007 "num_base_bdevs": 4, 00:14:46.007 "num_base_bdevs_discovered": 1, 00:14:46.007 "num_base_bdevs_operational": 4, 00:14:46.007 "base_bdevs_list": [ 00:14:46.007 { 00:14:46.007 "name": "BaseBdev1", 00:14:46.007 "uuid": 
"da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:46.007 "is_configured": true, 00:14:46.007 "data_offset": 0, 00:14:46.007 "data_size": 65536 00:14:46.007 }, 00:14:46.007 { 00:14:46.007 "name": "BaseBdev2", 00:14:46.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.007 "is_configured": false, 00:14:46.007 "data_offset": 0, 00:14:46.007 "data_size": 0 00:14:46.007 }, 00:14:46.007 { 00:14:46.007 "name": "BaseBdev3", 00:14:46.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.007 "is_configured": false, 00:14:46.007 "data_offset": 0, 00:14:46.007 "data_size": 0 00:14:46.007 }, 00:14:46.007 { 00:14:46.007 "name": "BaseBdev4", 00:14:46.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.007 "is_configured": false, 00:14:46.007 "data_offset": 0, 00:14:46.007 "data_size": 0 00:14:46.007 } 00:14:46.007 ] 00:14:46.007 }' 00:14:46.007 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.007 15:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.265 15:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.522 [2024-07-12 15:03:12.261300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.522 BaseBdev2 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.522 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.780 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.039 [ 00:14:47.039 { 00:14:47.039 "name": "BaseBdev2", 00:14:47.039 "aliases": [ 00:14:47.039 "dba10ec4-405f-11ef-b2a4-e9dca065e82e" 00:14:47.039 ], 00:14:47.039 "product_name": "Malloc disk", 00:14:47.039 "block_size": 512, 00:14:47.039 "num_blocks": 65536, 00:14:47.039 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:47.039 "assigned_rate_limits": { 00:14:47.039 "rw_ios_per_sec": 0, 00:14:47.039 "rw_mbytes_per_sec": 0, 00:14:47.039 "r_mbytes_per_sec": 0, 00:14:47.039 "w_mbytes_per_sec": 0 00:14:47.039 }, 00:14:47.039 "claimed": true, 00:14:47.039 "claim_type": "exclusive_write", 00:14:47.039 "zoned": false, 00:14:47.039 "supported_io_types": { 00:14:47.039 "read": true, 00:14:47.039 "write": true, 00:14:47.039 "unmap": true, 00:14:47.039 "flush": true, 00:14:47.039 "reset": true, 00:14:47.039 "nvme_admin": false, 00:14:47.039 "nvme_io": false, 00:14:47.039 "nvme_io_md": false, 00:14:47.039 "write_zeroes": true, 00:14:47.039 "zcopy": true, 00:14:47.039 "get_zone_info": false, 00:14:47.039 "zone_management": false, 00:14:47.039 "zone_append": false, 00:14:47.039 
"compare": false, 00:14:47.039 "compare_and_write": false, 00:14:47.039 "abort": true, 00:14:47.039 "seek_hole": false, 00:14:47.039 "seek_data": false, 00:14:47.039 "copy": true, 00:14:47.039 "nvme_iov_md": false 00:14:47.039 }, 00:14:47.039 "memory_domains": [ 00:14:47.039 { 00:14:47.039 "dma_device_id": "system", 00:14:47.039 "dma_device_type": 1 00:14:47.039 }, 00:14:47.039 { 00:14:47.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.039 "dma_device_type": 2 00:14:47.039 } 00:14:47.039 ], 00:14:47.039 "driver_specific": {} 00:14:47.039 } 00:14:47.039 ] 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.039 15:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.297 15:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.297 "name": "Existed_Raid", 00:14:47.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.297 "strip_size_kb": 64, 00:14:47.297 "state": "configuring", 00:14:47.297 "raid_level": "concat", 00:14:47.297 "superblock": false, 00:14:47.297 "num_base_bdevs": 4, 00:14:47.297 "num_base_bdevs_discovered": 2, 00:14:47.297 "num_base_bdevs_operational": 4, 00:14:47.297 "base_bdevs_list": [ 00:14:47.297 { 00:14:47.297 "name": "BaseBdev1", 00:14:47.297 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:47.297 "is_configured": true, 00:14:47.297 "data_offset": 0, 00:14:47.297 "data_size": 65536 00:14:47.297 }, 00:14:47.297 { 00:14:47.297 "name": "BaseBdev2", 00:14:47.297 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:47.297 "is_configured": true, 00:14:47.297 "data_offset": 0, 00:14:47.297 "data_size": 65536 00:14:47.297 }, 00:14:47.297 { 00:14:47.297 "name": "BaseBdev3", 00:14:47.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.297 "is_configured": false, 00:14:47.297 "data_offset": 0, 00:14:47.297 "data_size": 0 00:14:47.297 }, 00:14:47.297 { 
00:14:47.297 "name": "BaseBdev4", 00:14:47.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.297 "is_configured": false, 00:14:47.297 "data_offset": 0, 00:14:47.297 "data_size": 0 00:14:47.297 } 00:14:47.297 ] 00:14:47.297 }' 00:14:47.297 15:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.297 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.860 15:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.860 [2024-07-12 15:03:13.669343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.860 BaseBdev3 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.119 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.376 15:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.667 [ 00:14:48.667 { 00:14:48.667 "name": "BaseBdev3", 00:14:48.667 "aliases": [ 00:14:48.667 "dc77ea52-405f-11ef-b2a4-e9dca065e82e" 00:14:48.667 ], 00:14:48.667 "product_name": "Malloc disk", 00:14:48.667 "block_size": 512, 00:14:48.667 "num_blocks": 65536, 00:14:48.667 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:48.667 "assigned_rate_limits": { 00:14:48.667 "rw_ios_per_sec": 0, 00:14:48.667 "rw_mbytes_per_sec": 0, 00:14:48.667 "r_mbytes_per_sec": 0, 00:14:48.667 "w_mbytes_per_sec": 0 00:14:48.667 }, 00:14:48.667 "claimed": true, 00:14:48.667 "claim_type": "exclusive_write", 00:14:48.667 "zoned": false, 00:14:48.667 "supported_io_types": { 00:14:48.667 "read": true, 00:14:48.667 "write": true, 00:14:48.667 "unmap": true, 00:14:48.667 "flush": true, 00:14:48.667 "reset": true, 00:14:48.667 "nvme_admin": false, 00:14:48.667 "nvme_io": false, 00:14:48.667 "nvme_io_md": false, 00:14:48.667 "write_zeroes": true, 00:14:48.667 "zcopy": true, 00:14:48.667 "get_zone_info": false, 00:14:48.667 "zone_management": false, 00:14:48.667 "zone_append": false, 00:14:48.667 "compare": false, 00:14:48.667 "compare_and_write": false, 00:14:48.667 "abort": true, 00:14:48.667 "seek_hole": false, 00:14:48.667 "seek_data": false, 00:14:48.667 "copy": true, 00:14:48.667 "nvme_iov_md": false 00:14:48.667 }, 00:14:48.667 "memory_domains": [ 00:14:48.667 { 00:14:48.667 "dma_device_id": "system", 00:14:48.667 "dma_device_type": 1 00:14:48.667 }, 00:14:48.667 { 00:14:48.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.667 "dma_device_type": 2 00:14:48.667 } 00:14:48.667 ], 00:14:48.667 "driver_specific": {} 00:14:48.667 } 00:14:48.667 ] 00:14:48.667 15:03:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.667 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.951 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.951 "name": "Existed_Raid", 00:14:48.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.951 "strip_size_kb": 64, 00:14:48.951 "state": "configuring", 00:14:48.951 "raid_level": "concat", 00:14:48.951 "superblock": false, 00:14:48.951 "num_base_bdevs": 4, 00:14:48.951 "num_base_bdevs_discovered": 3, 00:14:48.951 "num_base_bdevs_operational": 4, 00:14:48.951 "base_bdevs_list": [ 00:14:48.951 { 00:14:48.951 "name": "BaseBdev1", 00:14:48.951 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:48.951 "is_configured": true, 00:14:48.952 "data_offset": 0, 00:14:48.952 "data_size": 65536 00:14:48.952 }, 00:14:48.952 { 00:14:48.952 "name": "BaseBdev2", 00:14:48.952 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:48.952 "is_configured": true, 00:14:48.952 "data_offset": 0, 00:14:48.952 "data_size": 65536 00:14:48.952 }, 00:14:48.952 { 00:14:48.952 "name": "BaseBdev3", 00:14:48.952 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:48.952 "is_configured": true, 00:14:48.952 "data_offset": 0, 00:14:48.952 "data_size": 65536 00:14:48.952 }, 00:14:48.952 { 00:14:48.952 "name": "BaseBdev4", 00:14:48.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.952 "is_configured": false, 00:14:48.952 "data_offset": 0, 00:14:48.952 "data_size": 0 00:14:48.952 } 00:14:48.952 ] 00:14:48.952 }' 00:14:48.952 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.952 15:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.210 15:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.470 [2024-07-12 15:03:15.153402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.470 [2024-07-12 15:03:15.153440] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ec2d6834a00 00:14:49.470 [2024-07-12 15:03:15.153445] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:49.470 [2024-07-12 15:03:15.153471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ec2d6897e20 00:14:49.470 [2024-07-12 15:03:15.153578] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ec2d6834a00 00:14:49.470 [2024-07-12 15:03:15.153583] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3ec2d6834a00 00:14:49.470 [2024-07-12 15:03:15.153622] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.470 BaseBdev4 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.470 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.728 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:49.987 [ 00:14:49.987 { 00:14:49.987 "name": "BaseBdev4", 00:14:49.987 "aliases": [ 00:14:49.987 "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e" 00:14:49.987 ], 00:14:49.987 "product_name": "Malloc disk", 00:14:49.987 "block_size": 512, 00:14:49.987 "num_blocks": 65536, 00:14:49.987 "uuid": "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e", 00:14:49.987 "assigned_rate_limits": { 00:14:49.987 "rw_ios_per_sec": 0, 00:14:49.987 "rw_mbytes_per_sec": 0, 00:14:49.987 "r_mbytes_per_sec": 0, 00:14:49.987 "w_mbytes_per_sec": 0 00:14:49.987 }, 00:14:49.987 "claimed": true, 00:14:49.987 "claim_type": "exclusive_write", 00:14:49.987 "zoned": false, 00:14:49.987 "supported_io_types": { 00:14:49.987 "read": true, 00:14:49.987 "write": true, 00:14:49.987 "unmap": true, 00:14:49.987 "flush": true, 00:14:49.987 "reset": true, 00:14:49.987 "nvme_admin": false, 00:14:49.987 "nvme_io": false, 00:14:49.987 "nvme_io_md": false, 00:14:49.987 "write_zeroes": true, 00:14:49.987 "zcopy": true, 00:14:49.987 "get_zone_info": false, 00:14:49.987 "zone_management": false, 00:14:49.987 "zone_append": false, 00:14:49.987 "compare": false, 00:14:49.987 "compare_and_write": false, 00:14:49.987 "abort": true, 00:14:49.987 "seek_hole": false, 00:14:49.987 "seek_data": false, 00:14:49.987 "copy": true, 00:14:49.987 "nvme_iov_md": false 00:14:49.987 }, 00:14:49.987 "memory_domains": [ 00:14:49.987 { 00:14:49.987 "dma_device_id": "system", 00:14:49.987 "dma_device_type": 1 00:14:49.987 }, 00:14:49.987 { 00:14:49.987 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:49.987 "dma_device_type": 2 00:14:49.987 } 00:14:49.987 ], 00:14:49.987 "driver_specific": {} 00:14:49.987 } 00:14:49.987 ] 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.987 15:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.246 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.246 "name": "Existed_Raid", 00:14:50.246 "uuid": "dd5a65b0-405f-11ef-b2a4-e9dca065e82e", 00:14:50.246 "strip_size_kb": 64, 00:14:50.246 "state": "online", 00:14:50.246 "raid_level": "concat", 00:14:50.246 "superblock": false, 00:14:50.246 "num_base_bdevs": 4, 00:14:50.246 "num_base_bdevs_discovered": 4, 00:14:50.246 "num_base_bdevs_operational": 4, 00:14:50.246 "base_bdevs_list": [ 00:14:50.246 { 00:14:50.246 "name": "BaseBdev1", 00:14:50.246 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:50.246 "is_configured": true, 00:14:50.246 "data_offset": 0, 00:14:50.246 "data_size": 65536 00:14:50.246 }, 00:14:50.246 { 00:14:50.246 "name": "BaseBdev2", 00:14:50.246 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:50.246 "is_configured": true, 00:14:50.246 "data_offset": 0, 00:14:50.246 "data_size": 65536 00:14:50.246 }, 00:14:50.246 { 00:14:50.246 "name": "BaseBdev3", 00:14:50.246 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:50.246 "is_configured": true, 00:14:50.246 "data_offset": 0, 00:14:50.246 "data_size": 65536 00:14:50.246 }, 00:14:50.246 { 00:14:50.246 "name": "BaseBdev4", 00:14:50.246 "uuid": "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e", 00:14:50.246 "is_configured": true, 00:14:50.246 "data_offset": 0, 00:14:50.246 "data_size": 65536 00:14:50.246 } 00:14:50.246 ] 00:14:50.246 }' 00:14:50.246 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.246 15:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:14:50.504 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:50.505 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:50.763 [2024-07-12 15:03:16.541343] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.763 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:50.763 "name": "Existed_Raid", 00:14:50.763 "aliases": [ 00:14:50.763 "dd5a65b0-405f-11ef-b2a4-e9dca065e82e" 00:14:50.763 ], 00:14:50.763 "product_name": "Raid Volume", 00:14:50.763 "block_size": 512, 00:14:50.763 "num_blocks": 262144, 00:14:50.763 "uuid": "dd5a65b0-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "assigned_rate_limits": { 00:14:50.763 "rw_ios_per_sec": 0, 00:14:50.763 "rw_mbytes_per_sec": 0, 00:14:50.763 "r_mbytes_per_sec": 0, 00:14:50.763 "w_mbytes_per_sec": 0 00:14:50.763 }, 00:14:50.763 "claimed": false, 00:14:50.763 "zoned": false, 00:14:50.763 "supported_io_types": { 00:14:50.763 "read": true, 00:14:50.763 "write": true, 00:14:50.763 "unmap": true, 00:14:50.763 "flush": true, 00:14:50.763 "reset": true, 00:14:50.763 "nvme_admin": false, 00:14:50.763 "nvme_io": false, 00:14:50.763 "nvme_io_md": false, 00:14:50.763 "write_zeroes": true, 00:14:50.763 "zcopy": false, 00:14:50.763 "get_zone_info": false, 00:14:50.763 "zone_management": false, 00:14:50.763 "zone_append": false, 00:14:50.763 "compare": false, 00:14:50.763 "compare_and_write": false, 00:14:50.763 "abort": false, 00:14:50.763 "seek_hole": false, 00:14:50.763 "seek_data": false, 00:14:50.763 "copy": false, 00:14:50.763 "nvme_iov_md": false 00:14:50.763 }, 00:14:50.763 "memory_domains": [ 00:14:50.763 { 00:14:50.763 "dma_device_id": "system", 00:14:50.763 "dma_device_type": 1 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.763 "dma_device_type": 2 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "system", 00:14:50.763 "dma_device_type": 1 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.763 "dma_device_type": 2 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "system", 00:14:50.763 "dma_device_type": 1 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.763 "dma_device_type": 2 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "system", 00:14:50.763 "dma_device_type": 1 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.763 "dma_device_type": 2 00:14:50.763 } 00:14:50.763 ], 00:14:50.763 "driver_specific": { 00:14:50.763 "raid": { 00:14:50.763 "uuid": "dd5a65b0-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "strip_size_kb": 64, 
00:14:50.763 "state": "online", 00:14:50.763 "raid_level": "concat", 00:14:50.763 "superblock": false, 00:14:50.763 "num_base_bdevs": 4, 00:14:50.763 "num_base_bdevs_discovered": 4, 00:14:50.763 "num_base_bdevs_operational": 4, 00:14:50.763 "base_bdevs_list": [ 00:14:50.763 { 00:14:50.763 "name": "BaseBdev1", 00:14:50.763 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "is_configured": true, 00:14:50.763 "data_offset": 0, 00:14:50.763 "data_size": 65536 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "name": "BaseBdev2", 00:14:50.763 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "is_configured": true, 00:14:50.763 "data_offset": 0, 00:14:50.763 "data_size": 65536 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "name": "BaseBdev3", 00:14:50.763 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "is_configured": true, 00:14:50.763 "data_offset": 0, 00:14:50.763 "data_size": 65536 00:14:50.763 }, 00:14:50.763 { 00:14:50.763 "name": "BaseBdev4", 00:14:50.763 "uuid": "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e", 00:14:50.763 "is_configured": true, 00:14:50.763 "data_offset": 0, 00:14:50.763 "data_size": 65536 00:14:50.763 } 00:14:50.763 ] 00:14:50.763 } 00:14:50.763 } 00:14:50.763 }' 00:14:50.763 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.763 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:50.763 BaseBdev2 00:14:50.763 BaseBdev3 00:14:50.763 BaseBdev4' 00:14:50.764 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.764 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:50.764 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.329 "name": "BaseBdev1", 00:14:51.329 "aliases": [ 00:14:51.329 "da27d6b2-405f-11ef-b2a4-e9dca065e82e" 00:14:51.329 ], 00:14:51.329 "product_name": "Malloc disk", 00:14:51.329 "block_size": 512, 00:14:51.329 "num_blocks": 65536, 00:14:51.329 "uuid": "da27d6b2-405f-11ef-b2a4-e9dca065e82e", 00:14:51.329 "assigned_rate_limits": { 00:14:51.329 "rw_ios_per_sec": 0, 00:14:51.329 "rw_mbytes_per_sec": 0, 00:14:51.329 "r_mbytes_per_sec": 0, 00:14:51.329 "w_mbytes_per_sec": 0 00:14:51.329 }, 00:14:51.329 "claimed": true, 00:14:51.329 "claim_type": "exclusive_write", 00:14:51.329 "zoned": false, 00:14:51.329 "supported_io_types": { 00:14:51.329 "read": true, 00:14:51.329 "write": true, 00:14:51.329 "unmap": true, 00:14:51.329 "flush": true, 00:14:51.329 "reset": true, 00:14:51.329 "nvme_admin": false, 00:14:51.329 "nvme_io": false, 00:14:51.329 "nvme_io_md": false, 00:14:51.329 "write_zeroes": true, 00:14:51.329 "zcopy": true, 00:14:51.329 "get_zone_info": false, 00:14:51.329 "zone_management": false, 00:14:51.329 "zone_append": false, 00:14:51.329 "compare": false, 00:14:51.329 "compare_and_write": false, 00:14:51.329 "abort": true, 00:14:51.329 "seek_hole": false, 00:14:51.329 "seek_data": false, 00:14:51.329 "copy": true, 00:14:51.329 "nvme_iov_md": false 00:14:51.329 }, 00:14:51.329 "memory_domains": [ 00:14:51.329 { 00:14:51.329 "dma_device_id": "system", 00:14:51.329 "dma_device_type": 1 00:14:51.329 }, 00:14:51.329 { 00:14:51.329 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.329 "dma_device_type": 2 00:14:51.329 } 00:14:51.329 ], 00:14:51.329 "driver_specific": {} 00:14:51.329 }' 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:51.329 15:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.588 "name": "BaseBdev2", 00:14:51.588 "aliases": [ 00:14:51.588 "dba10ec4-405f-11ef-b2a4-e9dca065e82e" 00:14:51.588 ], 00:14:51.588 "product_name": "Malloc disk", 00:14:51.588 "block_size": 512, 00:14:51.588 "num_blocks": 65536, 00:14:51.588 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:51.588 "assigned_rate_limits": { 00:14:51.588 "rw_ios_per_sec": 0, 00:14:51.588 "rw_mbytes_per_sec": 0, 00:14:51.588 "r_mbytes_per_sec": 0, 00:14:51.588 "w_mbytes_per_sec": 0 00:14:51.588 }, 00:14:51.588 "claimed": true, 00:14:51.588 "claim_type": "exclusive_write", 00:14:51.588 "zoned": false, 00:14:51.588 "supported_io_types": { 00:14:51.588 "read": true, 00:14:51.588 "write": true, 00:14:51.588 "unmap": true, 00:14:51.588 "flush": true, 00:14:51.588 "reset": true, 00:14:51.588 "nvme_admin": false, 00:14:51.588 "nvme_io": false, 00:14:51.588 "nvme_io_md": false, 00:14:51.588 "write_zeroes": true, 00:14:51.588 "zcopy": true, 00:14:51.588 "get_zone_info": false, 00:14:51.588 "zone_management": false, 00:14:51.588 "zone_append": false, 00:14:51.588 "compare": false, 00:14:51.588 "compare_and_write": false, 00:14:51.588 "abort": true, 00:14:51.588 "seek_hole": false, 00:14:51.588 "seek_data": false, 00:14:51.588 "copy": true, 00:14:51.588 "nvme_iov_md": false 00:14:51.588 }, 00:14:51.588 "memory_domains": [ 00:14:51.588 { 00:14:51.588 "dma_device_id": "system", 00:14:51.588 "dma_device_type": 1 00:14:51.588 }, 00:14:51.588 { 00:14:51.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.588 "dma_device_type": 2 00:14:51.588 } 00:14:51.588 ], 00:14:51.588 "driver_specific": {} 00:14:51.588 }' 00:14:51.588 15:03:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:51.588 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.847 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.847 "name": "BaseBdev3", 00:14:51.847 "aliases": [ 00:14:51.847 "dc77ea52-405f-11ef-b2a4-e9dca065e82e" 00:14:51.847 ], 00:14:51.847 "product_name": "Malloc disk", 00:14:51.847 "block_size": 512, 00:14:51.847 "num_blocks": 65536, 00:14:51.847 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:51.847 "assigned_rate_limits": { 00:14:51.847 "rw_ios_per_sec": 0, 00:14:51.847 "rw_mbytes_per_sec": 0, 00:14:51.847 "r_mbytes_per_sec": 0, 00:14:51.847 "w_mbytes_per_sec": 0 00:14:51.847 }, 00:14:51.847 "claimed": true, 00:14:51.847 "claim_type": "exclusive_write", 00:14:51.847 "zoned": false, 00:14:51.847 "supported_io_types": { 00:14:51.847 "read": true, 00:14:51.847 "write": true, 00:14:51.847 "unmap": true, 00:14:51.847 "flush": true, 00:14:51.847 "reset": true, 00:14:51.847 "nvme_admin": false, 00:14:51.847 "nvme_io": false, 00:14:51.847 "nvme_io_md": false, 00:14:51.847 "write_zeroes": true, 00:14:51.847 "zcopy": true, 00:14:51.847 "get_zone_info": false, 00:14:51.847 "zone_management": false, 00:14:51.847 "zone_append": false, 00:14:51.847 "compare": false, 00:14:51.847 "compare_and_write": false, 00:14:51.847 "abort": true, 00:14:51.847 "seek_hole": false, 00:14:51.847 "seek_data": false, 00:14:51.847 "copy": true, 00:14:51.847 "nvme_iov_md": false 00:14:51.847 }, 00:14:51.847 "memory_domains": [ 00:14:51.847 { 00:14:51.847 "dma_device_id": "system", 00:14:51.848 "dma_device_type": 1 00:14:51.848 }, 00:14:51.848 { 00:14:51.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.848 "dma_device_type": 2 00:14:51.848 } 00:14:51.848 ], 00:14:51.848 "driver_specific": {} 00:14:51.848 }' 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.848 
15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:51.848 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:52.114 "name": "BaseBdev4", 00:14:52.114 "aliases": [ 00:14:52.114 "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e" 00:14:52.114 ], 00:14:52.114 "product_name": "Malloc disk", 00:14:52.114 "block_size": 512, 00:14:52.114 "num_blocks": 65536, 00:14:52.114 "uuid": "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e", 00:14:52.114 "assigned_rate_limits": { 00:14:52.114 "rw_ios_per_sec": 0, 00:14:52.114 "rw_mbytes_per_sec": 0, 00:14:52.114 "r_mbytes_per_sec": 0, 00:14:52.114 "w_mbytes_per_sec": 0 00:14:52.114 }, 00:14:52.114 "claimed": true, 00:14:52.114 "claim_type": "exclusive_write", 00:14:52.114 "zoned": false, 00:14:52.114 "supported_io_types": { 00:14:52.114 "read": true, 00:14:52.114 "write": true, 00:14:52.114 "unmap": true, 00:14:52.114 "flush": true, 00:14:52.114 "reset": true, 00:14:52.114 "nvme_admin": false, 00:14:52.114 "nvme_io": false, 00:14:52.114 "nvme_io_md": false, 00:14:52.114 "write_zeroes": true, 00:14:52.114 "zcopy": true, 00:14:52.114 "get_zone_info": false, 00:14:52.114 "zone_management": false, 00:14:52.114 "zone_append": false, 00:14:52.114 "compare": false, 00:14:52.114 "compare_and_write": false, 00:14:52.114 "abort": true, 00:14:52.114 "seek_hole": false, 00:14:52.114 "seek_data": false, 00:14:52.114 "copy": true, 00:14:52.114 "nvme_iov_md": false 00:14:52.114 }, 00:14:52.114 "memory_domains": [ 00:14:52.114 { 00:14:52.114 "dma_device_id": "system", 00:14:52.114 "dma_device_type": 1 00:14:52.114 }, 00:14:52.114 { 00:14:52.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.114 "dma_device_type": 2 00:14:52.114 } 00:14:52.114 ], 00:14:52.114 "driver_specific": {} 00:14:52.114 }' 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:52.114 15:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:52.372 [2024-07-12 15:03:18.177397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.372 [2024-07-12 15:03:18.177442] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.372 [2024-07-12 15:03:18.177460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.630 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.888 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.888 "name": "Existed_Raid", 00:14:52.888 "uuid": 
"dd5a65b0-405f-11ef-b2a4-e9dca065e82e", 00:14:52.888 "strip_size_kb": 64, 00:14:52.888 "state": "offline", 00:14:52.888 "raid_level": "concat", 00:14:52.888 "superblock": false, 00:14:52.888 "num_base_bdevs": 4, 00:14:52.888 "num_base_bdevs_discovered": 3, 00:14:52.888 "num_base_bdevs_operational": 3, 00:14:52.888 "base_bdevs_list": [ 00:14:52.888 { 00:14:52.888 "name": null, 00:14:52.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.888 "is_configured": false, 00:14:52.888 "data_offset": 0, 00:14:52.888 "data_size": 65536 00:14:52.888 }, 00:14:52.888 { 00:14:52.888 "name": "BaseBdev2", 00:14:52.888 "uuid": "dba10ec4-405f-11ef-b2a4-e9dca065e82e", 00:14:52.888 "is_configured": true, 00:14:52.888 "data_offset": 0, 00:14:52.888 "data_size": 65536 00:14:52.888 }, 00:14:52.888 { 00:14:52.888 "name": "BaseBdev3", 00:14:52.888 "uuid": "dc77ea52-405f-11ef-b2a4-e9dca065e82e", 00:14:52.888 "is_configured": true, 00:14:52.888 "data_offset": 0, 00:14:52.888 "data_size": 65536 00:14:52.888 }, 00:14:52.888 { 00:14:52.888 "name": "BaseBdev4", 00:14:52.888 "uuid": "dd5a5c7d-405f-11ef-b2a4-e9dca065e82e", 00:14:52.888 "is_configured": true, 00:14:52.888 "data_offset": 0, 00:14:52.888 "data_size": 65536 00:14:52.888 } 00:14:52.888 ] 00:14:52.888 }' 00:14:52.888 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.888 15:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.147 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:53.147 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:53.147 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.147 15:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:53.405 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:53.405 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.405 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:53.663 [2024-07-12 15:03:19.469631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.922 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:53.922 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:53.922 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.922 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:54.181 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:54.181 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.181 15:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:54.439 [2024-07-12 15:03:20.029755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.439 15:03:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:54.439 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:54.439 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.439 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:54.698 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:54.698 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.698 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:54.956 [2024-07-12 15:03:20.554242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:54.956 [2024-07-12 15:03:20.554320] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ec2d6834a00 name Existed_Raid, state offline 00:14:54.956 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:54.956 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:54.956 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.956 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:55.215 15:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.474 BaseBdev2 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.474 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.799 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.057 [ 00:14:56.057 { 00:14:56.057 "name": "BaseBdev2", 00:14:56.057 "aliases": [ 
00:14:56.057 "e0e354b7-405f-11ef-b2a4-e9dca065e82e" 00:14:56.057 ], 00:14:56.057 "product_name": "Malloc disk", 00:14:56.057 "block_size": 512, 00:14:56.057 "num_blocks": 65536, 00:14:56.057 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:14:56.057 "assigned_rate_limits": { 00:14:56.057 "rw_ios_per_sec": 0, 00:14:56.057 "rw_mbytes_per_sec": 0, 00:14:56.057 "r_mbytes_per_sec": 0, 00:14:56.057 "w_mbytes_per_sec": 0 00:14:56.057 }, 00:14:56.057 "claimed": false, 00:14:56.057 "zoned": false, 00:14:56.057 "supported_io_types": { 00:14:56.057 "read": true, 00:14:56.057 "write": true, 00:14:56.057 "unmap": true, 00:14:56.057 "flush": true, 00:14:56.057 "reset": true, 00:14:56.057 "nvme_admin": false, 00:14:56.057 "nvme_io": false, 00:14:56.057 "nvme_io_md": false, 00:14:56.057 "write_zeroes": true, 00:14:56.057 "zcopy": true, 00:14:56.057 "get_zone_info": false, 00:14:56.057 "zone_management": false, 00:14:56.057 "zone_append": false, 00:14:56.057 "compare": false, 00:14:56.057 "compare_and_write": false, 00:14:56.057 "abort": true, 00:14:56.057 "seek_hole": false, 00:14:56.057 "seek_data": false, 00:14:56.057 "copy": true, 00:14:56.057 "nvme_iov_md": false 00:14:56.057 }, 00:14:56.057 "memory_domains": [ 00:14:56.057 { 00:14:56.057 "dma_device_id": "system", 00:14:56.057 "dma_device_type": 1 00:14:56.057 }, 00:14:56.057 { 00:14:56.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.057 "dma_device_type": 2 00:14:56.057 } 00:14:56.057 ], 00:14:56.057 "driver_specific": {} 00:14:56.057 } 00:14:56.057 ] 00:14:56.057 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:56.057 15:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:56.057 15:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:56.057 15:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.316 BaseBdev3 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.316 15:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.573 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.832 [ 00:14:56.832 { 00:14:56.832 "name": "BaseBdev3", 00:14:56.832 "aliases": [ 00:14:56.832 "e15cce37-405f-11ef-b2a4-e9dca065e82e" 00:14:56.832 ], 00:14:56.832 "product_name": "Malloc disk", 00:14:56.832 "block_size": 512, 00:14:56.832 "num_blocks": 65536, 00:14:56.832 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:14:56.832 "assigned_rate_limits": { 00:14:56.832 "rw_ios_per_sec": 0, 00:14:56.832 "rw_mbytes_per_sec": 
0, 00:14:56.832 "r_mbytes_per_sec": 0, 00:14:56.832 "w_mbytes_per_sec": 0 00:14:56.832 }, 00:14:56.832 "claimed": false, 00:14:56.832 "zoned": false, 00:14:56.832 "supported_io_types": { 00:14:56.832 "read": true, 00:14:56.832 "write": true, 00:14:56.832 "unmap": true, 00:14:56.832 "flush": true, 00:14:56.832 "reset": true, 00:14:56.832 "nvme_admin": false, 00:14:56.832 "nvme_io": false, 00:14:56.832 "nvme_io_md": false, 00:14:56.832 "write_zeroes": true, 00:14:56.832 "zcopy": true, 00:14:56.832 "get_zone_info": false, 00:14:56.832 "zone_management": false, 00:14:56.832 "zone_append": false, 00:14:56.832 "compare": false, 00:14:56.832 "compare_and_write": false, 00:14:56.832 "abort": true, 00:14:56.832 "seek_hole": false, 00:14:56.832 "seek_data": false, 00:14:56.832 "copy": true, 00:14:56.832 "nvme_iov_md": false 00:14:56.832 }, 00:14:56.832 "memory_domains": [ 00:14:56.832 { 00:14:56.832 "dma_device_id": "system", 00:14:56.832 "dma_device_type": 1 00:14:56.832 }, 00:14:56.832 { 00:14:56.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.832 "dma_device_type": 2 00:14:56.832 } 00:14:56.832 ], 00:14:56.832 "driver_specific": {} 00:14:56.832 } 00:14:56.832 ] 00:14:56.832 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:56.832 15:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:56.832 15:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:56.832 15:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.090 BaseBdev4 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:57.090 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.348 15:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.606 [ 00:14:57.606 { 00:14:57.606 "name": "BaseBdev4", 00:14:57.606 "aliases": [ 00:14:57.606 "e1dc5fea-405f-11ef-b2a4-e9dca065e82e" 00:14:57.606 ], 00:14:57.606 "product_name": "Malloc disk", 00:14:57.606 "block_size": 512, 00:14:57.606 "num_blocks": 65536, 00:14:57.606 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:14:57.606 "assigned_rate_limits": { 00:14:57.606 "rw_ios_per_sec": 0, 00:14:57.606 "rw_mbytes_per_sec": 0, 00:14:57.606 "r_mbytes_per_sec": 0, 00:14:57.606 "w_mbytes_per_sec": 0 00:14:57.606 }, 00:14:57.606 "claimed": false, 00:14:57.606 "zoned": false, 00:14:57.606 "supported_io_types": { 00:14:57.606 "read": true, 00:14:57.606 "write": true, 00:14:57.606 "unmap": true, 00:14:57.606 "flush": true, 00:14:57.606 "reset": true, 00:14:57.606 
"nvme_admin": false, 00:14:57.606 "nvme_io": false, 00:14:57.606 "nvme_io_md": false, 00:14:57.606 "write_zeroes": true, 00:14:57.606 "zcopy": true, 00:14:57.606 "get_zone_info": false, 00:14:57.606 "zone_management": false, 00:14:57.606 "zone_append": false, 00:14:57.606 "compare": false, 00:14:57.606 "compare_and_write": false, 00:14:57.606 "abort": true, 00:14:57.606 "seek_hole": false, 00:14:57.606 "seek_data": false, 00:14:57.606 "copy": true, 00:14:57.606 "nvme_iov_md": false 00:14:57.606 }, 00:14:57.606 "memory_domains": [ 00:14:57.606 { 00:14:57.606 "dma_device_id": "system", 00:14:57.606 "dma_device_type": 1 00:14:57.606 }, 00:14:57.606 { 00:14:57.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.606 "dma_device_type": 2 00:14:57.606 } 00:14:57.606 ], 00:14:57.606 "driver_specific": {} 00:14:57.606 } 00:14:57.606 ] 00:14:57.606 15:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:57.606 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:57.606 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:57.606 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:57.865 [2024-07-12 15:03:23.504223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.865 [2024-07-12 15:03:23.504295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.865 [2024-07-12 15:03:23.504307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.865 [2024-07-12 15:03:23.505015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.865 [2024-07-12 15:03:23.505036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.865 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.124 15:03:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.124 "name": "Existed_Raid", 00:14:58.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.124 "strip_size_kb": 64, 00:14:58.124 "state": "configuring", 00:14:58.124 "raid_level": "concat", 00:14:58.124 "superblock": false, 00:14:58.124 "num_base_bdevs": 4, 00:14:58.124 "num_base_bdevs_discovered": 3, 00:14:58.124 "num_base_bdevs_operational": 4, 00:14:58.124 "base_bdevs_list": [ 00:14:58.124 { 00:14:58.124 "name": "BaseBdev1", 00:14:58.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.124 "is_configured": false, 00:14:58.124 "data_offset": 0, 00:14:58.124 "data_size": 0 00:14:58.124 }, 00:14:58.124 { 00:14:58.124 "name": "BaseBdev2", 00:14:58.124 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:14:58.124 "is_configured": true, 00:14:58.124 "data_offset": 0, 00:14:58.124 "data_size": 65536 00:14:58.124 }, 00:14:58.124 { 00:14:58.124 "name": "BaseBdev3", 00:14:58.124 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:14:58.124 "is_configured": true, 00:14:58.124 "data_offset": 0, 00:14:58.124 "data_size": 65536 00:14:58.124 }, 00:14:58.124 { 00:14:58.124 "name": "BaseBdev4", 00:14:58.124 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:14:58.124 "is_configured": true, 00:14:58.124 "data_offset": 0, 00:14:58.124 "data_size": 65536 00:14:58.124 } 00:14:58.124 ] 00:14:58.124 }' 00:14:58.124 15:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.124 15:03:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.382 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:58.640 [2024-07-12 15:03:24.332288] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.640 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.641 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.898 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.898 "name": "Existed_Raid", 00:14:58.898 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:58.898 "strip_size_kb": 64, 00:14:58.898 "state": "configuring", 00:14:58.898 "raid_level": "concat", 00:14:58.898 "superblock": false, 00:14:58.898 "num_base_bdevs": 4, 00:14:58.898 "num_base_bdevs_discovered": 2, 00:14:58.899 "num_base_bdevs_operational": 4, 00:14:58.899 "base_bdevs_list": [ 00:14:58.899 { 00:14:58.899 "name": "BaseBdev1", 00:14:58.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.899 "is_configured": false, 00:14:58.899 "data_offset": 0, 00:14:58.899 "data_size": 0 00:14:58.899 }, 00:14:58.899 { 00:14:58.899 "name": null, 00:14:58.899 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:14:58.899 "is_configured": false, 00:14:58.899 "data_offset": 0, 00:14:58.899 "data_size": 65536 00:14:58.899 }, 00:14:58.899 { 00:14:58.899 "name": "BaseBdev3", 00:14:58.899 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:14:58.899 "is_configured": true, 00:14:58.899 "data_offset": 0, 00:14:58.899 "data_size": 65536 00:14:58.899 }, 00:14:58.899 { 00:14:58.899 "name": "BaseBdev4", 00:14:58.899 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:14:58.899 "is_configured": true, 00:14:58.899 "data_offset": 0, 00:14:58.899 "data_size": 65536 00:14:58.899 } 00:14:58.899 ] 00:14:58.899 }' 00:14:58.899 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.899 15:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.185 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.185 15:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.752 [2024-07-12 15:03:25.524507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.752 BaseBdev1 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:59.752 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.319 15:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.319 [ 00:15:00.319 { 00:15:00.319 "name": "BaseBdev1", 00:15:00.319 "aliases": [ 00:15:00.319 "e388def8-405f-11ef-b2a4-e9dca065e82e" 00:15:00.319 ], 00:15:00.319 "product_name": "Malloc disk", 00:15:00.319 "block_size": 512, 00:15:00.319 "num_blocks": 
65536, 00:15:00.319 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:00.319 "assigned_rate_limits": { 00:15:00.319 "rw_ios_per_sec": 0, 00:15:00.319 "rw_mbytes_per_sec": 0, 00:15:00.319 "r_mbytes_per_sec": 0, 00:15:00.319 "w_mbytes_per_sec": 0 00:15:00.319 }, 00:15:00.319 "claimed": true, 00:15:00.319 "claim_type": "exclusive_write", 00:15:00.319 "zoned": false, 00:15:00.319 "supported_io_types": { 00:15:00.319 "read": true, 00:15:00.319 "write": true, 00:15:00.319 "unmap": true, 00:15:00.319 "flush": true, 00:15:00.319 "reset": true, 00:15:00.319 "nvme_admin": false, 00:15:00.319 "nvme_io": false, 00:15:00.319 "nvme_io_md": false, 00:15:00.319 "write_zeroes": true, 00:15:00.319 "zcopy": true, 00:15:00.319 "get_zone_info": false, 00:15:00.319 "zone_management": false, 00:15:00.319 "zone_append": false, 00:15:00.319 "compare": false, 00:15:00.319 "compare_and_write": false, 00:15:00.319 "abort": true, 00:15:00.319 "seek_hole": false, 00:15:00.319 "seek_data": false, 00:15:00.319 "copy": true, 00:15:00.319 "nvme_iov_md": false 00:15:00.319 }, 00:15:00.319 "memory_domains": [ 00:15:00.319 { 00:15:00.319 "dma_device_id": "system", 00:15:00.319 "dma_device_type": 1 00:15:00.319 }, 00:15:00.319 { 00:15:00.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.319 "dma_device_type": 2 00:15:00.319 } 00:15:00.319 ], 00:15:00.319 "driver_specific": {} 00:15:00.319 } 00:15:00.319 ] 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.319 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.579 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.579 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.579 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.579 "name": "Existed_Raid", 00:15:00.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.579 "strip_size_kb": 64, 00:15:00.579 "state": "configuring", 00:15:00.579 "raid_level": "concat", 00:15:00.579 "superblock": false, 00:15:00.579 "num_base_bdevs": 4, 00:15:00.579 "num_base_bdevs_discovered": 3, 00:15:00.579 "num_base_bdevs_operational": 4, 00:15:00.579 "base_bdevs_list": [ 00:15:00.579 { 00:15:00.579 "name": "BaseBdev1", 00:15:00.579 "uuid": 
"e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:00.579 "is_configured": true, 00:15:00.579 "data_offset": 0, 00:15:00.579 "data_size": 65536 00:15:00.579 }, 00:15:00.579 { 00:15:00.579 "name": null, 00:15:00.579 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:00.579 "is_configured": false, 00:15:00.579 "data_offset": 0, 00:15:00.579 "data_size": 65536 00:15:00.579 }, 00:15:00.579 { 00:15:00.579 "name": "BaseBdev3", 00:15:00.579 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:00.579 "is_configured": true, 00:15:00.579 "data_offset": 0, 00:15:00.579 "data_size": 65536 00:15:00.579 }, 00:15:00.579 { 00:15:00.579 "name": "BaseBdev4", 00:15:00.579 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:00.579 "is_configured": true, 00:15:00.579 "data_offset": 0, 00:15:00.579 "data_size": 65536 00:15:00.579 } 00:15:00.579 ] 00:15:00.579 }' 00:15:00.579 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.579 15:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.147 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.147 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.147 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:01.147 15:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:01.406 [2024-07-12 15:03:27.176406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.406 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.974 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.974 "name": "Existed_Raid", 00:15:01.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.974 "strip_size_kb": 64, 00:15:01.974 "state": "configuring", 00:15:01.974 
"raid_level": "concat", 00:15:01.974 "superblock": false, 00:15:01.974 "num_base_bdevs": 4, 00:15:01.974 "num_base_bdevs_discovered": 2, 00:15:01.974 "num_base_bdevs_operational": 4, 00:15:01.974 "base_bdevs_list": [ 00:15:01.974 { 00:15:01.974 "name": "BaseBdev1", 00:15:01.974 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:01.974 "is_configured": true, 00:15:01.974 "data_offset": 0, 00:15:01.974 "data_size": 65536 00:15:01.974 }, 00:15:01.974 { 00:15:01.974 "name": null, 00:15:01.974 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:01.974 "is_configured": false, 00:15:01.974 "data_offset": 0, 00:15:01.974 "data_size": 65536 00:15:01.974 }, 00:15:01.974 { 00:15:01.974 "name": null, 00:15:01.974 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:01.974 "is_configured": false, 00:15:01.974 "data_offset": 0, 00:15:01.974 "data_size": 65536 00:15:01.974 }, 00:15:01.974 { 00:15:01.974 "name": "BaseBdev4", 00:15:01.974 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:01.974 "is_configured": true, 00:15:01.974 "data_offset": 0, 00:15:01.974 "data_size": 65536 00:15:01.974 } 00:15:01.974 ] 00:15:01.974 }' 00:15:01.974 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.974 15:03:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.234 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.234 15:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.493 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:02.493 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:02.752 [2024-07-12 15:03:28.392471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.752 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:03.011 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.011 "name": "Existed_Raid", 00:15:03.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.011 "strip_size_kb": 64, 00:15:03.011 "state": "configuring", 00:15:03.011 "raid_level": "concat", 00:15:03.011 "superblock": false, 00:15:03.011 "num_base_bdevs": 4, 00:15:03.011 "num_base_bdevs_discovered": 3, 00:15:03.011 "num_base_bdevs_operational": 4, 00:15:03.011 "base_bdevs_list": [ 00:15:03.011 { 00:15:03.011 "name": "BaseBdev1", 00:15:03.011 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:03.011 "is_configured": true, 00:15:03.011 "data_offset": 0, 00:15:03.011 "data_size": 65536 00:15:03.011 }, 00:15:03.011 { 00:15:03.011 "name": null, 00:15:03.011 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:03.011 "is_configured": false, 00:15:03.011 "data_offset": 0, 00:15:03.011 "data_size": 65536 00:15:03.011 }, 00:15:03.011 { 00:15:03.011 "name": "BaseBdev3", 00:15:03.011 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:03.011 "is_configured": true, 00:15:03.011 "data_offset": 0, 00:15:03.011 "data_size": 65536 00:15:03.011 }, 00:15:03.011 { 00:15:03.011 "name": "BaseBdev4", 00:15:03.011 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:03.011 "is_configured": true, 00:15:03.011 "data_offset": 0, 00:15:03.011 "data_size": 65536 00:15:03.011 } 00:15:03.011 ] 00:15:03.011 }' 00:15:03.011 15:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.011 15:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.270 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.270 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.529 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:03.529 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.788 [2024-07-12 15:03:29.532512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.788 15:03:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.788 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.354 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.354 "name": "Existed_Raid", 00:15:04.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.354 "strip_size_kb": 64, 00:15:04.354 "state": "configuring", 00:15:04.354 "raid_level": "concat", 00:15:04.354 "superblock": false, 00:15:04.354 "num_base_bdevs": 4, 00:15:04.354 "num_base_bdevs_discovered": 2, 00:15:04.354 "num_base_bdevs_operational": 4, 00:15:04.354 "base_bdevs_list": [ 00:15:04.354 { 00:15:04.354 "name": null, 00:15:04.354 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:04.354 "is_configured": false, 00:15:04.354 "data_offset": 0, 00:15:04.354 "data_size": 65536 00:15:04.354 }, 00:15:04.354 { 00:15:04.354 "name": null, 00:15:04.354 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:04.354 "is_configured": false, 00:15:04.354 "data_offset": 0, 00:15:04.354 "data_size": 65536 00:15:04.354 }, 00:15:04.354 { 00:15:04.354 "name": "BaseBdev3", 00:15:04.354 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:04.354 "is_configured": true, 00:15:04.354 "data_offset": 0, 00:15:04.354 "data_size": 65536 00:15:04.354 }, 00:15:04.354 { 00:15:04.354 "name": "BaseBdev4", 00:15:04.354 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:04.354 "is_configured": true, 00:15:04.354 "data_offset": 0, 00:15:04.354 "data_size": 65536 00:15:04.354 } 00:15:04.354 ] 00:15:04.354 }' 00:15:04.354 15:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.354 15:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.613 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.613 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.872 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:04.872 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:05.131 [2024-07-12 15:03:30.750582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.131 15:03:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.131 15:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.389 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.389 "name": "Existed_Raid", 00:15:05.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.389 "strip_size_kb": 64, 00:15:05.389 "state": "configuring", 00:15:05.389 "raid_level": "concat", 00:15:05.389 "superblock": false, 00:15:05.389 "num_base_bdevs": 4, 00:15:05.389 "num_base_bdevs_discovered": 3, 00:15:05.389 "num_base_bdevs_operational": 4, 00:15:05.389 "base_bdevs_list": [ 00:15:05.389 { 00:15:05.389 "name": null, 00:15:05.389 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:05.389 "is_configured": false, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "name": "BaseBdev2", 00:15:05.389 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:05.389 "is_configured": true, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "name": "BaseBdev3", 00:15:05.389 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:05.389 "is_configured": true, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "name": "BaseBdev4", 00:15:05.389 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:05.389 "is_configured": true, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 } 00:15:05.389 ] 00:15:05.389 }' 00:15:05.389 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.389 15:03:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.650 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.650 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.915 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:05.915 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:05.915 15:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e388def8-405f-11ef-b2a4-e9dca065e82e 00:15:06.484 [2024-07-12 15:03:32.250883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:06.484 [2024-07-12 15:03:32.250927] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ec2d6834f00 00:15:06.484 [2024-07-12 15:03:32.250934] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:15:06.484 [2024-07-12 15:03:32.250966] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ec2d6897e20 00:15:06.484 [2024-07-12 15:03:32.251080] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ec2d6834f00 00:15:06.484 [2024-07-12 15:03:32.251086] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3ec2d6834f00 00:15:06.484 [2024-07-12 15:03:32.251132] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.484 NewBaseBdev 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.484 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.743 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:07.009 [ 00:15:07.009 { 00:15:07.009 "name": "NewBaseBdev", 00:15:07.009 "aliases": [ 00:15:07.009 "e388def8-405f-11ef-b2a4-e9dca065e82e" 00:15:07.009 ], 00:15:07.009 "product_name": "Malloc disk", 00:15:07.009 "block_size": 512, 00:15:07.009 "num_blocks": 65536, 00:15:07.009 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:07.009 "assigned_rate_limits": { 00:15:07.009 "rw_ios_per_sec": 0, 00:15:07.009 "rw_mbytes_per_sec": 0, 00:15:07.009 "r_mbytes_per_sec": 0, 00:15:07.009 "w_mbytes_per_sec": 0 00:15:07.009 }, 00:15:07.009 "claimed": true, 00:15:07.009 "claim_type": "exclusive_write", 00:15:07.009 "zoned": false, 00:15:07.009 "supported_io_types": { 00:15:07.009 "read": true, 00:15:07.009 "write": true, 00:15:07.009 "unmap": true, 00:15:07.009 "flush": true, 00:15:07.009 "reset": true, 00:15:07.009 "nvme_admin": false, 00:15:07.009 "nvme_io": false, 00:15:07.009 "nvme_io_md": false, 00:15:07.009 "write_zeroes": true, 00:15:07.009 "zcopy": true, 00:15:07.009 "get_zone_info": false, 00:15:07.009 "zone_management": false, 00:15:07.009 "zone_append": false, 00:15:07.009 "compare": false, 00:15:07.009 "compare_and_write": false, 00:15:07.009 "abort": true, 00:15:07.009 "seek_hole": false, 00:15:07.009 "seek_data": false, 00:15:07.009 "copy": true, 00:15:07.009 "nvme_iov_md": false 00:15:07.009 }, 00:15:07.009 "memory_domains": [ 00:15:07.010 { 00:15:07.010 "dma_device_id": "system", 00:15:07.010 "dma_device_type": 1 00:15:07.010 }, 00:15:07.010 { 00:15:07.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.010 "dma_device_type": 2 00:15:07.010 } 00:15:07.010 ], 00:15:07.010 "driver_specific": {} 00:15:07.010 } 00:15:07.010 ] 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online 
concat 64 4 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.010 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.272 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.272 "name": "Existed_Raid", 00:15:07.272 "uuid": "e78b4472-405f-11ef-b2a4-e9dca065e82e", 00:15:07.272 "strip_size_kb": 64, 00:15:07.272 "state": "online", 00:15:07.272 "raid_level": "concat", 00:15:07.272 "superblock": false, 00:15:07.272 "num_base_bdevs": 4, 00:15:07.272 "num_base_bdevs_discovered": 4, 00:15:07.272 "num_base_bdevs_operational": 4, 00:15:07.272 "base_bdevs_list": [ 00:15:07.272 { 00:15:07.272 "name": "NewBaseBdev", 00:15:07.272 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:07.272 "is_configured": true, 00:15:07.272 "data_offset": 0, 00:15:07.272 "data_size": 65536 00:15:07.272 }, 00:15:07.272 { 00:15:07.272 "name": "BaseBdev2", 00:15:07.272 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:07.272 "is_configured": true, 00:15:07.272 "data_offset": 0, 00:15:07.272 "data_size": 65536 00:15:07.272 }, 00:15:07.272 { 00:15:07.272 "name": "BaseBdev3", 00:15:07.272 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:07.272 "is_configured": true, 00:15:07.272 "data_offset": 0, 00:15:07.272 "data_size": 65536 00:15:07.272 }, 00:15:07.272 { 00:15:07.272 "name": "BaseBdev4", 00:15:07.272 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:07.272 "is_configured": true, 00:15:07.272 "data_offset": 0, 00:15:07.272 "data_size": 65536 00:15:07.272 } 00:15:07.272 ] 00:15:07.272 }' 00:15:07.272 15:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.272 15:03:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:07.529 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:07.787 [2024-07-12 15:03:33.610790] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:08.045 "name": "Existed_Raid", 00:15:08.045 "aliases": [ 00:15:08.045 "e78b4472-405f-11ef-b2a4-e9dca065e82e" 00:15:08.045 ], 00:15:08.045 "product_name": "Raid Volume", 00:15:08.045 "block_size": 512, 00:15:08.045 "num_blocks": 262144, 00:15:08.045 "uuid": "e78b4472-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "assigned_rate_limits": { 00:15:08.045 "rw_ios_per_sec": 0, 00:15:08.045 "rw_mbytes_per_sec": 0, 00:15:08.045 "r_mbytes_per_sec": 0, 00:15:08.045 "w_mbytes_per_sec": 0 00:15:08.045 }, 00:15:08.045 "claimed": false, 00:15:08.045 "zoned": false, 00:15:08.045 "supported_io_types": { 00:15:08.045 "read": true, 00:15:08.045 "write": true, 00:15:08.045 "unmap": true, 00:15:08.045 "flush": true, 00:15:08.045 "reset": true, 00:15:08.045 "nvme_admin": false, 00:15:08.045 "nvme_io": false, 00:15:08.045 "nvme_io_md": false, 00:15:08.045 "write_zeroes": true, 00:15:08.045 "zcopy": false, 00:15:08.045 "get_zone_info": false, 00:15:08.045 "zone_management": false, 00:15:08.045 "zone_append": false, 00:15:08.045 "compare": false, 00:15:08.045 "compare_and_write": false, 00:15:08.045 "abort": false, 00:15:08.045 "seek_hole": false, 00:15:08.045 "seek_data": false, 00:15:08.045 "copy": false, 00:15:08.045 "nvme_iov_md": false 00:15:08.045 }, 00:15:08.045 "memory_domains": [ 00:15:08.045 { 00:15:08.045 "dma_device_id": "system", 00:15:08.045 "dma_device_type": 1 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.045 "dma_device_type": 2 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "system", 00:15:08.045 "dma_device_type": 1 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.045 "dma_device_type": 2 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "system", 00:15:08.045 "dma_device_type": 1 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.045 "dma_device_type": 2 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "system", 00:15:08.045 "dma_device_type": 1 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.045 "dma_device_type": 2 00:15:08.045 } 00:15:08.045 ], 00:15:08.045 "driver_specific": { 00:15:08.045 "raid": { 00:15:08.045 "uuid": "e78b4472-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "strip_size_kb": 64, 00:15:08.045 "state": "online", 00:15:08.045 "raid_level": "concat", 00:15:08.045 "superblock": false, 00:15:08.045 "num_base_bdevs": 4, 00:15:08.045 "num_base_bdevs_discovered": 4, 00:15:08.045 "num_base_bdevs_operational": 4, 00:15:08.045 "base_bdevs_list": [ 00:15:08.045 { 00:15:08.045 "name": "NewBaseBdev", 00:15:08.045 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "is_configured": true, 00:15:08.045 "data_offset": 0, 00:15:08.045 "data_size": 65536 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "name": "BaseBdev2", 00:15:08.045 "uuid": 
"e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "is_configured": true, 00:15:08.045 "data_offset": 0, 00:15:08.045 "data_size": 65536 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "name": "BaseBdev3", 00:15:08.045 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "is_configured": true, 00:15:08.045 "data_offset": 0, 00:15:08.045 "data_size": 65536 00:15:08.045 }, 00:15:08.045 { 00:15:08.045 "name": "BaseBdev4", 00:15:08.045 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:08.045 "is_configured": true, 00:15:08.045 "data_offset": 0, 00:15:08.045 "data_size": 65536 00:15:08.045 } 00:15:08.045 ] 00:15:08.045 } 00:15:08.045 } 00:15:08.045 }' 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:08.045 BaseBdev2 00:15:08.045 BaseBdev3 00:15:08.045 BaseBdev4' 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:08.045 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.304 "name": "NewBaseBdev", 00:15:08.304 "aliases": [ 00:15:08.304 "e388def8-405f-11ef-b2a4-e9dca065e82e" 00:15:08.304 ], 00:15:08.304 "product_name": "Malloc disk", 00:15:08.304 "block_size": 512, 00:15:08.304 "num_blocks": 65536, 00:15:08.304 "uuid": "e388def8-405f-11ef-b2a4-e9dca065e82e", 00:15:08.304 "assigned_rate_limits": { 00:15:08.304 "rw_ios_per_sec": 0, 00:15:08.304 "rw_mbytes_per_sec": 0, 00:15:08.304 "r_mbytes_per_sec": 0, 00:15:08.304 "w_mbytes_per_sec": 0 00:15:08.304 }, 00:15:08.304 "claimed": true, 00:15:08.304 "claim_type": "exclusive_write", 00:15:08.304 "zoned": false, 00:15:08.304 "supported_io_types": { 00:15:08.304 "read": true, 00:15:08.304 "write": true, 00:15:08.304 "unmap": true, 00:15:08.304 "flush": true, 00:15:08.304 "reset": true, 00:15:08.304 "nvme_admin": false, 00:15:08.304 "nvme_io": false, 00:15:08.304 "nvme_io_md": false, 00:15:08.304 "write_zeroes": true, 00:15:08.304 "zcopy": true, 00:15:08.304 "get_zone_info": false, 00:15:08.304 "zone_management": false, 00:15:08.304 "zone_append": false, 00:15:08.304 "compare": false, 00:15:08.304 "compare_and_write": false, 00:15:08.304 "abort": true, 00:15:08.304 "seek_hole": false, 00:15:08.304 "seek_data": false, 00:15:08.304 "copy": true, 00:15:08.304 "nvme_iov_md": false 00:15:08.304 }, 00:15:08.304 "memory_domains": [ 00:15:08.304 { 00:15:08.304 "dma_device_id": "system", 00:15:08.304 "dma_device_type": 1 00:15:08.304 }, 00:15:08.304 { 00:15:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.304 "dma_device_type": 2 00:15:08.304 } 00:15:08.304 ], 00:15:08.304 "driver_specific": {} 00:15:08.304 }' 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:08.304 15:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.563 "name": "BaseBdev2", 00:15:08.563 "aliases": [ 00:15:08.563 "e0e354b7-405f-11ef-b2a4-e9dca065e82e" 00:15:08.563 ], 00:15:08.563 "product_name": "Malloc disk", 00:15:08.563 "block_size": 512, 00:15:08.563 "num_blocks": 65536, 00:15:08.563 "uuid": "e0e354b7-405f-11ef-b2a4-e9dca065e82e", 00:15:08.563 "assigned_rate_limits": { 00:15:08.563 "rw_ios_per_sec": 0, 00:15:08.563 "rw_mbytes_per_sec": 0, 00:15:08.563 "r_mbytes_per_sec": 0, 00:15:08.563 "w_mbytes_per_sec": 0 00:15:08.563 }, 00:15:08.563 "claimed": true, 00:15:08.563 "claim_type": "exclusive_write", 00:15:08.563 "zoned": false, 00:15:08.563 "supported_io_types": { 00:15:08.563 "read": true, 00:15:08.563 "write": true, 00:15:08.563 "unmap": true, 00:15:08.563 "flush": true, 00:15:08.563 "reset": true, 00:15:08.563 "nvme_admin": false, 00:15:08.563 "nvme_io": false, 00:15:08.563 "nvme_io_md": false, 00:15:08.563 "write_zeroes": true, 00:15:08.563 "zcopy": true, 00:15:08.563 "get_zone_info": false, 00:15:08.563 "zone_management": false, 00:15:08.563 "zone_append": false, 00:15:08.563 "compare": false, 00:15:08.563 "compare_and_write": false, 00:15:08.563 "abort": true, 00:15:08.563 "seek_hole": false, 00:15:08.563 "seek_data": false, 00:15:08.563 "copy": true, 00:15:08.563 "nvme_iov_md": false 00:15:08.563 }, 00:15:08.563 "memory_domains": [ 00:15:08.563 { 00:15:08.563 "dma_device_id": "system", 00:15:08.563 "dma_device_type": 1 00:15:08.563 }, 00:15:08.563 { 00:15:08.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.563 "dma_device_type": 2 00:15:08.563 } 00:15:08.563 ], 00:15:08.563 "driver_specific": {} 00:15:08.563 }' 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:08.563 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.821 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.821 "name": "BaseBdev3", 00:15:08.821 "aliases": [ 00:15:08.822 "e15cce37-405f-11ef-b2a4-e9dca065e82e" 00:15:08.822 ], 00:15:08.822 "product_name": "Malloc disk", 00:15:08.822 "block_size": 512, 00:15:08.822 "num_blocks": 65536, 00:15:08.822 "uuid": "e15cce37-405f-11ef-b2a4-e9dca065e82e", 00:15:08.822 "assigned_rate_limits": { 00:15:08.822 "rw_ios_per_sec": 0, 00:15:08.822 "rw_mbytes_per_sec": 0, 00:15:08.822 "r_mbytes_per_sec": 0, 00:15:08.822 "w_mbytes_per_sec": 0 00:15:08.822 }, 00:15:08.822 "claimed": true, 00:15:08.822 "claim_type": "exclusive_write", 00:15:08.822 "zoned": false, 00:15:08.822 "supported_io_types": { 00:15:08.822 "read": true, 00:15:08.822 "write": true, 00:15:08.822 "unmap": true, 00:15:08.822 "flush": true, 00:15:08.822 "reset": true, 00:15:08.822 "nvme_admin": false, 00:15:08.822 "nvme_io": false, 00:15:08.822 "nvme_io_md": false, 00:15:08.822 "write_zeroes": true, 00:15:08.822 "zcopy": true, 00:15:08.822 "get_zone_info": false, 00:15:08.822 "zone_management": false, 00:15:08.822 "zone_append": false, 00:15:08.822 "compare": false, 00:15:08.822 "compare_and_write": false, 00:15:08.822 "abort": true, 00:15:08.822 "seek_hole": false, 00:15:08.822 "seek_data": false, 00:15:08.822 "copy": true, 00:15:08.822 "nvme_iov_md": false 00:15:08.822 }, 00:15:08.822 "memory_domains": [ 00:15:08.822 { 00:15:08.822 "dma_device_id": "system", 00:15:08.822 "dma_device_type": 1 00:15:08.822 }, 00:15:08.822 { 00:15:08.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.822 "dma_device_type": 2 00:15:08.822 } 00:15:08.822 ], 00:15:08.822 "driver_specific": {} 00:15:08.822 }' 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:08.822 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.388 "name": "BaseBdev4", 00:15:09.388 "aliases": [ 00:15:09.388 "e1dc5fea-405f-11ef-b2a4-e9dca065e82e" 00:15:09.388 ], 00:15:09.388 "product_name": "Malloc disk", 00:15:09.388 "block_size": 512, 00:15:09.388 "num_blocks": 65536, 00:15:09.388 "uuid": "e1dc5fea-405f-11ef-b2a4-e9dca065e82e", 00:15:09.388 "assigned_rate_limits": { 00:15:09.388 "rw_ios_per_sec": 0, 00:15:09.388 "rw_mbytes_per_sec": 0, 00:15:09.388 "r_mbytes_per_sec": 0, 00:15:09.388 "w_mbytes_per_sec": 0 00:15:09.388 }, 00:15:09.388 "claimed": true, 00:15:09.388 "claim_type": "exclusive_write", 00:15:09.388 "zoned": false, 00:15:09.388 "supported_io_types": { 00:15:09.388 "read": true, 00:15:09.388 "write": true, 00:15:09.388 "unmap": true, 00:15:09.388 "flush": true, 00:15:09.388 "reset": true, 00:15:09.388 "nvme_admin": false, 00:15:09.388 "nvme_io": false, 00:15:09.388 "nvme_io_md": false, 00:15:09.388 "write_zeroes": true, 00:15:09.388 "zcopy": true, 00:15:09.388 "get_zone_info": false, 00:15:09.388 "zone_management": false, 00:15:09.388 "zone_append": false, 00:15:09.388 "compare": false, 00:15:09.388 "compare_and_write": false, 00:15:09.388 "abort": true, 00:15:09.388 "seek_hole": false, 00:15:09.388 "seek_data": false, 00:15:09.388 "copy": true, 00:15:09.388 "nvme_iov_md": false 00:15:09.388 }, 00:15:09.388 "memory_domains": [ 00:15:09.388 { 00:15:09.388 "dma_device_id": "system", 00:15:09.388 "dma_device_type": 1 00:15:09.388 }, 00:15:09.388 { 00:15:09.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.388 "dma_device_type": 2 00:15:09.388 } 00:15:09.388 ], 00:15:09.388 "driver_specific": {} 00:15:09.388 }' 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.388 15:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.646 [2024-07-12 15:03:35.234810] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.646 [2024-07-12 15:03:35.234837] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.646 [2024-07-12 15:03:35.234878] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.646 [2024-07-12 15:03:35.234894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.646 [2024-07-12 15:03:35.234899] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ec2d6834f00 name Existed_Raid, state offline 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60698 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60698 ']' 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60698 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60698 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:09.646 killing process with pid 60698 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60698' 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60698 00:15:09.646 [2024-07-12 15:03:35.263658] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60698 00:15:09.646 [2024-07-12 15:03:35.287678] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:09.646 00:15:09.646 real 0m28.439s 00:15:09.646 user 0m52.375s 00:15:09.646 sys 0m3.654s 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.646 15:03:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.646 ************************************ 00:15:09.646 END TEST raid_state_function_test 00:15:09.646 ************************************ 00:15:09.904 15:03:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:09.904 15:03:35 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:09.904 15:03:35 bdev_raid -- common/autotest_common.sh@1099 -- # 
'[' 5 -le 1 ']' 00:15:09.904 15:03:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.904 15:03:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.904 ************************************ 00:15:09.904 START TEST raid_state_function_test_sb 00:15:09.904 ************************************ 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.904 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61521 00:15:09.905 Process raid pid: 61521 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61521' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61521 /var/tmp/spdk-raid.sock 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61521 ']' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.905 15:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.905 [2024-07-12 15:03:35.519535] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:15:09.905 [2024-07-12 15:03:35.519755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:10.472 EAL: TSC is not safe to use in SMP mode 00:15:10.472 EAL: TSC is not invariant 00:15:10.472 [2024-07-12 15:03:36.094871] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.472 [2024-07-12 15:03:36.193819] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
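For readers following the trace, a condensed sketch of the RPC sequence this superblock variant drives against the freshly started bdev_svc app (paths, socket, and RPC names are taken from this log; the real test interleaves the malloc creates with repeated raid_create attempts, which the sketch does not reproduce):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # create the four 32 MiB / 512-byte-block malloc base bdevs
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
      "$rpc" -s "$sock" bdev_wait_for_examine
  done
  # assemble them into a concat raid with a 64 KiB strip and superblock (-s)
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # inspect the raid state, then tear it down
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  "$rpc" -s "$sock" bdev_raid_delete Existed_Raid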
00:15:10.472 [2024-07-12 15:03:36.196039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.472 [2024-07-12 15:03:36.196795] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.472 [2024-07-12 15:03:36.196810] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:11.039 [2024-07-12 15:03:36.817692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.039 [2024-07-12 15:03:36.817749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.039 [2024-07-12 15:03:36.817755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.039 [2024-07-12 15:03:36.817764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.039 [2024-07-12 15:03:36.817767] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.039 [2024-07-12 15:03:36.817775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.039 [2024-07-12 15:03:36.817778] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:11.039 [2024-07-12 15:03:36.817786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.039 15:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.298 15:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.298 "name": "Existed_Raid", 00:15:11.298 "uuid": 
"ea4418c6-405f-11ef-b2a4-e9dca065e82e", 00:15:11.298 "strip_size_kb": 64, 00:15:11.298 "state": "configuring", 00:15:11.298 "raid_level": "concat", 00:15:11.298 "superblock": true, 00:15:11.298 "num_base_bdevs": 4, 00:15:11.298 "num_base_bdevs_discovered": 0, 00:15:11.298 "num_base_bdevs_operational": 4, 00:15:11.298 "base_bdevs_list": [ 00:15:11.298 { 00:15:11.298 "name": "BaseBdev1", 00:15:11.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.298 "is_configured": false, 00:15:11.298 "data_offset": 0, 00:15:11.298 "data_size": 0 00:15:11.298 }, 00:15:11.298 { 00:15:11.298 "name": "BaseBdev2", 00:15:11.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.298 "is_configured": false, 00:15:11.298 "data_offset": 0, 00:15:11.298 "data_size": 0 00:15:11.298 }, 00:15:11.298 { 00:15:11.298 "name": "BaseBdev3", 00:15:11.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.298 "is_configured": false, 00:15:11.298 "data_offset": 0, 00:15:11.298 "data_size": 0 00:15:11.298 }, 00:15:11.298 { 00:15:11.298 "name": "BaseBdev4", 00:15:11.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.298 "is_configured": false, 00:15:11.298 "data_offset": 0, 00:15:11.298 "data_size": 0 00:15:11.298 } 00:15:11.298 ] 00:15:11.298 }' 00:15:11.298 15:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.298 15:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.865 15:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.865 [2024-07-12 15:03:37.601740] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.865 [2024-07-12 15:03:37.601777] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b563634500 name Existed_Raid, state configuring 00:15:11.865 15:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:12.124 [2024-07-12 15:03:37.893819] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.124 [2024-07-12 15:03:37.893886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.124 [2024-07-12 15:03:37.893892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.124 [2024-07-12 15:03:37.893917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.124 [2024-07-12 15:03:37.893921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.124 [2024-07-12 15:03:37.893929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.124 [2024-07-12 15:03:37.893933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:12.124 [2024-07-12 15:03:37.893940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.124 15:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.400 [2024-07-12 15:03:38.175077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:15:12.400 BaseBdev1 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.400 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.679 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.938 [ 00:15:12.938 { 00:15:12.938 "name": "BaseBdev1", 00:15:12.938 "aliases": [ 00:15:12.938 "eb13086e-405f-11ef-b2a4-e9dca065e82e" 00:15:12.938 ], 00:15:12.938 "product_name": "Malloc disk", 00:15:12.938 "block_size": 512, 00:15:12.938 "num_blocks": 65536, 00:15:12.938 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:12.938 "assigned_rate_limits": { 00:15:12.938 "rw_ios_per_sec": 0, 00:15:12.938 "rw_mbytes_per_sec": 0, 00:15:12.938 "r_mbytes_per_sec": 0, 00:15:12.938 "w_mbytes_per_sec": 0 00:15:12.938 }, 00:15:12.938 "claimed": true, 00:15:12.938 "claim_type": "exclusive_write", 00:15:12.938 "zoned": false, 00:15:12.938 "supported_io_types": { 00:15:12.938 "read": true, 00:15:12.938 "write": true, 00:15:12.938 "unmap": true, 00:15:12.938 "flush": true, 00:15:12.938 "reset": true, 00:15:12.938 "nvme_admin": false, 00:15:12.938 "nvme_io": false, 00:15:12.938 "nvme_io_md": false, 00:15:12.938 "write_zeroes": true, 00:15:12.938 "zcopy": true, 00:15:12.938 "get_zone_info": false, 00:15:12.938 "zone_management": false, 00:15:12.938 "zone_append": false, 00:15:12.938 "compare": false, 00:15:12.938 "compare_and_write": false, 00:15:12.938 "abort": true, 00:15:12.938 "seek_hole": false, 00:15:12.938 "seek_data": false, 00:15:12.938 "copy": true, 00:15:12.938 "nvme_iov_md": false 00:15:12.938 }, 00:15:12.938 "memory_domains": [ 00:15:12.938 { 00:15:12.938 "dma_device_id": "system", 00:15:12.938 "dma_device_type": 1 00:15:12.938 }, 00:15:12.938 { 00:15:12.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.938 "dma_device_type": 2 00:15:12.938 } 00:15:12.938 ], 00:15:12.938 "driver_specific": {} 00:15:12.938 } 00:15:12.938 ] 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.938 15:03:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.938 15:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.505 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.505 "name": "Existed_Raid", 00:15:13.505 "uuid": "eae84cd7-405f-11ef-b2a4-e9dca065e82e", 00:15:13.505 "strip_size_kb": 64, 00:15:13.505 "state": "configuring", 00:15:13.505 "raid_level": "concat", 00:15:13.505 "superblock": true, 00:15:13.505 "num_base_bdevs": 4, 00:15:13.505 "num_base_bdevs_discovered": 1, 00:15:13.505 "num_base_bdevs_operational": 4, 00:15:13.505 "base_bdevs_list": [ 00:15:13.505 { 00:15:13.505 "name": "BaseBdev1", 00:15:13.505 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:13.505 "is_configured": true, 00:15:13.505 "data_offset": 2048, 00:15:13.505 "data_size": 63488 00:15:13.505 }, 00:15:13.505 { 00:15:13.505 "name": "BaseBdev2", 00:15:13.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.505 "is_configured": false, 00:15:13.505 "data_offset": 0, 00:15:13.505 "data_size": 0 00:15:13.505 }, 00:15:13.505 { 00:15:13.505 "name": "BaseBdev3", 00:15:13.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.505 "is_configured": false, 00:15:13.505 "data_offset": 0, 00:15:13.505 "data_size": 0 00:15:13.505 }, 00:15:13.505 { 00:15:13.505 "name": "BaseBdev4", 00:15:13.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.505 "is_configured": false, 00:15:13.505 "data_offset": 0, 00:15:13.505 "data_size": 0 00:15:13.505 } 00:15:13.505 ] 00:15:13.505 }' 00:15:13.505 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.505 15:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.505 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.778 [2024-07-12 15:03:39.557947] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.778 [2024-07-12 15:03:39.557985] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b563634500 name Existed_Raid, state configuring 00:15:13.778 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:14.037 [2024-07-12 15:03:39.813961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.037 [2024-07-12 15:03:39.814856] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.037 [2024-07-12 15:03:39.814898] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.037 [2024-07-12 15:03:39.814904] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.037 [2024-07-12 15:03:39.814912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.037 [2024-07-12 15:03:39.814916] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:14.037 [2024-07-12 15:03:39.814924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.037 15:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.296 15:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.296 "name": "Existed_Raid", 00:15:14.297 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:14.297 "strip_size_kb": 64, 00:15:14.297 "state": "configuring", 00:15:14.297 "raid_level": "concat", 00:15:14.297 "superblock": true, 00:15:14.297 "num_base_bdevs": 4, 00:15:14.297 "num_base_bdevs_discovered": 1, 00:15:14.297 "num_base_bdevs_operational": 4, 00:15:14.297 "base_bdevs_list": [ 00:15:14.297 { 00:15:14.297 "name": "BaseBdev1", 00:15:14.297 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:14.297 "is_configured": true, 00:15:14.297 "data_offset": 2048, 00:15:14.297 "data_size": 63488 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev2", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev3", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev4", 
00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 } 00:15:14.297 ] 00:15:14.297 }' 00:15:14.297 15:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.297 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.864 [2024-07-12 15:03:40.638137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.864 BaseBdev2 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.864 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.124 15:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.383 [ 00:15:15.383 { 00:15:15.383 "name": "BaseBdev2", 00:15:15.383 "aliases": [ 00:15:15.383 "ec8b077c-405f-11ef-b2a4-e9dca065e82e" 00:15:15.383 ], 00:15:15.383 "product_name": "Malloc disk", 00:15:15.383 "block_size": 512, 00:15:15.383 "num_blocks": 65536, 00:15:15.383 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:15.383 "assigned_rate_limits": { 00:15:15.383 "rw_ios_per_sec": 0, 00:15:15.383 "rw_mbytes_per_sec": 0, 00:15:15.383 "r_mbytes_per_sec": 0, 00:15:15.383 "w_mbytes_per_sec": 0 00:15:15.383 }, 00:15:15.383 "claimed": true, 00:15:15.383 "claim_type": "exclusive_write", 00:15:15.383 "zoned": false, 00:15:15.383 "supported_io_types": { 00:15:15.383 "read": true, 00:15:15.383 "write": true, 00:15:15.383 "unmap": true, 00:15:15.383 "flush": true, 00:15:15.383 "reset": true, 00:15:15.383 "nvme_admin": false, 00:15:15.383 "nvme_io": false, 00:15:15.383 "nvme_io_md": false, 00:15:15.383 "write_zeroes": true, 00:15:15.383 "zcopy": true, 00:15:15.383 "get_zone_info": false, 00:15:15.383 "zone_management": false, 00:15:15.383 "zone_append": false, 00:15:15.383 "compare": false, 00:15:15.383 "compare_and_write": false, 00:15:15.383 "abort": true, 00:15:15.383 "seek_hole": false, 00:15:15.383 "seek_data": false, 00:15:15.383 "copy": true, 00:15:15.383 "nvme_iov_md": false 00:15:15.383 }, 00:15:15.383 "memory_domains": [ 00:15:15.383 { 00:15:15.383 "dma_device_id": "system", 00:15:15.383 "dma_device_type": 1 00:15:15.383 }, 00:15:15.383 { 00:15:15.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.383 "dma_device_type": 2 00:15:15.383 } 00:15:15.383 ], 00:15:15.383 "driver_specific": {} 00:15:15.383 } 00:15:15.383 ] 00:15:15.383 15:03:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.383 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.664 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.664 "name": "Existed_Raid", 00:15:15.664 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:15.664 "strip_size_kb": 64, 00:15:15.664 "state": "configuring", 00:15:15.664 "raid_level": "concat", 00:15:15.664 "superblock": true, 00:15:15.664 "num_base_bdevs": 4, 00:15:15.664 "num_base_bdevs_discovered": 2, 00:15:15.664 "num_base_bdevs_operational": 4, 00:15:15.664 "base_bdevs_list": [ 00:15:15.664 { 00:15:15.664 "name": "BaseBdev1", 00:15:15.664 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:15.664 "is_configured": true, 00:15:15.664 "data_offset": 2048, 00:15:15.664 "data_size": 63488 00:15:15.664 }, 00:15:15.664 { 00:15:15.664 "name": "BaseBdev2", 00:15:15.664 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:15.664 "is_configured": true, 00:15:15.664 "data_offset": 2048, 00:15:15.664 "data_size": 63488 00:15:15.664 }, 00:15:15.664 { 00:15:15.664 "name": "BaseBdev3", 00:15:15.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.664 "is_configured": false, 00:15:15.664 "data_offset": 0, 00:15:15.664 "data_size": 0 00:15:15.664 }, 00:15:15.664 { 00:15:15.664 "name": "BaseBdev4", 00:15:15.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.664 "is_configured": false, 00:15:15.664 "data_offset": 0, 00:15:15.664 "data_size": 0 00:15:15.664 } 00:15:15.664 ] 00:15:15.664 }' 00:15:15.664 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.664 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.923 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.182 [2024-07-12 15:03:41.918290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.182 BaseBdev3 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.182 15:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.440 15:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.699 [ 00:15:16.699 { 00:15:16.699 "name": "BaseBdev3", 00:15:16.699 "aliases": [ 00:15:16.699 "ed4e5d75-405f-11ef-b2a4-e9dca065e82e" 00:15:16.699 ], 00:15:16.699 "product_name": "Malloc disk", 00:15:16.699 "block_size": 512, 00:15:16.699 "num_blocks": 65536, 00:15:16.699 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:16.699 "assigned_rate_limits": { 00:15:16.699 "rw_ios_per_sec": 0, 00:15:16.699 "rw_mbytes_per_sec": 0, 00:15:16.699 "r_mbytes_per_sec": 0, 00:15:16.699 "w_mbytes_per_sec": 0 00:15:16.699 }, 00:15:16.699 "claimed": true, 00:15:16.699 "claim_type": "exclusive_write", 00:15:16.699 "zoned": false, 00:15:16.699 "supported_io_types": { 00:15:16.699 "read": true, 00:15:16.699 "write": true, 00:15:16.699 "unmap": true, 00:15:16.699 "flush": true, 00:15:16.699 "reset": true, 00:15:16.699 "nvme_admin": false, 00:15:16.699 "nvme_io": false, 00:15:16.699 "nvme_io_md": false, 00:15:16.699 "write_zeroes": true, 00:15:16.699 "zcopy": true, 00:15:16.699 "get_zone_info": false, 00:15:16.699 "zone_management": false, 00:15:16.699 "zone_append": false, 00:15:16.699 "compare": false, 00:15:16.699 "compare_and_write": false, 00:15:16.699 "abort": true, 00:15:16.699 "seek_hole": false, 00:15:16.699 "seek_data": false, 00:15:16.699 "copy": true, 00:15:16.699 "nvme_iov_md": false 00:15:16.699 }, 00:15:16.699 "memory_domains": [ 00:15:16.699 { 00:15:16.699 "dma_device_id": "system", 00:15:16.699 "dma_device_type": 1 00:15:16.699 }, 00:15:16.699 { 00:15:16.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.699 "dma_device_type": 2 00:15:16.699 } 00:15:16.699 ], 00:15:16.699 "driver_specific": {} 00:15:16.699 } 00:15:16.699 ] 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.699 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.958 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.958 "name": "Existed_Raid", 00:15:16.958 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:16.958 "strip_size_kb": 64, 00:15:16.958 "state": "configuring", 00:15:16.958 "raid_level": "concat", 00:15:16.958 "superblock": true, 00:15:16.958 "num_base_bdevs": 4, 00:15:16.958 "num_base_bdevs_discovered": 3, 00:15:16.958 "num_base_bdevs_operational": 4, 00:15:16.958 "base_bdevs_list": [ 00:15:16.958 { 00:15:16.958 "name": "BaseBdev1", 00:15:16.958 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:16.958 "is_configured": true, 00:15:16.958 "data_offset": 2048, 00:15:16.958 "data_size": 63488 00:15:16.958 }, 00:15:16.958 { 00:15:16.958 "name": "BaseBdev2", 00:15:16.958 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:16.958 "is_configured": true, 00:15:16.958 "data_offset": 2048, 00:15:16.958 "data_size": 63488 00:15:16.958 }, 00:15:16.958 { 00:15:16.958 "name": "BaseBdev3", 00:15:16.958 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:16.958 "is_configured": true, 00:15:16.958 "data_offset": 2048, 00:15:16.958 "data_size": 63488 00:15:16.958 }, 00:15:16.958 { 00:15:16.958 "name": "BaseBdev4", 00:15:16.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.958 "is_configured": false, 00:15:16.958 "data_offset": 0, 00:15:16.958 "data_size": 0 00:15:16.958 } 00:15:16.958 ] 00:15:16.958 }' 00:15:16.958 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.958 15:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.250 15:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.509 [2024-07-12 15:03:43.198370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.509 [2024-07-12 15:03:43.198442] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b563634a00 00:15:17.509 [2024-07-12 15:03:43.198449] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:17.509 [2024-07-12 
15:03:43.198472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b563697e20 00:15:17.509 [2024-07-12 15:03:43.198526] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b563634a00 00:15:17.509 [2024-07-12 15:03:43.198530] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10b563634a00 00:15:17.509 [2024-07-12 15:03:43.198552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.509 BaseBdev4 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.509 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.767 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:18.026 [ 00:15:18.026 { 00:15:18.026 "name": "BaseBdev4", 00:15:18.026 "aliases": [ 00:15:18.026 "ee11b053-405f-11ef-b2a4-e9dca065e82e" 00:15:18.026 ], 00:15:18.026 "product_name": "Malloc disk", 00:15:18.026 "block_size": 512, 00:15:18.026 "num_blocks": 65536, 00:15:18.026 "uuid": "ee11b053-405f-11ef-b2a4-e9dca065e82e", 00:15:18.026 "assigned_rate_limits": { 00:15:18.026 "rw_ios_per_sec": 0, 00:15:18.026 "rw_mbytes_per_sec": 0, 00:15:18.026 "r_mbytes_per_sec": 0, 00:15:18.026 "w_mbytes_per_sec": 0 00:15:18.026 }, 00:15:18.026 "claimed": true, 00:15:18.026 "claim_type": "exclusive_write", 00:15:18.026 "zoned": false, 00:15:18.026 "supported_io_types": { 00:15:18.026 "read": true, 00:15:18.026 "write": true, 00:15:18.026 "unmap": true, 00:15:18.026 "flush": true, 00:15:18.026 "reset": true, 00:15:18.026 "nvme_admin": false, 00:15:18.026 "nvme_io": false, 00:15:18.026 "nvme_io_md": false, 00:15:18.026 "write_zeroes": true, 00:15:18.026 "zcopy": true, 00:15:18.026 "get_zone_info": false, 00:15:18.026 "zone_management": false, 00:15:18.026 "zone_append": false, 00:15:18.026 "compare": false, 00:15:18.026 "compare_and_write": false, 00:15:18.026 "abort": true, 00:15:18.026 "seek_hole": false, 00:15:18.026 "seek_data": false, 00:15:18.026 "copy": true, 00:15:18.026 "nvme_iov_md": false 00:15:18.026 }, 00:15:18.026 "memory_domains": [ 00:15:18.026 { 00:15:18.026 "dma_device_id": "system", 00:15:18.026 "dma_device_type": 1 00:15:18.026 }, 00:15:18.026 { 00:15:18.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.026 "dma_device_type": 2 00:15:18.026 } 00:15:18.026 ], 00:15:18.026 "driver_specific": {} 00:15:18.026 } 00:15:18.026 ] 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.026 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.027 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.027 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.027 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.027 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.027 15:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.286 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.286 "name": "Existed_Raid", 00:15:18.286 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:18.286 "strip_size_kb": 64, 00:15:18.286 "state": "online", 00:15:18.286 "raid_level": "concat", 00:15:18.286 "superblock": true, 00:15:18.286 "num_base_bdevs": 4, 00:15:18.286 "num_base_bdevs_discovered": 4, 00:15:18.286 "num_base_bdevs_operational": 4, 00:15:18.286 "base_bdevs_list": [ 00:15:18.286 { 00:15:18.286 "name": "BaseBdev1", 00:15:18.286 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:18.286 "is_configured": true, 00:15:18.286 "data_offset": 2048, 00:15:18.286 "data_size": 63488 00:15:18.286 }, 00:15:18.286 { 00:15:18.286 "name": "BaseBdev2", 00:15:18.286 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:18.286 "is_configured": true, 00:15:18.286 "data_offset": 2048, 00:15:18.286 "data_size": 63488 00:15:18.286 }, 00:15:18.286 { 00:15:18.286 "name": "BaseBdev3", 00:15:18.286 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:18.286 "is_configured": true, 00:15:18.286 "data_offset": 2048, 00:15:18.286 "data_size": 63488 00:15:18.286 }, 00:15:18.286 { 00:15:18.286 "name": "BaseBdev4", 00:15:18.286 "uuid": "ee11b053-405f-11ef-b2a4-e9dca065e82e", 00:15:18.286 "is_configured": true, 00:15:18.286 "data_offset": 2048, 00:15:18.286 "data_size": 63488 00:15:18.286 } 00:15:18.286 ] 00:15:18.286 }' 00:15:18.286 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.286 15:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:18.852 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:19.109 [2024-07-12 15:03:44.714319] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.109 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:19.109 "name": "Existed_Raid", 00:15:19.109 "aliases": [ 00:15:19.109 "ec0d4a65-405f-11ef-b2a4-e9dca065e82e" 00:15:19.109 ], 00:15:19.109 "product_name": "Raid Volume", 00:15:19.109 "block_size": 512, 00:15:19.109 "num_blocks": 253952, 00:15:19.109 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:19.109 "assigned_rate_limits": { 00:15:19.109 "rw_ios_per_sec": 0, 00:15:19.109 "rw_mbytes_per_sec": 0, 00:15:19.109 "r_mbytes_per_sec": 0, 00:15:19.109 "w_mbytes_per_sec": 0 00:15:19.109 }, 00:15:19.109 "claimed": false, 00:15:19.109 "zoned": false, 00:15:19.109 "supported_io_types": { 00:15:19.109 "read": true, 00:15:19.109 "write": true, 00:15:19.109 "unmap": true, 00:15:19.109 "flush": true, 00:15:19.109 "reset": true, 00:15:19.109 "nvme_admin": false, 00:15:19.109 "nvme_io": false, 00:15:19.109 "nvme_io_md": false, 00:15:19.109 "write_zeroes": true, 00:15:19.109 "zcopy": false, 00:15:19.109 "get_zone_info": false, 00:15:19.109 "zone_management": false, 00:15:19.109 "zone_append": false, 00:15:19.109 "compare": false, 00:15:19.109 "compare_and_write": false, 00:15:19.109 "abort": false, 00:15:19.109 "seek_hole": false, 00:15:19.109 "seek_data": false, 00:15:19.109 "copy": false, 00:15:19.109 "nvme_iov_md": false 00:15:19.109 }, 00:15:19.109 "memory_domains": [ 00:15:19.109 { 00:15:19.109 "dma_device_id": "system", 00:15:19.109 "dma_device_type": 1 00:15:19.109 }, 00:15:19.109 { 00:15:19.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.109 "dma_device_type": 2 00:15:19.109 }, 00:15:19.109 { 00:15:19.109 "dma_device_id": "system", 00:15:19.109 "dma_device_type": 1 00:15:19.109 }, 00:15:19.109 { 00:15:19.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.109 "dma_device_type": 2 00:15:19.109 }, 00:15:19.109 { 00:15:19.110 "dma_device_id": "system", 00:15:19.110 "dma_device_type": 1 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.110 "dma_device_type": 2 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "dma_device_id": "system", 00:15:19.110 "dma_device_type": 1 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.110 "dma_device_type": 2 00:15:19.110 } 00:15:19.110 ], 00:15:19.110 "driver_specific": { 00:15:19.110 "raid": { 00:15:19.110 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:19.110 "strip_size_kb": 64, 00:15:19.110 "state": "online", 00:15:19.110 "raid_level": "concat", 00:15:19.110 "superblock": true, 00:15:19.110 "num_base_bdevs": 4, 00:15:19.110 "num_base_bdevs_discovered": 4, 00:15:19.110 "num_base_bdevs_operational": 4, 00:15:19.110 "base_bdevs_list": [ 00:15:19.110 { 00:15:19.110 "name": "BaseBdev1", 00:15:19.110 "uuid": 
"eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:19.110 "is_configured": true, 00:15:19.110 "data_offset": 2048, 00:15:19.110 "data_size": 63488 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "name": "BaseBdev2", 00:15:19.110 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:19.110 "is_configured": true, 00:15:19.110 "data_offset": 2048, 00:15:19.110 "data_size": 63488 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "name": "BaseBdev3", 00:15:19.110 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:19.110 "is_configured": true, 00:15:19.110 "data_offset": 2048, 00:15:19.110 "data_size": 63488 00:15:19.110 }, 00:15:19.110 { 00:15:19.110 "name": "BaseBdev4", 00:15:19.110 "uuid": "ee11b053-405f-11ef-b2a4-e9dca065e82e", 00:15:19.110 "is_configured": true, 00:15:19.110 "data_offset": 2048, 00:15:19.110 "data_size": 63488 00:15:19.110 } 00:15:19.110 ] 00:15:19.110 } 00:15:19.110 } 00:15:19.110 }' 00:15:19.110 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.110 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:19.110 BaseBdev2 00:15:19.110 BaseBdev3 00:15:19.110 BaseBdev4' 00:15:19.110 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.110 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:19.110 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:19.368 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:19.368 "name": "BaseBdev1", 00:15:19.368 "aliases": [ 00:15:19.368 "eb13086e-405f-11ef-b2a4-e9dca065e82e" 00:15:19.368 ], 00:15:19.368 "product_name": "Malloc disk", 00:15:19.368 "block_size": 512, 00:15:19.368 "num_blocks": 65536, 00:15:19.368 "uuid": "eb13086e-405f-11ef-b2a4-e9dca065e82e", 00:15:19.368 "assigned_rate_limits": { 00:15:19.368 "rw_ios_per_sec": 0, 00:15:19.368 "rw_mbytes_per_sec": 0, 00:15:19.368 "r_mbytes_per_sec": 0, 00:15:19.368 "w_mbytes_per_sec": 0 00:15:19.368 }, 00:15:19.368 "claimed": true, 00:15:19.368 "claim_type": "exclusive_write", 00:15:19.368 "zoned": false, 00:15:19.368 "supported_io_types": { 00:15:19.368 "read": true, 00:15:19.368 "write": true, 00:15:19.368 "unmap": true, 00:15:19.368 "flush": true, 00:15:19.368 "reset": true, 00:15:19.368 "nvme_admin": false, 00:15:19.368 "nvme_io": false, 00:15:19.368 "nvme_io_md": false, 00:15:19.368 "write_zeroes": true, 00:15:19.368 "zcopy": true, 00:15:19.368 "get_zone_info": false, 00:15:19.368 "zone_management": false, 00:15:19.368 "zone_append": false, 00:15:19.368 "compare": false, 00:15:19.368 "compare_and_write": false, 00:15:19.368 "abort": true, 00:15:19.368 "seek_hole": false, 00:15:19.368 "seek_data": false, 00:15:19.368 "copy": true, 00:15:19.368 "nvme_iov_md": false 00:15:19.368 }, 00:15:19.368 "memory_domains": [ 00:15:19.368 { 00:15:19.368 "dma_device_id": "system", 00:15:19.368 "dma_device_type": 1 00:15:19.368 }, 00:15:19.368 { 00:15:19.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.368 "dma_device_type": 2 00:15:19.368 } 00:15:19.368 ], 00:15:19.368 "driver_specific": {} 00:15:19.368 }' 00:15:19.368 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.368 15:03:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.368 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:19.368 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.368 15:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:19.368 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:19.626 "name": "BaseBdev2", 00:15:19.626 "aliases": [ 00:15:19.626 "ec8b077c-405f-11ef-b2a4-e9dca065e82e" 00:15:19.626 ], 00:15:19.626 "product_name": "Malloc disk", 00:15:19.626 "block_size": 512, 00:15:19.626 "num_blocks": 65536, 00:15:19.626 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:19.626 "assigned_rate_limits": { 00:15:19.626 "rw_ios_per_sec": 0, 00:15:19.626 "rw_mbytes_per_sec": 0, 00:15:19.626 "r_mbytes_per_sec": 0, 00:15:19.626 "w_mbytes_per_sec": 0 00:15:19.626 }, 00:15:19.626 "claimed": true, 00:15:19.626 "claim_type": "exclusive_write", 00:15:19.626 "zoned": false, 00:15:19.626 "supported_io_types": { 00:15:19.626 "read": true, 00:15:19.626 "write": true, 00:15:19.626 "unmap": true, 00:15:19.626 "flush": true, 00:15:19.626 "reset": true, 00:15:19.626 "nvme_admin": false, 00:15:19.626 "nvme_io": false, 00:15:19.626 "nvme_io_md": false, 00:15:19.626 "write_zeroes": true, 00:15:19.626 "zcopy": true, 00:15:19.626 "get_zone_info": false, 00:15:19.626 "zone_management": false, 00:15:19.626 "zone_append": false, 00:15:19.626 "compare": false, 00:15:19.626 "compare_and_write": false, 00:15:19.626 "abort": true, 00:15:19.626 "seek_hole": false, 00:15:19.626 "seek_data": false, 00:15:19.626 "copy": true, 00:15:19.626 "nvme_iov_md": false 00:15:19.626 }, 00:15:19.626 "memory_domains": [ 00:15:19.626 { 00:15:19.626 "dma_device_id": "system", 00:15:19.626 "dma_device_type": 1 00:15:19.626 }, 00:15:19.626 { 00:15:19.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.626 "dma_device_type": 2 00:15:19.626 } 00:15:19.626 ], 00:15:19.626 "driver_specific": {} 00:15:19.626 }' 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:19.626 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:19.884 "name": "BaseBdev3", 00:15:19.884 "aliases": [ 00:15:19.884 "ed4e5d75-405f-11ef-b2a4-e9dca065e82e" 00:15:19.884 ], 00:15:19.884 "product_name": "Malloc disk", 00:15:19.884 "block_size": 512, 00:15:19.884 "num_blocks": 65536, 00:15:19.884 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:19.884 "assigned_rate_limits": { 00:15:19.884 "rw_ios_per_sec": 0, 00:15:19.884 "rw_mbytes_per_sec": 0, 00:15:19.884 "r_mbytes_per_sec": 0, 00:15:19.884 "w_mbytes_per_sec": 0 00:15:19.884 }, 00:15:19.884 "claimed": true, 00:15:19.884 "claim_type": "exclusive_write", 00:15:19.884 "zoned": false, 00:15:19.884 "supported_io_types": { 00:15:19.884 "read": true, 00:15:19.884 "write": true, 00:15:19.884 "unmap": true, 00:15:19.884 "flush": true, 00:15:19.884 "reset": true, 00:15:19.884 "nvme_admin": false, 00:15:19.884 "nvme_io": false, 00:15:19.884 "nvme_io_md": false, 00:15:19.884 "write_zeroes": true, 00:15:19.884 "zcopy": true, 00:15:19.884 "get_zone_info": false, 00:15:19.884 "zone_management": false, 00:15:19.884 "zone_append": false, 00:15:19.884 "compare": false, 00:15:19.884 "compare_and_write": false, 00:15:19.884 "abort": true, 00:15:19.884 "seek_hole": false, 00:15:19.884 "seek_data": false, 00:15:19.884 "copy": true, 00:15:19.884 "nvme_iov_md": false 00:15:19.884 }, 00:15:19.884 "memory_domains": [ 00:15:19.884 { 00:15:19.884 "dma_device_id": "system", 00:15:19.884 "dma_device_type": 1 00:15:19.884 }, 00:15:19.884 { 00:15:19.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.884 "dma_device_type": 2 00:15:19.884 } 00:15:19.884 ], 00:15:19.884 "driver_specific": {} 00:15:19.884 }' 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:19.884 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:20.451 "name": "BaseBdev4", 00:15:20.451 "aliases": [ 00:15:20.451 "ee11b053-405f-11ef-b2a4-e9dca065e82e" 00:15:20.451 ], 00:15:20.451 "product_name": "Malloc disk", 00:15:20.451 "block_size": 512, 00:15:20.451 "num_blocks": 65536, 00:15:20.451 "uuid": "ee11b053-405f-11ef-b2a4-e9dca065e82e", 00:15:20.451 "assigned_rate_limits": { 00:15:20.451 "rw_ios_per_sec": 0, 00:15:20.451 "rw_mbytes_per_sec": 0, 00:15:20.451 "r_mbytes_per_sec": 0, 00:15:20.451 "w_mbytes_per_sec": 0 00:15:20.451 }, 00:15:20.451 "claimed": true, 00:15:20.451 "claim_type": "exclusive_write", 00:15:20.451 "zoned": false, 00:15:20.451 "supported_io_types": { 00:15:20.451 "read": true, 00:15:20.451 "write": true, 00:15:20.451 "unmap": true, 00:15:20.451 "flush": true, 00:15:20.451 "reset": true, 00:15:20.451 "nvme_admin": false, 00:15:20.451 "nvme_io": false, 00:15:20.451 "nvme_io_md": false, 00:15:20.451 "write_zeroes": true, 00:15:20.451 "zcopy": true, 00:15:20.451 "get_zone_info": false, 00:15:20.451 "zone_management": false, 00:15:20.451 "zone_append": false, 00:15:20.451 "compare": false, 00:15:20.451 "compare_and_write": false, 00:15:20.451 "abort": true, 00:15:20.451 "seek_hole": false, 00:15:20.451 "seek_data": false, 00:15:20.451 "copy": true, 00:15:20.451 "nvme_iov_md": false 00:15:20.451 }, 00:15:20.451 "memory_domains": [ 00:15:20.451 { 00:15:20.451 "dma_device_id": "system", 00:15:20.451 "dma_device_type": 1 00:15:20.451 }, 00:15:20.451 { 00:15:20.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.451 "dma_device_type": 2 00:15:20.451 } 00:15:20.451 ], 00:15:20.451 "driver_specific": {} 00:15:20.451 }' 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.451 15:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.451 15:03:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:20.451 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:20.451 [2024-07-12 15:03:46.266351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.451 [2024-07-12 15:03:46.266384] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.451 [2024-07-12 15:03:46.266400] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.709 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.968 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.968 "name": "Existed_Raid", 00:15:20.968 "uuid": "ec0d4a65-405f-11ef-b2a4-e9dca065e82e", 00:15:20.968 "strip_size_kb": 64, 
00:15:20.968 "state": "offline", 00:15:20.968 "raid_level": "concat", 00:15:20.968 "superblock": true, 00:15:20.968 "num_base_bdevs": 4, 00:15:20.968 "num_base_bdevs_discovered": 3, 00:15:20.968 "num_base_bdevs_operational": 3, 00:15:20.968 "base_bdevs_list": [ 00:15:20.968 { 00:15:20.968 "name": null, 00:15:20.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.968 "is_configured": false, 00:15:20.968 "data_offset": 2048, 00:15:20.968 "data_size": 63488 00:15:20.968 }, 00:15:20.968 { 00:15:20.968 "name": "BaseBdev2", 00:15:20.968 "uuid": "ec8b077c-405f-11ef-b2a4-e9dca065e82e", 00:15:20.968 "is_configured": true, 00:15:20.968 "data_offset": 2048, 00:15:20.968 "data_size": 63488 00:15:20.968 }, 00:15:20.968 { 00:15:20.968 "name": "BaseBdev3", 00:15:20.968 "uuid": "ed4e5d75-405f-11ef-b2a4-e9dca065e82e", 00:15:20.968 "is_configured": true, 00:15:20.968 "data_offset": 2048, 00:15:20.968 "data_size": 63488 00:15:20.968 }, 00:15:20.968 { 00:15:20.968 "name": "BaseBdev4", 00:15:20.968 "uuid": "ee11b053-405f-11ef-b2a4-e9dca065e82e", 00:15:20.968 "is_configured": true, 00:15:20.968 "data_offset": 2048, 00:15:20.968 "data_size": 63488 00:15:20.968 } 00:15:20.968 ] 00:15:20.968 }' 00:15:20.968 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.968 15:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.226 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:21.226 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:21.226 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.226 15:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:21.485 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:21.485 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.485 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:21.744 [2024-07-12 15:03:47.404491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.744 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:21.744 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:21.744 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.744 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:22.002 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:22.002 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.002 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:22.261 [2024-07-12 15:03:47.941179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.261 15:03:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:22.261 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:22.261 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:22.261 15:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.520 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:22.520 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.520 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:22.778 [2024-07-12 15:03:48.466727] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:22.778 [2024-07-12 15:03:48.466800] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b563634a00 name Existed_Raid, state offline 00:15:22.778 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:22.778 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:22.778 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.778 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:23.037 15:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.296 BaseBdev2 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:23.296 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.555 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.814 [ 
00:15:23.814 { 00:15:23.814 "name": "BaseBdev2", 00:15:23.814 "aliases": [ 00:15:23.814 "f1896198-405f-11ef-b2a4-e9dca065e82e" 00:15:23.814 ], 00:15:23.814 "product_name": "Malloc disk", 00:15:23.814 "block_size": 512, 00:15:23.814 "num_blocks": 65536, 00:15:23.814 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:23.814 "assigned_rate_limits": { 00:15:23.814 "rw_ios_per_sec": 0, 00:15:23.814 "rw_mbytes_per_sec": 0, 00:15:23.814 "r_mbytes_per_sec": 0, 00:15:23.814 "w_mbytes_per_sec": 0 00:15:23.814 }, 00:15:23.814 "claimed": false, 00:15:23.814 "zoned": false, 00:15:23.814 "supported_io_types": { 00:15:23.814 "read": true, 00:15:23.814 "write": true, 00:15:23.814 "unmap": true, 00:15:23.814 "flush": true, 00:15:23.814 "reset": true, 00:15:23.814 "nvme_admin": false, 00:15:23.814 "nvme_io": false, 00:15:23.814 "nvme_io_md": false, 00:15:23.814 "write_zeroes": true, 00:15:23.814 "zcopy": true, 00:15:23.814 "get_zone_info": false, 00:15:23.814 "zone_management": false, 00:15:23.814 "zone_append": false, 00:15:23.814 "compare": false, 00:15:23.814 "compare_and_write": false, 00:15:23.814 "abort": true, 00:15:23.814 "seek_hole": false, 00:15:23.814 "seek_data": false, 00:15:23.814 "copy": true, 00:15:23.814 "nvme_iov_md": false 00:15:23.814 }, 00:15:23.814 "memory_domains": [ 00:15:23.814 { 00:15:23.814 "dma_device_id": "system", 00:15:23.814 "dma_device_type": 1 00:15:23.814 }, 00:15:23.814 { 00:15:23.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.814 "dma_device_type": 2 00:15:23.814 } 00:15:23.814 ], 00:15:23.814 "driver_specific": {} 00:15:23.814 } 00:15:23.814 ] 00:15:23.814 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:23.814 15:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:23.814 15:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:23.814 15:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.073 BaseBdev3 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:24.073 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.333 15:03:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.592 [ 00:15:24.592 { 00:15:24.592 "name": "BaseBdev3", 00:15:24.592 "aliases": [ 00:15:24.592 "f1f73fae-405f-11ef-b2a4-e9dca065e82e" 00:15:24.592 ], 00:15:24.592 "product_name": "Malloc disk", 00:15:24.592 "block_size": 512, 00:15:24.592 "num_blocks": 65536, 00:15:24.592 "uuid": 
"f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:24.592 "assigned_rate_limits": { 00:15:24.592 "rw_ios_per_sec": 0, 00:15:24.592 "rw_mbytes_per_sec": 0, 00:15:24.592 "r_mbytes_per_sec": 0, 00:15:24.592 "w_mbytes_per_sec": 0 00:15:24.592 }, 00:15:24.592 "claimed": false, 00:15:24.592 "zoned": false, 00:15:24.592 "supported_io_types": { 00:15:24.592 "read": true, 00:15:24.592 "write": true, 00:15:24.592 "unmap": true, 00:15:24.592 "flush": true, 00:15:24.592 "reset": true, 00:15:24.592 "nvme_admin": false, 00:15:24.592 "nvme_io": false, 00:15:24.592 "nvme_io_md": false, 00:15:24.592 "write_zeroes": true, 00:15:24.592 "zcopy": true, 00:15:24.592 "get_zone_info": false, 00:15:24.592 "zone_management": false, 00:15:24.592 "zone_append": false, 00:15:24.592 "compare": false, 00:15:24.592 "compare_and_write": false, 00:15:24.592 "abort": true, 00:15:24.592 "seek_hole": false, 00:15:24.592 "seek_data": false, 00:15:24.592 "copy": true, 00:15:24.592 "nvme_iov_md": false 00:15:24.592 }, 00:15:24.592 "memory_domains": [ 00:15:24.592 { 00:15:24.592 "dma_device_id": "system", 00:15:24.592 "dma_device_type": 1 00:15:24.592 }, 00:15:24.592 { 00:15:24.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.592 "dma_device_type": 2 00:15:24.592 } 00:15:24.592 ], 00:15:24.592 "driver_specific": {} 00:15:24.592 } 00:15:24.592 ] 00:15:24.592 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:24.592 15:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:24.592 15:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:24.592 15:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:24.851 BaseBdev4 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:24.851 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:24.852 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.111 15:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:25.370 [ 00:15:25.370 { 00:15:25.370 "name": "BaseBdev4", 00:15:25.370 "aliases": [ 00:15:25.370 "f26d0dbc-405f-11ef-b2a4-e9dca065e82e" 00:15:25.370 ], 00:15:25.370 "product_name": "Malloc disk", 00:15:25.370 "block_size": 512, 00:15:25.370 "num_blocks": 65536, 00:15:25.370 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:25.370 "assigned_rate_limits": { 00:15:25.370 "rw_ios_per_sec": 0, 00:15:25.370 "rw_mbytes_per_sec": 0, 00:15:25.370 "r_mbytes_per_sec": 0, 00:15:25.370 "w_mbytes_per_sec": 0 00:15:25.370 }, 00:15:25.370 "claimed": false, 00:15:25.370 "zoned": false, 00:15:25.370 
"supported_io_types": { 00:15:25.370 "read": true, 00:15:25.370 "write": true, 00:15:25.370 "unmap": true, 00:15:25.370 "flush": true, 00:15:25.370 "reset": true, 00:15:25.370 "nvme_admin": false, 00:15:25.370 "nvme_io": false, 00:15:25.370 "nvme_io_md": false, 00:15:25.370 "write_zeroes": true, 00:15:25.370 "zcopy": true, 00:15:25.370 "get_zone_info": false, 00:15:25.370 "zone_management": false, 00:15:25.370 "zone_append": false, 00:15:25.370 "compare": false, 00:15:25.370 "compare_and_write": false, 00:15:25.370 "abort": true, 00:15:25.370 "seek_hole": false, 00:15:25.370 "seek_data": false, 00:15:25.370 "copy": true, 00:15:25.370 "nvme_iov_md": false 00:15:25.370 }, 00:15:25.370 "memory_domains": [ 00:15:25.370 { 00:15:25.370 "dma_device_id": "system", 00:15:25.370 "dma_device_type": 1 00:15:25.370 }, 00:15:25.370 { 00:15:25.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.370 "dma_device_type": 2 00:15:25.370 } 00:15:25.370 ], 00:15:25.370 "driver_specific": {} 00:15:25.370 } 00:15:25.370 ] 00:15:25.370 15:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:25.370 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:25.370 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:25.370 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:25.629 [2024-07-12 15:03:51.283957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.629 [2024-07-12 15:03:51.284040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.629 [2024-07-12 15:03:51.284063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.629 [2024-07-12 15:03:51.284830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.629 [2024-07-12 15:03:51.284848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:25.629 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.629 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.629 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.629 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:25.629 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:25.630 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.889 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.889 "name": "Existed_Raid", 00:15:25.889 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:25.889 "strip_size_kb": 64, 00:15:25.889 "state": "configuring", 00:15:25.889 "raid_level": "concat", 00:15:25.889 "superblock": true, 00:15:25.889 "num_base_bdevs": 4, 00:15:25.889 "num_base_bdevs_discovered": 3, 00:15:25.889 "num_base_bdevs_operational": 4, 00:15:25.889 "base_bdevs_list": [ 00:15:25.889 { 00:15:25.889 "name": "BaseBdev1", 00:15:25.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.889 "is_configured": false, 00:15:25.889 "data_offset": 0, 00:15:25.889 "data_size": 0 00:15:25.889 }, 00:15:25.889 { 00:15:25.889 "name": "BaseBdev2", 00:15:25.889 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:25.889 "is_configured": true, 00:15:25.889 "data_offset": 2048, 00:15:25.889 "data_size": 63488 00:15:25.889 }, 00:15:25.889 { 00:15:25.889 "name": "BaseBdev3", 00:15:25.889 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:25.889 "is_configured": true, 00:15:25.889 "data_offset": 2048, 00:15:25.889 "data_size": 63488 00:15:25.889 }, 00:15:25.889 { 00:15:25.889 "name": "BaseBdev4", 00:15:25.889 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:25.889 "is_configured": true, 00:15:25.889 "data_offset": 2048, 00:15:25.889 "data_size": 63488 00:15:25.889 } 00:15:25.889 ] 00:15:25.889 }' 00:15:25.889 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.889 15:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.148 15:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:26.407 [2024-07-12 15:03:52.152038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.407 15:03:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.665 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.665 "name": "Existed_Raid", 00:15:26.665 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:26.665 "strip_size_kb": 64, 00:15:26.665 "state": "configuring", 00:15:26.665 "raid_level": "concat", 00:15:26.665 "superblock": true, 00:15:26.665 "num_base_bdevs": 4, 00:15:26.665 "num_base_bdevs_discovered": 2, 00:15:26.665 "num_base_bdevs_operational": 4, 00:15:26.665 "base_bdevs_list": [ 00:15:26.665 { 00:15:26.665 "name": "BaseBdev1", 00:15:26.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.665 "is_configured": false, 00:15:26.665 "data_offset": 0, 00:15:26.665 "data_size": 0 00:15:26.665 }, 00:15:26.665 { 00:15:26.665 "name": null, 00:15:26.665 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:26.665 "is_configured": false, 00:15:26.665 "data_offset": 2048, 00:15:26.665 "data_size": 63488 00:15:26.665 }, 00:15:26.665 { 00:15:26.665 "name": "BaseBdev3", 00:15:26.665 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:26.665 "is_configured": true, 00:15:26.665 "data_offset": 2048, 00:15:26.665 "data_size": 63488 00:15:26.665 }, 00:15:26.665 { 00:15:26.665 "name": "BaseBdev4", 00:15:26.665 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:26.665 "is_configured": true, 00:15:26.665 "data_offset": 2048, 00:15:26.665 "data_size": 63488 00:15:26.665 } 00:15:26.665 ] 00:15:26.665 }' 00:15:26.665 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.665 15:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.244 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.244 15:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.505 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:27.505 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.764 [2024-07-12 15:03:53.436430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.764 BaseBdev1 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:27.764 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.022 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.280 [ 00:15:28.280 { 00:15:28.281 "name": "BaseBdev1", 00:15:28.281 "aliases": [ 00:15:28.281 "f42be4cf-405f-11ef-b2a4-e9dca065e82e" 00:15:28.281 ], 00:15:28.281 "product_name": "Malloc disk", 00:15:28.281 "block_size": 512, 00:15:28.281 "num_blocks": 65536, 00:15:28.281 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:28.281 "assigned_rate_limits": { 00:15:28.281 "rw_ios_per_sec": 0, 00:15:28.281 "rw_mbytes_per_sec": 0, 00:15:28.281 "r_mbytes_per_sec": 0, 00:15:28.281 "w_mbytes_per_sec": 0 00:15:28.281 }, 00:15:28.281 "claimed": true, 00:15:28.281 "claim_type": "exclusive_write", 00:15:28.281 "zoned": false, 00:15:28.281 "supported_io_types": { 00:15:28.281 "read": true, 00:15:28.281 "write": true, 00:15:28.281 "unmap": true, 00:15:28.281 "flush": true, 00:15:28.281 "reset": true, 00:15:28.281 "nvme_admin": false, 00:15:28.281 "nvme_io": false, 00:15:28.281 "nvme_io_md": false, 00:15:28.281 "write_zeroes": true, 00:15:28.281 "zcopy": true, 00:15:28.281 "get_zone_info": false, 00:15:28.281 "zone_management": false, 00:15:28.281 "zone_append": false, 00:15:28.281 "compare": false, 00:15:28.281 "compare_and_write": false, 00:15:28.281 "abort": true, 00:15:28.281 "seek_hole": false, 00:15:28.281 "seek_data": false, 00:15:28.281 "copy": true, 00:15:28.281 "nvme_iov_md": false 00:15:28.281 }, 00:15:28.281 "memory_domains": [ 00:15:28.281 { 00:15:28.281 "dma_device_id": "system", 00:15:28.281 "dma_device_type": 1 00:15:28.281 }, 00:15:28.281 { 00:15:28.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.281 "dma_device_type": 2 00:15:28.281 } 00:15:28.281 ], 00:15:28.281 "driver_specific": {} 00:15:28.281 } 00:15:28.281 ] 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.281 15:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.540 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.540 "name": "Existed_Raid", 00:15:28.540 "uuid": 
"f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:28.540 "strip_size_kb": 64, 00:15:28.540 "state": "configuring", 00:15:28.540 "raid_level": "concat", 00:15:28.540 "superblock": true, 00:15:28.540 "num_base_bdevs": 4, 00:15:28.540 "num_base_bdevs_discovered": 3, 00:15:28.540 "num_base_bdevs_operational": 4, 00:15:28.540 "base_bdevs_list": [ 00:15:28.540 { 00:15:28.540 "name": "BaseBdev1", 00:15:28.540 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:28.540 "is_configured": true, 00:15:28.540 "data_offset": 2048, 00:15:28.540 "data_size": 63488 00:15:28.540 }, 00:15:28.540 { 00:15:28.540 "name": null, 00:15:28.540 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:28.540 "is_configured": false, 00:15:28.540 "data_offset": 2048, 00:15:28.540 "data_size": 63488 00:15:28.540 }, 00:15:28.540 { 00:15:28.540 "name": "BaseBdev3", 00:15:28.540 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:28.540 "is_configured": true, 00:15:28.540 "data_offset": 2048, 00:15:28.540 "data_size": 63488 00:15:28.540 }, 00:15:28.540 { 00:15:28.540 "name": "BaseBdev4", 00:15:28.540 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:28.540 "is_configured": true, 00:15:28.540 "data_offset": 2048, 00:15:28.540 "data_size": 63488 00:15:28.540 } 00:15:28.540 ] 00:15:28.540 }' 00:15:28.540 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.540 15:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.799 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.799 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.058 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:29.058 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:29.316 [2024-07-12 15:03:54.968527] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.316 15:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.574 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.574 "name": "Existed_Raid", 00:15:29.574 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:29.574 "strip_size_kb": 64, 00:15:29.574 "state": "configuring", 00:15:29.574 "raid_level": "concat", 00:15:29.574 "superblock": true, 00:15:29.574 "num_base_bdevs": 4, 00:15:29.574 "num_base_bdevs_discovered": 2, 00:15:29.574 "num_base_bdevs_operational": 4, 00:15:29.574 "base_bdevs_list": [ 00:15:29.574 { 00:15:29.574 "name": "BaseBdev1", 00:15:29.574 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:29.574 "is_configured": true, 00:15:29.574 "data_offset": 2048, 00:15:29.574 "data_size": 63488 00:15:29.574 }, 00:15:29.574 { 00:15:29.574 "name": null, 00:15:29.574 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:29.574 "is_configured": false, 00:15:29.574 "data_offset": 2048, 00:15:29.574 "data_size": 63488 00:15:29.574 }, 00:15:29.574 { 00:15:29.574 "name": null, 00:15:29.574 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:29.574 "is_configured": false, 00:15:29.574 "data_offset": 2048, 00:15:29.574 "data_size": 63488 00:15:29.574 }, 00:15:29.574 { 00:15:29.574 "name": "BaseBdev4", 00:15:29.574 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:29.574 "is_configured": true, 00:15:29.574 "data_offset": 2048, 00:15:29.574 "data_size": 63488 00:15:29.574 } 00:15:29.574 ] 00:15:29.574 }' 00:15:29.574 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.574 15:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.833 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.833 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.092 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:30.092 15:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.350 [2024-07-12 15:03:56.100668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.351 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.609 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.609 "name": "Existed_Raid", 00:15:30.609 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:30.609 "strip_size_kb": 64, 00:15:30.609 "state": "configuring", 00:15:30.609 "raid_level": "concat", 00:15:30.609 "superblock": true, 00:15:30.609 "num_base_bdevs": 4, 00:15:30.609 "num_base_bdevs_discovered": 3, 00:15:30.609 "num_base_bdevs_operational": 4, 00:15:30.609 "base_bdevs_list": [ 00:15:30.609 { 00:15:30.609 "name": "BaseBdev1", 00:15:30.610 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:30.610 "is_configured": true, 00:15:30.610 "data_offset": 2048, 00:15:30.610 "data_size": 63488 00:15:30.610 }, 00:15:30.610 { 00:15:30.610 "name": null, 00:15:30.610 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:30.610 "is_configured": false, 00:15:30.610 "data_offset": 2048, 00:15:30.610 "data_size": 63488 00:15:30.610 }, 00:15:30.610 { 00:15:30.610 "name": "BaseBdev3", 00:15:30.610 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:30.610 "is_configured": true, 00:15:30.610 "data_offset": 2048, 00:15:30.610 "data_size": 63488 00:15:30.610 }, 00:15:30.610 { 00:15:30.610 "name": "BaseBdev4", 00:15:30.610 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:30.610 "is_configured": true, 00:15:30.610 "data_offset": 2048, 00:15:30.610 "data_size": 63488 00:15:30.610 } 00:15:30.610 ] 00:15:30.610 }' 00:15:30.610 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.610 15:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.177 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.177 15:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.435 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:31.435 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:31.694 [2024-07-12 15:03:57.300824] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.694 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.953 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.953 "name": "Existed_Raid", 00:15:31.953 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:31.953 "strip_size_kb": 64, 00:15:31.953 "state": "configuring", 00:15:31.953 "raid_level": "concat", 00:15:31.953 "superblock": true, 00:15:31.953 "num_base_bdevs": 4, 00:15:31.953 "num_base_bdevs_discovered": 2, 00:15:31.953 "num_base_bdevs_operational": 4, 00:15:31.953 "base_bdevs_list": [ 00:15:31.953 { 00:15:31.953 "name": null, 00:15:31.953 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:31.953 "is_configured": false, 00:15:31.953 "data_offset": 2048, 00:15:31.953 "data_size": 63488 00:15:31.953 }, 00:15:31.953 { 00:15:31.953 "name": null, 00:15:31.953 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:31.953 "is_configured": false, 00:15:31.953 "data_offset": 2048, 00:15:31.953 "data_size": 63488 00:15:31.953 }, 00:15:31.953 { 00:15:31.953 "name": "BaseBdev3", 00:15:31.953 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:31.953 "is_configured": true, 00:15:31.953 "data_offset": 2048, 00:15:31.953 "data_size": 63488 00:15:31.953 }, 00:15:31.953 { 00:15:31.953 "name": "BaseBdev4", 00:15:31.953 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:31.953 "is_configured": true, 00:15:31.953 "data_offset": 2048, 00:15:31.953 "data_size": 63488 00:15:31.953 } 00:15:31.953 ] 00:15:31.953 }' 00:15:31.953 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.953 15:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.212 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.212 15:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.481 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:32.481 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.752 [2024-07-12 15:03:58.434753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.753 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.011 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.011 "name": "Existed_Raid", 00:15:33.011 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:33.011 "strip_size_kb": 64, 00:15:33.011 "state": "configuring", 00:15:33.011 "raid_level": "concat", 00:15:33.011 "superblock": true, 00:15:33.011 "num_base_bdevs": 4, 00:15:33.011 "num_base_bdevs_discovered": 3, 00:15:33.011 "num_base_bdevs_operational": 4, 00:15:33.011 "base_bdevs_list": [ 00:15:33.011 { 00:15:33.011 "name": null, 00:15:33.011 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:33.011 "is_configured": false, 00:15:33.011 "data_offset": 2048, 00:15:33.011 "data_size": 63488 00:15:33.011 }, 00:15:33.011 { 00:15:33.011 "name": "BaseBdev2", 00:15:33.011 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:33.011 "is_configured": true, 00:15:33.011 "data_offset": 2048, 00:15:33.011 "data_size": 63488 00:15:33.011 }, 00:15:33.011 { 00:15:33.011 "name": "BaseBdev3", 00:15:33.011 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:33.011 "is_configured": true, 00:15:33.011 "data_offset": 2048, 00:15:33.011 "data_size": 63488 00:15:33.011 }, 00:15:33.011 { 00:15:33.011 "name": "BaseBdev4", 00:15:33.011 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:33.011 "is_configured": true, 00:15:33.011 "data_offset": 2048, 00:15:33.011 "data_size": 63488 00:15:33.011 } 00:15:33.011 ] 00:15:33.011 }' 00:15:33.011 15:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.011 15:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.270 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.270 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.529 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:33.529 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.529 
15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:34.097 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f42be4cf-405f-11ef-b2a4-e9dca065e82e 00:15:34.097 [2024-07-12 15:03:59.878949] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:34.097 [2024-07-12 15:03:59.879039] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b563634f00 00:15:34.097 [2024-07-12 15:03:59.879045] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:34.098 [2024-07-12 15:03:59.879066] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b563697e20 00:15:34.098 [2024-07-12 15:03:59.879115] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b563634f00 00:15:34.098 [2024-07-12 15:03:59.879120] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10b563634f00 00:15:34.098 [2024-07-12 15:03:59.879140] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.098 NewBaseBdev 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:34.098 15:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.357 15:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:34.924 [ 00:15:34.924 { 00:15:34.924 "name": "NewBaseBdev", 00:15:34.924 "aliases": [ 00:15:34.924 "f42be4cf-405f-11ef-b2a4-e9dca065e82e" 00:15:34.924 ], 00:15:34.924 "product_name": "Malloc disk", 00:15:34.924 "block_size": 512, 00:15:34.924 "num_blocks": 65536, 00:15:34.924 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "assigned_rate_limits": { 00:15:34.924 "rw_ios_per_sec": 0, 00:15:34.924 "rw_mbytes_per_sec": 0, 00:15:34.924 "r_mbytes_per_sec": 0, 00:15:34.924 "w_mbytes_per_sec": 0 00:15:34.924 }, 00:15:34.924 "claimed": true, 00:15:34.924 "claim_type": "exclusive_write", 00:15:34.924 "zoned": false, 00:15:34.924 "supported_io_types": { 00:15:34.924 "read": true, 00:15:34.924 "write": true, 00:15:34.924 "unmap": true, 00:15:34.924 "flush": true, 00:15:34.924 "reset": true, 00:15:34.924 "nvme_admin": false, 00:15:34.924 "nvme_io": false, 00:15:34.924 "nvme_io_md": false, 00:15:34.924 "write_zeroes": true, 00:15:34.924 "zcopy": true, 00:15:34.924 "get_zone_info": false, 00:15:34.924 "zone_management": false, 00:15:34.924 "zone_append": false, 00:15:34.924 "compare": false, 00:15:34.924 "compare_and_write": false, 00:15:34.924 "abort": 
true, 00:15:34.924 "seek_hole": false, 00:15:34.924 "seek_data": false, 00:15:34.924 "copy": true, 00:15:34.924 "nvme_iov_md": false 00:15:34.924 }, 00:15:34.924 "memory_domains": [ 00:15:34.924 { 00:15:34.924 "dma_device_id": "system", 00:15:34.924 "dma_device_type": 1 00:15:34.924 }, 00:15:34.924 { 00:15:34.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.924 "dma_device_type": 2 00:15:34.924 } 00:15:34.924 ], 00:15:34.924 "driver_specific": {} 00:15:34.924 } 00:15:34.924 ] 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.924 "name": "Existed_Raid", 00:15:34.924 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "strip_size_kb": 64, 00:15:34.924 "state": "online", 00:15:34.924 "raid_level": "concat", 00:15:34.924 "superblock": true, 00:15:34.924 "num_base_bdevs": 4, 00:15:34.924 "num_base_bdevs_discovered": 4, 00:15:34.924 "num_base_bdevs_operational": 4, 00:15:34.924 "base_bdevs_list": [ 00:15:34.924 { 00:15:34.924 "name": "NewBaseBdev", 00:15:34.924 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "is_configured": true, 00:15:34.924 "data_offset": 2048, 00:15:34.924 "data_size": 63488 00:15:34.924 }, 00:15:34.924 { 00:15:34.924 "name": "BaseBdev2", 00:15:34.924 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "is_configured": true, 00:15:34.924 "data_offset": 2048, 00:15:34.924 "data_size": 63488 00:15:34.924 }, 00:15:34.924 { 00:15:34.924 "name": "BaseBdev3", 00:15:34.924 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "is_configured": true, 00:15:34.924 "data_offset": 2048, 00:15:34.924 "data_size": 63488 00:15:34.924 }, 00:15:34.924 { 00:15:34.924 "name": "BaseBdev4", 00:15:34.924 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:34.924 "is_configured": true, 00:15:34.924 "data_offset": 2048, 00:15:34.924 "data_size": 63488 00:15:34.924 } 00:15:34.924 ] 00:15:34.924 }' 00:15:34.924 
15:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.924 15:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:35.491 [2024-07-12 15:04:01.294927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:35.491 "name": "Existed_Raid", 00:15:35.491 "aliases": [ 00:15:35.491 "f2e37925-405f-11ef-b2a4-e9dca065e82e" 00:15:35.491 ], 00:15:35.491 "product_name": "Raid Volume", 00:15:35.491 "block_size": 512, 00:15:35.491 "num_blocks": 253952, 00:15:35.491 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "assigned_rate_limits": { 00:15:35.491 "rw_ios_per_sec": 0, 00:15:35.491 "rw_mbytes_per_sec": 0, 00:15:35.491 "r_mbytes_per_sec": 0, 00:15:35.491 "w_mbytes_per_sec": 0 00:15:35.491 }, 00:15:35.491 "claimed": false, 00:15:35.491 "zoned": false, 00:15:35.491 "supported_io_types": { 00:15:35.491 "read": true, 00:15:35.491 "write": true, 00:15:35.491 "unmap": true, 00:15:35.491 "flush": true, 00:15:35.491 "reset": true, 00:15:35.491 "nvme_admin": false, 00:15:35.491 "nvme_io": false, 00:15:35.491 "nvme_io_md": false, 00:15:35.491 "write_zeroes": true, 00:15:35.491 "zcopy": false, 00:15:35.491 "get_zone_info": false, 00:15:35.491 "zone_management": false, 00:15:35.491 "zone_append": false, 00:15:35.491 "compare": false, 00:15:35.491 "compare_and_write": false, 00:15:35.491 "abort": false, 00:15:35.491 "seek_hole": false, 00:15:35.491 "seek_data": false, 00:15:35.491 "copy": false, 00:15:35.491 "nvme_iov_md": false 00:15:35.491 }, 00:15:35.491 "memory_domains": [ 00:15:35.491 { 00:15:35.491 "dma_device_id": "system", 00:15:35.491 "dma_device_type": 1 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.491 "dma_device_type": 2 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "system", 00:15:35.491 "dma_device_type": 1 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.491 "dma_device_type": 2 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "system", 00:15:35.491 "dma_device_type": 1 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.491 "dma_device_type": 2 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "system", 00:15:35.491 "dma_device_type": 1 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:35.491 "dma_device_type": 2 00:15:35.491 } 00:15:35.491 ], 00:15:35.491 "driver_specific": { 00:15:35.491 "raid": { 00:15:35.491 "uuid": "f2e37925-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "strip_size_kb": 64, 00:15:35.491 "state": "online", 00:15:35.491 "raid_level": "concat", 00:15:35.491 "superblock": true, 00:15:35.491 "num_base_bdevs": 4, 00:15:35.491 "num_base_bdevs_discovered": 4, 00:15:35.491 "num_base_bdevs_operational": 4, 00:15:35.491 "base_bdevs_list": [ 00:15:35.491 { 00:15:35.491 "name": "NewBaseBdev", 00:15:35.491 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "is_configured": true, 00:15:35.491 "data_offset": 2048, 00:15:35.491 "data_size": 63488 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "name": "BaseBdev2", 00:15:35.491 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "is_configured": true, 00:15:35.491 "data_offset": 2048, 00:15:35.491 "data_size": 63488 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "name": "BaseBdev3", 00:15:35.491 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "is_configured": true, 00:15:35.491 "data_offset": 2048, 00:15:35.491 "data_size": 63488 00:15:35.491 }, 00:15:35.491 { 00:15:35.491 "name": "BaseBdev4", 00:15:35.491 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:35.491 "is_configured": true, 00:15:35.491 "data_offset": 2048, 00:15:35.491 "data_size": 63488 00:15:35.491 } 00:15:35.491 ] 00:15:35.491 } 00:15:35.491 } 00:15:35.491 }' 00:15:35.491 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.748 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:35.748 BaseBdev2 00:15:35.748 BaseBdev3 00:15:35.748 BaseBdev4' 00:15:35.748 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.748 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:35.748 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.006 "name": "NewBaseBdev", 00:15:36.006 "aliases": [ 00:15:36.006 "f42be4cf-405f-11ef-b2a4-e9dca065e82e" 00:15:36.006 ], 00:15:36.006 "product_name": "Malloc disk", 00:15:36.006 "block_size": 512, 00:15:36.006 "num_blocks": 65536, 00:15:36.006 "uuid": "f42be4cf-405f-11ef-b2a4-e9dca065e82e", 00:15:36.006 "assigned_rate_limits": { 00:15:36.006 "rw_ios_per_sec": 0, 00:15:36.006 "rw_mbytes_per_sec": 0, 00:15:36.006 "r_mbytes_per_sec": 0, 00:15:36.006 "w_mbytes_per_sec": 0 00:15:36.006 }, 00:15:36.006 "claimed": true, 00:15:36.006 "claim_type": "exclusive_write", 00:15:36.006 "zoned": false, 00:15:36.006 "supported_io_types": { 00:15:36.006 "read": true, 00:15:36.006 "write": true, 00:15:36.006 "unmap": true, 00:15:36.006 "flush": true, 00:15:36.006 "reset": true, 00:15:36.006 "nvme_admin": false, 00:15:36.006 "nvme_io": false, 00:15:36.006 "nvme_io_md": false, 00:15:36.006 "write_zeroes": true, 00:15:36.006 "zcopy": true, 00:15:36.006 "get_zone_info": false, 00:15:36.006 "zone_management": false, 00:15:36.006 "zone_append": false, 00:15:36.006 "compare": false, 00:15:36.006 "compare_and_write": false, 00:15:36.006 "abort": true, 00:15:36.006 "seek_hole": false, 00:15:36.006 "seek_data": false, 
00:15:36.006 "copy": true, 00:15:36.006 "nvme_iov_md": false 00:15:36.006 }, 00:15:36.006 "memory_domains": [ 00:15:36.006 { 00:15:36.006 "dma_device_id": "system", 00:15:36.006 "dma_device_type": 1 00:15:36.006 }, 00:15:36.006 { 00:15:36.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.006 "dma_device_type": 2 00:15:36.006 } 00:15:36.006 ], 00:15:36.006 "driver_specific": {} 00:15:36.006 }' 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:36.006 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.264 "name": "BaseBdev2", 00:15:36.264 "aliases": [ 00:15:36.264 "f1896198-405f-11ef-b2a4-e9dca065e82e" 00:15:36.264 ], 00:15:36.264 "product_name": "Malloc disk", 00:15:36.264 "block_size": 512, 00:15:36.264 "num_blocks": 65536, 00:15:36.264 "uuid": "f1896198-405f-11ef-b2a4-e9dca065e82e", 00:15:36.264 "assigned_rate_limits": { 00:15:36.264 "rw_ios_per_sec": 0, 00:15:36.264 "rw_mbytes_per_sec": 0, 00:15:36.264 "r_mbytes_per_sec": 0, 00:15:36.264 "w_mbytes_per_sec": 0 00:15:36.264 }, 00:15:36.264 "claimed": true, 00:15:36.264 "claim_type": "exclusive_write", 00:15:36.264 "zoned": false, 00:15:36.264 "supported_io_types": { 00:15:36.264 "read": true, 00:15:36.264 "write": true, 00:15:36.264 "unmap": true, 00:15:36.264 "flush": true, 00:15:36.264 "reset": true, 00:15:36.264 "nvme_admin": false, 00:15:36.264 "nvme_io": false, 00:15:36.264 "nvme_io_md": false, 00:15:36.264 "write_zeroes": true, 00:15:36.264 "zcopy": true, 00:15:36.264 "get_zone_info": false, 00:15:36.264 "zone_management": false, 00:15:36.264 "zone_append": false, 00:15:36.264 "compare": false, 00:15:36.264 "compare_and_write": false, 00:15:36.264 "abort": true, 00:15:36.264 "seek_hole": false, 00:15:36.264 "seek_data": false, 00:15:36.264 "copy": true, 00:15:36.264 "nvme_iov_md": false 00:15:36.264 }, 00:15:36.264 "memory_domains": [ 00:15:36.264 { 00:15:36.264 
"dma_device_id": "system", 00:15:36.264 "dma_device_type": 1 00:15:36.264 }, 00:15:36.264 { 00:15:36.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.264 "dma_device_type": 2 00:15:36.264 } 00:15:36.264 ], 00:15:36.264 "driver_specific": {} 00:15:36.264 }' 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:36.264 15:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.523 "name": "BaseBdev3", 00:15:36.523 "aliases": [ 00:15:36.523 "f1f73fae-405f-11ef-b2a4-e9dca065e82e" 00:15:36.523 ], 00:15:36.523 "product_name": "Malloc disk", 00:15:36.523 "block_size": 512, 00:15:36.523 "num_blocks": 65536, 00:15:36.523 "uuid": "f1f73fae-405f-11ef-b2a4-e9dca065e82e", 00:15:36.523 "assigned_rate_limits": { 00:15:36.523 "rw_ios_per_sec": 0, 00:15:36.523 "rw_mbytes_per_sec": 0, 00:15:36.523 "r_mbytes_per_sec": 0, 00:15:36.523 "w_mbytes_per_sec": 0 00:15:36.523 }, 00:15:36.523 "claimed": true, 00:15:36.523 "claim_type": "exclusive_write", 00:15:36.523 "zoned": false, 00:15:36.523 "supported_io_types": { 00:15:36.523 "read": true, 00:15:36.523 "write": true, 00:15:36.523 "unmap": true, 00:15:36.523 "flush": true, 00:15:36.523 "reset": true, 00:15:36.523 "nvme_admin": false, 00:15:36.523 "nvme_io": false, 00:15:36.523 "nvme_io_md": false, 00:15:36.523 "write_zeroes": true, 00:15:36.523 "zcopy": true, 00:15:36.523 "get_zone_info": false, 00:15:36.523 "zone_management": false, 00:15:36.523 "zone_append": false, 00:15:36.523 "compare": false, 00:15:36.523 "compare_and_write": false, 00:15:36.523 "abort": true, 00:15:36.523 "seek_hole": false, 00:15:36.523 "seek_data": false, 00:15:36.523 "copy": true, 00:15:36.523 "nvme_iov_md": false 00:15:36.523 }, 00:15:36.523 "memory_domains": [ 00:15:36.523 { 00:15:36.523 "dma_device_id": "system", 00:15:36.523 "dma_device_type": 1 00:15:36.523 }, 00:15:36.523 { 00:15:36.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:36.523 "dma_device_type": 2 00:15:36.523 } 00:15:36.523 ], 00:15:36.523 "driver_specific": {} 00:15:36.523 }' 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:36.523 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.781 "name": "BaseBdev4", 00:15:36.781 "aliases": [ 00:15:36.781 "f26d0dbc-405f-11ef-b2a4-e9dca065e82e" 00:15:36.781 ], 00:15:36.781 "product_name": "Malloc disk", 00:15:36.781 "block_size": 512, 00:15:36.781 "num_blocks": 65536, 00:15:36.781 "uuid": "f26d0dbc-405f-11ef-b2a4-e9dca065e82e", 00:15:36.781 "assigned_rate_limits": { 00:15:36.781 "rw_ios_per_sec": 0, 00:15:36.781 "rw_mbytes_per_sec": 0, 00:15:36.781 "r_mbytes_per_sec": 0, 00:15:36.781 "w_mbytes_per_sec": 0 00:15:36.781 }, 00:15:36.781 "claimed": true, 00:15:36.781 "claim_type": "exclusive_write", 00:15:36.781 "zoned": false, 00:15:36.781 "supported_io_types": { 00:15:36.781 "read": true, 00:15:36.781 "write": true, 00:15:36.781 "unmap": true, 00:15:36.781 "flush": true, 00:15:36.781 "reset": true, 00:15:36.781 "nvme_admin": false, 00:15:36.781 "nvme_io": false, 00:15:36.781 "nvme_io_md": false, 00:15:36.781 "write_zeroes": true, 00:15:36.781 "zcopy": true, 00:15:36.781 "get_zone_info": false, 00:15:36.781 "zone_management": false, 00:15:36.781 "zone_append": false, 00:15:36.781 "compare": false, 00:15:36.781 "compare_and_write": false, 00:15:36.781 "abort": true, 00:15:36.781 "seek_hole": false, 00:15:36.781 "seek_data": false, 00:15:36.781 "copy": true, 00:15:36.781 "nvme_iov_md": false 00:15:36.781 }, 00:15:36.781 "memory_domains": [ 00:15:36.781 { 00:15:36.781 "dma_device_id": "system", 00:15:36.781 "dma_device_type": 1 00:15:36.781 }, 00:15:36.781 { 00:15:36.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.781 "dma_device_type": 2 00:15:36.781 } 00:15:36.781 ], 00:15:36.781 "driver_specific": {} 00:15:36.781 }' 00:15:36.781 15:04:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.781 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.038 [2024-07-12 15:04:02.818942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.038 [2024-07-12 15:04:02.818971] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.038 [2024-07-12 15:04:02.819003] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.038 [2024-07-12 15:04:02.819019] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.038 [2024-07-12 15:04:02.819024] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b563634f00 name Existed_Raid, state offline 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61521 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61521 ']' 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61521 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61521 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:37.038 killing process with pid 61521 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61521' 00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61521 00:15:37.038 [2024-07-12 15:04:02.846056] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
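Aside: the four jq probes repeated above (bdev_raid.sh@205 through @208) assert the block_size, md_size, md_interleave and dif_type of every configured base bdev reported in base_bdevs_list. A minimal standalone sketch of that check loop, assuming the same bdev_svc RPC socket used throughout this log is still listening, might look like:

```bash
# Sketch only: the per-base-bdev property checks seen at bdev_raid.sh@203-208 above.
# Assumes an SPDK bdev_svc app is listening on /var/tmp/spdk-raid.sock, as in this log.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]   # every base bdev uses 512-byte blocks
    [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done
```

The base bdev names themselves are harvested from the raid bdev's base_bdevs_list (the `jq ... select(.is_configured == true).name` call above), which is why the same four probes run once per discovered member.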
00:15:37.038 15:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61521 00:15:37.295 [2024-07-12 15:04:02.870690] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.295 15:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:37.295 00:15:37.295 real 0m27.543s 00:15:37.295 user 0m50.342s 00:15:37.295 sys 0m3.866s 00:15:37.295 15:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.295 ************************************ 00:15:37.295 END TEST raid_state_function_test_sb 00:15:37.295 ************************************ 00:15:37.295 15:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 15:04:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:37.295 15:04:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:37.295 15:04:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:37.295 15:04:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.295 15:04:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 ************************************ 00:15:37.295 START TEST raid_superblock_test 00:15:37.295 ************************************ 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62339 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62339 /var/tmp/spdk-raid.sock 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 62339 ']' 00:15:37.295 15:04:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.295 15:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 [2024-07-12 15:04:03.106969] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:15:37.295 [2024-07-12 15:04:03.107168] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:37.859 EAL: TSC is not safe to use in SMP mode 00:15:37.859 EAL: TSC is not invariant 00:15:37.859 [2024-07-12 15:04:03.652393] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.117 [2024-07-12 15:04:03.751802] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:38.117 [2024-07-12 15:04:03.754240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.117 [2024-07-12 15:04:03.755173] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.117 [2024-07-12 15:04:03.755192] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.375 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.376 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.376 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:38.633 malloc1 00:15:38.633 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.891 [2024-07-12 15:04:04.588874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
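Aside: the loop entered at bdev_raid.sh@415 above builds one malloc-plus-passthru pair per base bdev before the RAID is assembled. A condensed sketch of the equivalent RPC sequence, assuming the same bdev_svc socket used throughout this log, might be:

```bash
# Sketch of the base-bdev construction performed by bdev_raid.sh@415-425 and @429 above.
# Assumes the bdev_svc app started at bdev_raid.sh@410 is listening on /var/tmp/spdk-raid.sock.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"                 # 32 MiB backing bdev, 512-byte blocks
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"              # fixed UUID per pt bdev
done
# Assemble the concat RAID with a 64 KiB strip size; -s writes the on-disk superblock.
$rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
```

The `-s` flag is the point of this test: the resulting raid_bdev1 reports `"superblock": true` in the bdev_raid_get_bdevs dump further down.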
00:15:38.891 [2024-07-12 15:04:04.588947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.891 [2024-07-12 15:04:04.588960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa34780 00:15:38.891 [2024-07-12 15:04:04.589106] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.891 [2024-07-12 15:04:04.590009] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.891 [2024-07-12 15:04:04.590043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.891 pt1 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.891 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:39.149 malloc2 00:15:39.149 15:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:39.408 [2024-07-12 15:04:05.088889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.408 [2024-07-12 15:04:05.088962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.408 [2024-07-12 15:04:05.088974] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa34c80 00:15:39.408 [2024-07-12 15:04:05.088983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.408 [2024-07-12 15:04:05.089676] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.408 [2024-07-12 15:04:05.089703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.408 pt2 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:39.408 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:39.666 malloc3 00:15:39.666 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:39.925 [2024-07-12 15:04:05.608917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:39.925 [2024-07-12 15:04:05.608976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.925 [2024-07-12 15:04:05.608989] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa35180 00:15:39.925 [2024-07-12 15:04:05.608997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.925 [2024-07-12 15:04:05.609664] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.925 [2024-07-12 15:04:05.609689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:39.925 pt3 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:39.925 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:40.183 malloc4 00:15:40.183 15:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:40.442 [2024-07-12 15:04:06.092929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:40.442 [2024-07-12 15:04:06.093001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.442 [2024-07-12 15:04:06.093014] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa35680 00:15:40.442 [2024-07-12 15:04:06.093022] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.442 [2024-07-12 15:04:06.093680] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.442 [2024-07-12 15:04:06.093708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:40.442 pt4 00:15:40.442 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:40.442 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:40.442 15:04:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:40.701 [2024-07-12 15:04:06.332953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.701 [2024-07-12 15:04:06.333529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.701 [2024-07-12 15:04:06.333561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.701 [2024-07-12 15:04:06.333572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:40.701 [2024-07-12 15:04:06.333628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x74c1fa35900 00:15:40.701 [2024-07-12 15:04:06.333634] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:40.701 [2024-07-12 15:04:06.333668] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x74c1fa97e20 00:15:40.701 [2024-07-12 15:04:06.333744] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x74c1fa35900 00:15:40.701 [2024-07-12 15:04:06.333749] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x74c1fa35900 00:15:40.701 [2024-07-12 15:04:06.333776] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.701 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.960 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.960 "name": "raid_bdev1", 00:15:40.960 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:40.960 "strip_size_kb": 64, 00:15:40.960 "state": "online", 00:15:40.960 "raid_level": "concat", 00:15:40.960 "superblock": true, 00:15:40.960 "num_base_bdevs": 4, 00:15:40.960 "num_base_bdevs_discovered": 4, 00:15:40.960 "num_base_bdevs_operational": 4, 00:15:40.960 "base_bdevs_list": [ 00:15:40.960 { 00:15:40.960 "name": "pt1", 00:15:40.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.960 "is_configured": true, 00:15:40.960 "data_offset": 2048, 00:15:40.960 "data_size": 
63488 00:15:40.960 }, 00:15:40.960 { 00:15:40.960 "name": "pt2", 00:15:40.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.960 "is_configured": true, 00:15:40.960 "data_offset": 2048, 00:15:40.960 "data_size": 63488 00:15:40.960 }, 00:15:40.960 { 00:15:40.960 "name": "pt3", 00:15:40.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.960 "is_configured": true, 00:15:40.960 "data_offset": 2048, 00:15:40.960 "data_size": 63488 00:15:40.960 }, 00:15:40.960 { 00:15:40.960 "name": "pt4", 00:15:40.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.960 "is_configured": true, 00:15:40.960 "data_offset": 2048, 00:15:40.960 "data_size": 63488 00:15:40.960 } 00:15:40.960 ] 00:15:40.960 }' 00:15:40.960 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.960 15:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.219 15:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:41.478 [2024-07-12 15:04:07.189025] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.478 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:41.478 "name": "raid_bdev1", 00:15:41.478 "aliases": [ 00:15:41.478 "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e" 00:15:41.478 ], 00:15:41.478 "product_name": "Raid Volume", 00:15:41.478 "block_size": 512, 00:15:41.478 "num_blocks": 253952, 00:15:41.478 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:41.478 "assigned_rate_limits": { 00:15:41.478 "rw_ios_per_sec": 0, 00:15:41.478 "rw_mbytes_per_sec": 0, 00:15:41.478 "r_mbytes_per_sec": 0, 00:15:41.478 "w_mbytes_per_sec": 0 00:15:41.478 }, 00:15:41.478 "claimed": false, 00:15:41.478 "zoned": false, 00:15:41.478 "supported_io_types": { 00:15:41.478 "read": true, 00:15:41.478 "write": true, 00:15:41.478 "unmap": true, 00:15:41.478 "flush": true, 00:15:41.478 "reset": true, 00:15:41.478 "nvme_admin": false, 00:15:41.478 "nvme_io": false, 00:15:41.478 "nvme_io_md": false, 00:15:41.478 "write_zeroes": true, 00:15:41.478 "zcopy": false, 00:15:41.478 "get_zone_info": false, 00:15:41.478 "zone_management": false, 00:15:41.478 "zone_append": false, 00:15:41.478 "compare": false, 00:15:41.478 "compare_and_write": false, 00:15:41.478 "abort": false, 00:15:41.478 "seek_hole": false, 00:15:41.478 "seek_data": false, 00:15:41.478 "copy": false, 00:15:41.478 "nvme_iov_md": false 00:15:41.478 }, 00:15:41.478 "memory_domains": [ 00:15:41.478 { 00:15:41.478 "dma_device_id": "system", 00:15:41.478 "dma_device_type": 1 00:15:41.478 }, 00:15:41.478 { 00:15:41.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.479 "dma_device_type": 2 
00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "system", 00:15:41.479 "dma_device_type": 1 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.479 "dma_device_type": 2 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "system", 00:15:41.479 "dma_device_type": 1 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.479 "dma_device_type": 2 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "system", 00:15:41.479 "dma_device_type": 1 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.479 "dma_device_type": 2 00:15:41.479 } 00:15:41.479 ], 00:15:41.479 "driver_specific": { 00:15:41.479 "raid": { 00:15:41.479 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:41.479 "strip_size_kb": 64, 00:15:41.479 "state": "online", 00:15:41.479 "raid_level": "concat", 00:15:41.479 "superblock": true, 00:15:41.479 "num_base_bdevs": 4, 00:15:41.479 "num_base_bdevs_discovered": 4, 00:15:41.479 "num_base_bdevs_operational": 4, 00:15:41.479 "base_bdevs_list": [ 00:15:41.479 { 00:15:41.479 "name": "pt1", 00:15:41.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.479 "is_configured": true, 00:15:41.479 "data_offset": 2048, 00:15:41.479 "data_size": 63488 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "name": "pt2", 00:15:41.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.479 "is_configured": true, 00:15:41.479 "data_offset": 2048, 00:15:41.479 "data_size": 63488 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "name": "pt3", 00:15:41.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.479 "is_configured": true, 00:15:41.479 "data_offset": 2048, 00:15:41.479 "data_size": 63488 00:15:41.479 }, 00:15:41.479 { 00:15:41.479 "name": "pt4", 00:15:41.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.479 "is_configured": true, 00:15:41.479 "data_offset": 2048, 00:15:41.479 "data_size": 63488 00:15:41.479 } 00:15:41.479 ] 00:15:41.479 } 00:15:41.479 } 00:15:41.479 }' 00:15:41.479 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.479 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:41.479 pt2 00:15:41.479 pt3 00:15:41.479 pt4' 00:15:41.479 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.479 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:41.479 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.738 "name": "pt1", 00:15:41.738 "aliases": [ 00:15:41.738 "00000000-0000-0000-0000-000000000001" 00:15:41.738 ], 00:15:41.738 "product_name": "passthru", 00:15:41.738 "block_size": 512, 00:15:41.738 "num_blocks": 65536, 00:15:41.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.738 "assigned_rate_limits": { 00:15:41.738 "rw_ios_per_sec": 0, 00:15:41.738 "rw_mbytes_per_sec": 0, 00:15:41.738 "r_mbytes_per_sec": 0, 00:15:41.738 "w_mbytes_per_sec": 0 00:15:41.738 }, 00:15:41.738 "claimed": true, 00:15:41.738 "claim_type": "exclusive_write", 00:15:41.738 "zoned": false, 00:15:41.738 "supported_io_types": { 00:15:41.738 "read": true, 00:15:41.738 "write": 
true, 00:15:41.738 "unmap": true, 00:15:41.738 "flush": true, 00:15:41.738 "reset": true, 00:15:41.738 "nvme_admin": false, 00:15:41.738 "nvme_io": false, 00:15:41.738 "nvme_io_md": false, 00:15:41.738 "write_zeroes": true, 00:15:41.738 "zcopy": true, 00:15:41.738 "get_zone_info": false, 00:15:41.738 "zone_management": false, 00:15:41.738 "zone_append": false, 00:15:41.738 "compare": false, 00:15:41.738 "compare_and_write": false, 00:15:41.738 "abort": true, 00:15:41.738 "seek_hole": false, 00:15:41.738 "seek_data": false, 00:15:41.738 "copy": true, 00:15:41.738 "nvme_iov_md": false 00:15:41.738 }, 00:15:41.738 "memory_domains": [ 00:15:41.738 { 00:15:41.738 "dma_device_id": "system", 00:15:41.738 "dma_device_type": 1 00:15:41.738 }, 00:15:41.738 { 00:15:41.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.738 "dma_device_type": 2 00:15:41.738 } 00:15:41.738 ], 00:15:41.738 "driver_specific": { 00:15:41.738 "passthru": { 00:15:41.738 "name": "pt1", 00:15:41.738 "base_bdev_name": "malloc1" 00:15:41.738 } 00:15:41.738 } 00:15:41.738 }' 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:41.738 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.997 "name": "pt2", 00:15:41.997 "aliases": [ 00:15:41.997 "00000000-0000-0000-0000-000000000002" 00:15:41.997 ], 00:15:41.997 "product_name": "passthru", 00:15:41.997 "block_size": 512, 00:15:41.997 "num_blocks": 65536, 00:15:41.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.997 "assigned_rate_limits": { 00:15:41.997 "rw_ios_per_sec": 0, 00:15:41.997 "rw_mbytes_per_sec": 0, 00:15:41.997 "r_mbytes_per_sec": 0, 00:15:41.997 "w_mbytes_per_sec": 0 00:15:41.997 }, 00:15:41.997 "claimed": true, 00:15:41.997 "claim_type": "exclusive_write", 00:15:41.997 "zoned": false, 00:15:41.997 "supported_io_types": { 00:15:41.997 "read": true, 00:15:41.997 "write": true, 00:15:41.997 "unmap": true, 00:15:41.997 "flush": true, 00:15:41.997 "reset": true, 00:15:41.997 "nvme_admin": false, 00:15:41.997 "nvme_io": false, 
00:15:41.997 "nvme_io_md": false, 00:15:41.997 "write_zeroes": true, 00:15:41.997 "zcopy": true, 00:15:41.997 "get_zone_info": false, 00:15:41.997 "zone_management": false, 00:15:41.997 "zone_append": false, 00:15:41.997 "compare": false, 00:15:41.997 "compare_and_write": false, 00:15:41.997 "abort": true, 00:15:41.997 "seek_hole": false, 00:15:41.997 "seek_data": false, 00:15:41.997 "copy": true, 00:15:41.997 "nvme_iov_md": false 00:15:41.997 }, 00:15:41.997 "memory_domains": [ 00:15:41.997 { 00:15:41.997 "dma_device_id": "system", 00:15:41.997 "dma_device_type": 1 00:15:41.997 }, 00:15:41.997 { 00:15:41.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.997 "dma_device_type": 2 00:15:41.997 } 00:15:41.997 ], 00:15:41.997 "driver_specific": { 00:15:41.997 "passthru": { 00:15:41.997 "name": "pt2", 00:15:41.997 "base_bdev_name": "malloc2" 00:15:41.997 } 00:15:41.997 } 00:15:41.997 }' 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.997 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:42.256 15:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.256 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.256 "name": "pt3", 00:15:42.256 "aliases": [ 00:15:42.256 "00000000-0000-0000-0000-000000000003" 00:15:42.256 ], 00:15:42.256 "product_name": "passthru", 00:15:42.256 "block_size": 512, 00:15:42.256 "num_blocks": 65536, 00:15:42.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.256 "assigned_rate_limits": { 00:15:42.256 "rw_ios_per_sec": 0, 00:15:42.256 "rw_mbytes_per_sec": 0, 00:15:42.256 "r_mbytes_per_sec": 0, 00:15:42.256 "w_mbytes_per_sec": 0 00:15:42.256 }, 00:15:42.256 "claimed": true, 00:15:42.256 "claim_type": "exclusive_write", 00:15:42.256 "zoned": false, 00:15:42.256 "supported_io_types": { 00:15:42.256 "read": true, 00:15:42.256 "write": true, 00:15:42.256 "unmap": true, 00:15:42.256 "flush": true, 00:15:42.256 "reset": true, 00:15:42.256 "nvme_admin": false, 00:15:42.256 "nvme_io": false, 00:15:42.256 "nvme_io_md": false, 00:15:42.256 "write_zeroes": true, 00:15:42.256 "zcopy": true, 00:15:42.256 "get_zone_info": false, 00:15:42.256 
"zone_management": false, 00:15:42.256 "zone_append": false, 00:15:42.256 "compare": false, 00:15:42.256 "compare_and_write": false, 00:15:42.256 "abort": true, 00:15:42.256 "seek_hole": false, 00:15:42.256 "seek_data": false, 00:15:42.256 "copy": true, 00:15:42.256 "nvme_iov_md": false 00:15:42.256 }, 00:15:42.256 "memory_domains": [ 00:15:42.256 { 00:15:42.256 "dma_device_id": "system", 00:15:42.256 "dma_device_type": 1 00:15:42.256 }, 00:15:42.256 { 00:15:42.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.256 "dma_device_type": 2 00:15:42.256 } 00:15:42.256 ], 00:15:42.256 "driver_specific": { 00:15:42.256 "passthru": { 00:15:42.256 "name": "pt3", 00:15:42.256 "base_bdev_name": "malloc3" 00:15:42.256 } 00:15:42.256 } 00:15:42.256 }' 00:15:42.256 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:42.515 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.773 "name": "pt4", 00:15:42.773 "aliases": [ 00:15:42.773 "00000000-0000-0000-0000-000000000004" 00:15:42.773 ], 00:15:42.773 "product_name": "passthru", 00:15:42.773 "block_size": 512, 00:15:42.773 "num_blocks": 65536, 00:15:42.773 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.773 "assigned_rate_limits": { 00:15:42.773 "rw_ios_per_sec": 0, 00:15:42.773 "rw_mbytes_per_sec": 0, 00:15:42.773 "r_mbytes_per_sec": 0, 00:15:42.773 "w_mbytes_per_sec": 0 00:15:42.773 }, 00:15:42.773 "claimed": true, 00:15:42.773 "claim_type": "exclusive_write", 00:15:42.773 "zoned": false, 00:15:42.773 "supported_io_types": { 00:15:42.773 "read": true, 00:15:42.773 "write": true, 00:15:42.773 "unmap": true, 00:15:42.773 "flush": true, 00:15:42.773 "reset": true, 00:15:42.773 "nvme_admin": false, 00:15:42.773 "nvme_io": false, 00:15:42.773 "nvme_io_md": false, 00:15:42.773 "write_zeroes": true, 00:15:42.773 "zcopy": true, 00:15:42.773 "get_zone_info": false, 00:15:42.773 "zone_management": false, 00:15:42.773 "zone_append": false, 00:15:42.773 "compare": false, 00:15:42.773 "compare_and_write": false, 00:15:42.773 "abort": 
true, 00:15:42.773 "seek_hole": false, 00:15:42.773 "seek_data": false, 00:15:42.773 "copy": true, 00:15:42.773 "nvme_iov_md": false 00:15:42.773 }, 00:15:42.773 "memory_domains": [ 00:15:42.773 { 00:15:42.773 "dma_device_id": "system", 00:15:42.773 "dma_device_type": 1 00:15:42.773 }, 00:15:42.773 { 00:15:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.773 "dma_device_type": 2 00:15:42.773 } 00:15:42.773 ], 00:15:42.773 "driver_specific": { 00:15:42.773 "passthru": { 00:15:42.773 "name": "pt4", 00:15:42.773 "base_bdev_name": "malloc4" 00:15:42.773 } 00:15:42.773 } 00:15:42.773 }' 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.773 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:43.032 [2024-07-12 15:04:08.709103] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.032 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=fbdbc4a4-405f-11ef-b2a4-e9dca065e82e 00:15:43.032 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z fbdbc4a4-405f-11ef-b2a4-e9dca065e82e ']' 00:15:43.032 15:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:43.291 [2024-07-12 15:04:09.041057] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.291 [2024-07-12 15:04:09.041080] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.291 [2024-07-12 15:04:09.041120] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.291 [2024-07-12 15:04:09.041136] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.291 [2024-07-12 15:04:09.041141] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x74c1fa35900 name raid_bdev1, state offline 00:15:43.291 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.291 15:04:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:43.549 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:43.549 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:43.549 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.549 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:43.808 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.808 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:44.066 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.066 15:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:44.324 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.324 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:44.583 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:44.583 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:44.841 15:04:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:45.100 [2024-07-12 15:04:10.865160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.100 [2024-07-12 15:04:10.865785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.100 [2024-07-12 15:04:10.865804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:45.100 [2024-07-12 15:04:10.865813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:45.100 [2024-07-12 15:04:10.865828] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:45.100 [2024-07-12 15:04:10.865866] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:45.100 [2024-07-12 15:04:10.865878] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:45.100 [2024-07-12 15:04:10.865887] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:45.100 [2024-07-12 15:04:10.865896] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.100 [2024-07-12 15:04:10.865900] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x74c1fa35680 name raid_bdev1, state configuring 00:15:45.100 request: 00:15:45.100 { 00:15:45.100 "name": "raid_bdev1", 00:15:45.100 "raid_level": "concat", 00:15:45.100 "base_bdevs": [ 00:15:45.100 "malloc1", 00:15:45.100 "malloc2", 00:15:45.100 "malloc3", 00:15:45.100 "malloc4" 00:15:45.100 ], 00:15:45.100 "strip_size_kb": 64, 00:15:45.100 "superblock": false, 00:15:45.100 "method": "bdev_raid_create", 00:15:45.100 "req_id": 1 00:15:45.100 } 00:15:45.100 Got JSON-RPC error response 00:15:45.100 response: 00:15:45.100 { 00:15:45.100 "code": -17, 00:15:45.100 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.100 } 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.100 15:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:45.363 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:45.363 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:45.363 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.662 [2024-07-12 15:04:11.385173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.662 [2024-07-12 15:04:11.385231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:45.662 [2024-07-12 15:04:11.385243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa35180 00:15:45.662 [2024-07-12 15:04:11.385251] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.662 [2024-07-12 15:04:11.385897] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.662 [2024-07-12 15:04:11.385923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.662 [2024-07-12 15:04:11.385950] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:45.662 [2024-07-12 15:04:11.385969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.662 pt1 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.662 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.663 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.921 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.921 "name": "raid_bdev1", 00:15:45.921 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:45.921 "strip_size_kb": 64, 00:15:45.921 "state": "configuring", 00:15:45.921 "raid_level": "concat", 00:15:45.921 "superblock": true, 00:15:45.921 "num_base_bdevs": 4, 00:15:45.921 "num_base_bdevs_discovered": 1, 00:15:45.921 "num_base_bdevs_operational": 4, 00:15:45.921 "base_bdevs_list": [ 00:15:45.921 { 00:15:45.921 "name": "pt1", 00:15:45.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.921 "is_configured": true, 00:15:45.921 "data_offset": 2048, 00:15:45.921 "data_size": 63488 00:15:45.921 }, 00:15:45.921 { 00:15:45.921 "name": null, 00:15:45.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.921 "is_configured": false, 00:15:45.921 "data_offset": 2048, 00:15:45.921 "data_size": 63488 00:15:45.921 }, 00:15:45.921 { 00:15:45.921 "name": null, 00:15:45.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.921 "is_configured": false, 00:15:45.921 "data_offset": 2048, 00:15:45.922 "data_size": 63488 00:15:45.922 }, 00:15:45.922 { 00:15:45.922 "name": null, 00:15:45.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.922 "is_configured": false, 00:15:45.922 "data_offset": 2048, 00:15:45.922 "data_size": 63488 
00:15:45.922 } 00:15:45.922 ] 00:15:45.922 }' 00:15:45.922 15:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.922 15:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.488 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:46.488 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.488 [2024-07-12 15:04:12.289208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.488 [2024-07-12 15:04:12.289281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.488 [2024-07-12 15:04:12.289309] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa34780 00:15:46.488 [2024-07-12 15:04:12.289317] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.488 [2024-07-12 15:04:12.289433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.488 [2024-07-12 15:04:12.289444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.488 [2024-07-12 15:04:12.289469] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.488 [2024-07-12 15:04:12.289478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.488 pt2 00:15:46.488 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:46.746 [2024-07-12 15:04:12.557228] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.746 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.004 "name": "raid_bdev1", 00:15:47.004 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:47.004 "strip_size_kb": 64, 00:15:47.004 "state": "configuring", 00:15:47.004 "raid_level": 
"concat", 00:15:47.004 "superblock": true, 00:15:47.004 "num_base_bdevs": 4, 00:15:47.004 "num_base_bdevs_discovered": 1, 00:15:47.004 "num_base_bdevs_operational": 4, 00:15:47.004 "base_bdevs_list": [ 00:15:47.004 { 00:15:47.004 "name": "pt1", 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.004 "is_configured": true, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 }, 00:15:47.004 { 00:15:47.004 "name": null, 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.004 "is_configured": false, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 }, 00:15:47.004 { 00:15:47.004 "name": null, 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.004 "is_configured": false, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 }, 00:15:47.004 { 00:15:47.004 "name": null, 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.004 "is_configured": false, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 } 00:15:47.004 ] 00:15:47.004 }' 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.004 15:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.568 [2024-07-12 15:04:13.373274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.568 [2024-07-12 15:04:13.373324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.568 [2024-07-12 15:04:13.373336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa34780 00:15:47.568 [2024-07-12 15:04:13.373344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.568 [2024-07-12 15:04:13.373459] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.568 [2024-07-12 15:04:13.373470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.568 [2024-07-12 15:04:13.373494] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.568 [2024-07-12 15:04:13.373503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.568 pt2 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:47.568 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.826 [2024-07-12 15:04:13.645285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.826 [2024-07-12 15:04:13.645344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.826 [2024-07-12 15:04:13.645355] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa35b80 00:15:47.826 
[2024-07-12 15:04:13.645363] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.826 [2024-07-12 15:04:13.645475] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.826 [2024-07-12 15:04:13.645487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.826 [2024-07-12 15:04:13.645518] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.826 [2024-07-12 15:04:13.645527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.826 pt3 00:15:48.084 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:48.084 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:48.084 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:48.342 [2024-07-12 15:04:13.917298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:48.342 [2024-07-12 15:04:13.917346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.342 [2024-07-12 15:04:13.917358] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x74c1fa35900 00:15:48.342 [2024-07-12 15:04:13.917365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.342 [2024-07-12 15:04:13.917474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.342 [2024-07-12 15:04:13.917485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:48.342 [2024-07-12 15:04:13.917508] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:48.342 [2024-07-12 15:04:13.917517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:48.342 [2024-07-12 15:04:13.917548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x74c1fa34c80 00:15:48.342 [2024-07-12 15:04:13.917553] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:48.342 [2024-07-12 15:04:13.917574] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x74c1fa97e20 00:15:48.342 [2024-07-12 15:04:13.917627] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x74c1fa34c80 00:15:48.342 [2024-07-12 15:04:13.917632] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x74c1fa34c80 00:15:48.342 [2024-07-12 15:04:13.917654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.342 pt4 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.342 15:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.610 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.610 "name": "raid_bdev1", 00:15:48.610 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:48.610 "strip_size_kb": 64, 00:15:48.610 "state": "online", 00:15:48.610 "raid_level": "concat", 00:15:48.610 "superblock": true, 00:15:48.610 "num_base_bdevs": 4, 00:15:48.610 "num_base_bdevs_discovered": 4, 00:15:48.610 "num_base_bdevs_operational": 4, 00:15:48.610 "base_bdevs_list": [ 00:15:48.610 { 00:15:48.610 "name": "pt1", 00:15:48.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.610 "is_configured": true, 00:15:48.610 "data_offset": 2048, 00:15:48.610 "data_size": 63488 00:15:48.610 }, 00:15:48.610 { 00:15:48.610 "name": "pt2", 00:15:48.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.610 "is_configured": true, 00:15:48.610 "data_offset": 2048, 00:15:48.610 "data_size": 63488 00:15:48.610 }, 00:15:48.610 { 00:15:48.610 "name": "pt3", 00:15:48.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.610 "is_configured": true, 00:15:48.610 "data_offset": 2048, 00:15:48.610 "data_size": 63488 00:15:48.610 }, 00:15:48.610 { 00:15:48.610 "name": "pt4", 00:15:48.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.610 "is_configured": true, 00:15:48.610 "data_offset": 2048, 00:15:48.610 "data_size": 63488 00:15:48.610 } 00:15:48.610 ] 00:15:48.610 }' 00:15:48.610 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.610 15:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:48.868 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:49.147 [2024-07-12 15:04:14.721403] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:49.147 "name": "raid_bdev1", 00:15:49.147 "aliases": [ 00:15:49.147 "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e" 00:15:49.147 ], 00:15:49.147 "product_name": "Raid Volume", 00:15:49.147 "block_size": 512, 00:15:49.147 "num_blocks": 253952, 00:15:49.147 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:49.147 "assigned_rate_limits": { 00:15:49.147 "rw_ios_per_sec": 0, 00:15:49.147 "rw_mbytes_per_sec": 0, 00:15:49.147 "r_mbytes_per_sec": 0, 00:15:49.147 "w_mbytes_per_sec": 0 00:15:49.147 }, 00:15:49.147 "claimed": false, 00:15:49.147 "zoned": false, 00:15:49.147 "supported_io_types": { 00:15:49.147 "read": true, 00:15:49.147 "write": true, 00:15:49.147 "unmap": true, 00:15:49.147 "flush": true, 00:15:49.147 "reset": true, 00:15:49.147 "nvme_admin": false, 00:15:49.147 "nvme_io": false, 00:15:49.147 "nvme_io_md": false, 00:15:49.147 "write_zeroes": true, 00:15:49.147 "zcopy": false, 00:15:49.147 "get_zone_info": false, 00:15:49.147 "zone_management": false, 00:15:49.147 "zone_append": false, 00:15:49.147 "compare": false, 00:15:49.147 "compare_and_write": false, 00:15:49.147 "abort": false, 00:15:49.147 "seek_hole": false, 00:15:49.147 "seek_data": false, 00:15:49.147 "copy": false, 00:15:49.147 "nvme_iov_md": false 00:15:49.147 }, 00:15:49.147 "memory_domains": [ 00:15:49.147 { 00:15:49.147 "dma_device_id": "system", 00:15:49.147 "dma_device_type": 1 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.147 "dma_device_type": 2 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "system", 00:15:49.147 "dma_device_type": 1 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.147 "dma_device_type": 2 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "system", 00:15:49.147 "dma_device_type": 1 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.147 "dma_device_type": 2 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "system", 00:15:49.147 "dma_device_type": 1 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.147 "dma_device_type": 2 00:15:49.147 } 00:15:49.147 ], 00:15:49.147 "driver_specific": { 00:15:49.147 "raid": { 00:15:49.147 "uuid": "fbdbc4a4-405f-11ef-b2a4-e9dca065e82e", 00:15:49.147 "strip_size_kb": 64, 00:15:49.147 "state": "online", 00:15:49.147 "raid_level": "concat", 00:15:49.147 "superblock": true, 00:15:49.147 "num_base_bdevs": 4, 00:15:49.147 "num_base_bdevs_discovered": 4, 00:15:49.147 "num_base_bdevs_operational": 4, 00:15:49.147 "base_bdevs_list": [ 00:15:49.147 { 00:15:49.147 "name": "pt1", 00:15:49.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.147 "is_configured": true, 00:15:49.147 "data_offset": 2048, 00:15:49.147 "data_size": 63488 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "name": "pt2", 00:15:49.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.147 "is_configured": true, 00:15:49.147 "data_offset": 2048, 00:15:49.147 "data_size": 63488 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "name": "pt3", 00:15:49.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.147 "is_configured": true, 00:15:49.147 "data_offset": 2048, 00:15:49.147 "data_size": 63488 00:15:49.147 }, 00:15:49.147 { 00:15:49.147 "name": "pt4", 00:15:49.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.147 "is_configured": true, 00:15:49.147 "data_offset": 2048, 00:15:49.147 "data_size": 63488 00:15:49.147 } 
00:15:49.147 ] 00:15:49.147 } 00:15:49.147 } 00:15:49.147 }' 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:49.147 pt2 00:15:49.147 pt3 00:15:49.147 pt4' 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:49.147 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.405 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.405 "name": "pt1", 00:15:49.405 "aliases": [ 00:15:49.405 "00000000-0000-0000-0000-000000000001" 00:15:49.405 ], 00:15:49.405 "product_name": "passthru", 00:15:49.405 "block_size": 512, 00:15:49.405 "num_blocks": 65536, 00:15:49.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.405 "assigned_rate_limits": { 00:15:49.405 "rw_ios_per_sec": 0, 00:15:49.405 "rw_mbytes_per_sec": 0, 00:15:49.405 "r_mbytes_per_sec": 0, 00:15:49.405 "w_mbytes_per_sec": 0 00:15:49.405 }, 00:15:49.405 "claimed": true, 00:15:49.405 "claim_type": "exclusive_write", 00:15:49.405 "zoned": false, 00:15:49.405 "supported_io_types": { 00:15:49.405 "read": true, 00:15:49.405 "write": true, 00:15:49.405 "unmap": true, 00:15:49.405 "flush": true, 00:15:49.405 "reset": true, 00:15:49.405 "nvme_admin": false, 00:15:49.405 "nvme_io": false, 00:15:49.405 "nvme_io_md": false, 00:15:49.405 "write_zeroes": true, 00:15:49.405 "zcopy": true, 00:15:49.405 "get_zone_info": false, 00:15:49.405 "zone_management": false, 00:15:49.405 "zone_append": false, 00:15:49.405 "compare": false, 00:15:49.405 "compare_and_write": false, 00:15:49.405 "abort": true, 00:15:49.405 "seek_hole": false, 00:15:49.405 "seek_data": false, 00:15:49.405 "copy": true, 00:15:49.405 "nvme_iov_md": false 00:15:49.405 }, 00:15:49.405 "memory_domains": [ 00:15:49.405 { 00:15:49.405 "dma_device_id": "system", 00:15:49.405 "dma_device_type": 1 00:15:49.405 }, 00:15:49.405 { 00:15:49.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.405 "dma_device_type": 2 00:15:49.405 } 00:15:49.405 ], 00:15:49.405 "driver_specific": { 00:15:49.405 "passthru": { 00:15:49.405 "name": "pt1", 00:15:49.405 "base_bdev_name": "malloc1" 00:15:49.405 } 00:15:49.405 } 00:15:49.405 }' 00:15:49.405 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.405 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.405 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.405 15:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:49.405 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.677 "name": "pt2", 00:15:49.677 "aliases": [ 00:15:49.677 "00000000-0000-0000-0000-000000000002" 00:15:49.677 ], 00:15:49.677 "product_name": "passthru", 00:15:49.677 "block_size": 512, 00:15:49.677 "num_blocks": 65536, 00:15:49.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.677 "assigned_rate_limits": { 00:15:49.677 "rw_ios_per_sec": 0, 00:15:49.677 "rw_mbytes_per_sec": 0, 00:15:49.677 "r_mbytes_per_sec": 0, 00:15:49.677 "w_mbytes_per_sec": 0 00:15:49.677 }, 00:15:49.677 "claimed": true, 00:15:49.677 "claim_type": "exclusive_write", 00:15:49.677 "zoned": false, 00:15:49.677 "supported_io_types": { 00:15:49.677 "read": true, 00:15:49.677 "write": true, 00:15:49.677 "unmap": true, 00:15:49.677 "flush": true, 00:15:49.677 "reset": true, 00:15:49.677 "nvme_admin": false, 00:15:49.677 "nvme_io": false, 00:15:49.677 "nvme_io_md": false, 00:15:49.677 "write_zeroes": true, 00:15:49.677 "zcopy": true, 00:15:49.677 "get_zone_info": false, 00:15:49.677 "zone_management": false, 00:15:49.677 "zone_append": false, 00:15:49.677 "compare": false, 00:15:49.677 "compare_and_write": false, 00:15:49.677 "abort": true, 00:15:49.677 "seek_hole": false, 00:15:49.677 "seek_data": false, 00:15:49.677 "copy": true, 00:15:49.677 "nvme_iov_md": false 00:15:49.677 }, 00:15:49.677 "memory_domains": [ 00:15:49.677 { 00:15:49.677 "dma_device_id": "system", 00:15:49.677 "dma_device_type": 1 00:15:49.677 }, 00:15:49.677 { 00:15:49.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.677 "dma_device_type": 2 00:15:49.677 } 00:15:49.677 ], 00:15:49.677 "driver_specific": { 00:15:49.677 "passthru": { 00:15:49.677 "name": "pt2", 00:15:49.677 "base_bdev_name": "malloc2" 00:15:49.677 } 00:15:49.677 } 00:15:49.677 }' 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.677 
15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:49.677 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.935 "name": "pt3", 00:15:49.935 "aliases": [ 00:15:49.935 "00000000-0000-0000-0000-000000000003" 00:15:49.935 ], 00:15:49.935 "product_name": "passthru", 00:15:49.935 "block_size": 512, 00:15:49.935 "num_blocks": 65536, 00:15:49.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.935 "assigned_rate_limits": { 00:15:49.935 "rw_ios_per_sec": 0, 00:15:49.935 "rw_mbytes_per_sec": 0, 00:15:49.935 "r_mbytes_per_sec": 0, 00:15:49.935 "w_mbytes_per_sec": 0 00:15:49.935 }, 00:15:49.935 "claimed": true, 00:15:49.935 "claim_type": "exclusive_write", 00:15:49.935 "zoned": false, 00:15:49.935 "supported_io_types": { 00:15:49.935 "read": true, 00:15:49.935 "write": true, 00:15:49.935 "unmap": true, 00:15:49.935 "flush": true, 00:15:49.935 "reset": true, 00:15:49.935 "nvme_admin": false, 00:15:49.935 "nvme_io": false, 00:15:49.935 "nvme_io_md": false, 00:15:49.935 "write_zeroes": true, 00:15:49.935 "zcopy": true, 00:15:49.935 "get_zone_info": false, 00:15:49.935 "zone_management": false, 00:15:49.935 "zone_append": false, 00:15:49.935 "compare": false, 00:15:49.935 "compare_and_write": false, 00:15:49.935 "abort": true, 00:15:49.935 "seek_hole": false, 00:15:49.935 "seek_data": false, 00:15:49.935 "copy": true, 00:15:49.935 "nvme_iov_md": false 00:15:49.935 }, 00:15:49.935 "memory_domains": [ 00:15:49.935 { 00:15:49.935 "dma_device_id": "system", 00:15:49.935 "dma_device_type": 1 00:15:49.935 }, 00:15:49.935 { 00:15:49.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.935 "dma_device_type": 2 00:15:49.935 } 00:15:49.935 ], 00:15:49.935 "driver_specific": { 00:15:49.935 "passthru": { 00:15:49.935 "name": "pt3", 00:15:49.935 "base_bdev_name": "malloc3" 00:15:49.935 } 00:15:49.935 } 00:15:49.935 }' 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.935 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:49.936 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.194 "name": "pt4", 00:15:50.194 "aliases": [ 00:15:50.194 "00000000-0000-0000-0000-000000000004" 00:15:50.194 ], 00:15:50.194 "product_name": "passthru", 00:15:50.194 "block_size": 512, 00:15:50.194 "num_blocks": 65536, 00:15:50.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.194 "assigned_rate_limits": { 00:15:50.194 "rw_ios_per_sec": 0, 00:15:50.194 "rw_mbytes_per_sec": 0, 00:15:50.194 "r_mbytes_per_sec": 0, 00:15:50.194 "w_mbytes_per_sec": 0 00:15:50.194 }, 00:15:50.194 "claimed": true, 00:15:50.194 "claim_type": "exclusive_write", 00:15:50.194 "zoned": false, 00:15:50.194 "supported_io_types": { 00:15:50.194 "read": true, 00:15:50.194 "write": true, 00:15:50.194 "unmap": true, 00:15:50.194 "flush": true, 00:15:50.194 "reset": true, 00:15:50.194 "nvme_admin": false, 00:15:50.194 "nvme_io": false, 00:15:50.194 "nvme_io_md": false, 00:15:50.194 "write_zeroes": true, 00:15:50.194 "zcopy": true, 00:15:50.194 "get_zone_info": false, 00:15:50.194 "zone_management": false, 00:15:50.194 "zone_append": false, 00:15:50.194 "compare": false, 00:15:50.194 "compare_and_write": false, 00:15:50.194 "abort": true, 00:15:50.194 "seek_hole": false, 00:15:50.194 "seek_data": false, 00:15:50.194 "copy": true, 00:15:50.194 "nvme_iov_md": false 00:15:50.194 }, 00:15:50.194 "memory_domains": [ 00:15:50.194 { 00:15:50.194 "dma_device_id": "system", 00:15:50.194 "dma_device_type": 1 00:15:50.194 }, 00:15:50.194 { 00:15:50.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.194 "dma_device_type": 2 00:15:50.194 } 00:15:50.194 ], 00:15:50.194 "driver_specific": { 00:15:50.194 "passthru": { 00:15:50.194 "name": "pt4", 00:15:50.194 "base_bdev_name": "malloc4" 00:15:50.194 } 00:15:50.194 } 00:15:50.194 }' 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.194 15:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:50.194 15:04:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:50.452 [2024-07-12 15:04:16.241559] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' fbdbc4a4-405f-11ef-b2a4-e9dca065e82e '!=' fbdbc4a4-405f-11ef-b2a4-e9dca065e82e ']' 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62339 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62339 ']' 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62339 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62339 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62339' 00:15:50.452 killing process with pid 62339 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62339 00:15:50.452 [2024-07-12 15:04:16.272015] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.452 [2024-07-12 15:04:16.272039] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.452 [2024-07-12 15:04:16.272055] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.452 [2024-07-12 15:04:16.272059] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x74c1fa34c80 name raid_bdev1, state offline 00:15:50.452 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62339 00:15:50.710 [2024-07-12 15:04:16.295985] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.710 15:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:50.710 00:15:50.710 real 0m13.376s 00:15:50.710 user 0m23.805s 00:15:50.710 sys 0m2.138s 00:15:50.710 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.710 15:04:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 ************************************ 00:15:50.710 END TEST raid_superblock_test 00:15:50.710 ************************************ 00:15:50.710 15:04:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:50.710 15:04:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:50.710 15:04:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:50.710 15:04:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:50.710 15:04:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 ************************************ 00:15:50.710 START TEST raid_read_error_test 00:15:50.710 ************************************ 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4VDm79ZtA9 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62740 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
waitforlisten 62740 /var/tmp/spdk-raid.sock 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62740 ']' 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.710 15:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 [2024-07-12 15:04:16.534423] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:15:50.710 [2024-07-12 15:04:16.534679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:51.277 EAL: TSC is not safe to use in SMP mode 00:15:51.277 EAL: TSC is not invariant 00:15:51.277 [2024-07-12 15:04:17.046169] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.536 [2024-07-12 15:04:17.131220] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:51.536 [2024-07-12 15:04:17.133316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.536 [2024-07-12 15:04:17.134065] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.536 [2024-07-12 15:04:17.134080] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.794 15:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.794 15:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:51.794 15:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:51.794 15:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:52.053 BaseBdev1_malloc 00:15:52.053 15:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:52.310 true 00:15:52.310 15:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:52.568 [2024-07-12 15:04:18.257863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:52.568 [2024-07-12 15:04:18.257927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.568 [2024-07-12 15:04:18.257969] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a551434780 00:15:52.568 [2024-07-12 15:04:18.257977] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:52.568 [2024-07-12 15:04:18.258670] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.568 [2024-07-12 15:04:18.258695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.568 BaseBdev1 00:15:52.568 15:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:52.568 15:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:52.826 BaseBdev2_malloc 00:15:52.826 15:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:53.083 true 00:15:53.083 15:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:53.340 [2024-07-12 15:04:19.061895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:53.340 [2024-07-12 15:04:19.061950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.340 [2024-07-12 15:04:19.061975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a551434c80 00:15:53.340 [2024-07-12 15:04:19.061984] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.340 [2024-07-12 15:04:19.062643] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.340 [2024-07-12 15:04:19.062671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.340 BaseBdev2 00:15:53.340 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:53.340 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:53.597 BaseBdev3_malloc 00:15:53.597 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:53.855 true 00:15:53.855 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:54.112 [2024-07-12 15:04:19.817932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:54.112 [2024-07-12 15:04:19.817987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.112 [2024-07-12 15:04:19.818013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a551435180 00:15:54.112 [2024-07-12 15:04:19.818022] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.112 [2024-07-12 15:04:19.818678] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.112 [2024-07-12 15:04:19.818703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.112 BaseBdev3 00:15:54.112 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:54.112 15:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.370 BaseBdev4_malloc 00:15:54.370 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:54.628 true 00:15:54.628 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:54.886 [2024-07-12 15:04:20.561969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:54.886 [2024-07-12 15:04:20.562026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.886 [2024-07-12 15:04:20.562054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a551435680 00:15:54.886 [2024-07-12 15:04:20.562063] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.886 [2024-07-12 15:04:20.562724] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.886 [2024-07-12 15:04:20.562748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:54.886 BaseBdev4 00:15:54.886 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:55.143 [2024-07-12 15:04:20.825993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.143 [2024-07-12 15:04:20.826572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.143 [2024-07-12 15:04:20.826598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.143 [2024-07-12 15:04:20.826613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.143 [2024-07-12 15:04:20.826683] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24a551435900 00:15:55.143 [2024-07-12 15:04:20.826698] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:55.143 [2024-07-12 15:04:20.826736] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24a5514a0e20 00:15:55.143 [2024-07-12 15:04:20.826812] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24a551435900 00:15:55.143 [2024-07-12 15:04:20.826817] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24a551435900 00:15:55.143 [2024-07-12 15:04:20.826844] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
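For readers following the xtrace above: the read-error test builds each base bdev as a malloc bdev wrapped by an error-injection bdev (exposed under the EE_ prefix that the next RPC consumes) and a passthru bdev, then combines the four passthru bdevs into the concat array under test. A minimal hand-run sketch of the same sequence against the same RPC socket follows; the RPC shell variable is shorthand introduced here, everything else (paths, names, arguments) is copied from the log, and the first three steps repeat for BaseBdev2 through BaseBdev4:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # one error-injectable base bdev: 32 MB malloc with 512-byte blocks -> error bdev -> passthru
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $RPC bdev_error_create BaseBdev1_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # ...repeat for BaseBdev2..BaseBdev4, then assemble the concat array with a superblock
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s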
00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.143 15:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.401 15:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.401 "name": "raid_bdev1", 00:15:55.401 "uuid": "047f3af6-4060-11ef-b2a4-e9dca065e82e", 00:15:55.401 "strip_size_kb": 64, 00:15:55.401 "state": "online", 00:15:55.401 "raid_level": "concat", 00:15:55.401 "superblock": true, 00:15:55.401 "num_base_bdevs": 4, 00:15:55.401 "num_base_bdevs_discovered": 4, 00:15:55.401 "num_base_bdevs_operational": 4, 00:15:55.401 "base_bdevs_list": [ 00:15:55.401 { 00:15:55.401 "name": "BaseBdev1", 00:15:55.401 "uuid": "6fb20783-571b-0d53-9ea2-12f6fded77c5", 00:15:55.401 "is_configured": true, 00:15:55.401 "data_offset": 2048, 00:15:55.401 "data_size": 63488 00:15:55.401 }, 00:15:55.401 { 00:15:55.401 "name": "BaseBdev2", 00:15:55.401 "uuid": "1be61de2-41f9-0a56-b993-1de1c7f548e7", 00:15:55.401 "is_configured": true, 00:15:55.401 "data_offset": 2048, 00:15:55.401 "data_size": 63488 00:15:55.401 }, 00:15:55.401 { 00:15:55.401 "name": "BaseBdev3", 00:15:55.401 "uuid": "3bc5cbd9-7dfc-e257-acba-8685c7081245", 00:15:55.401 "is_configured": true, 00:15:55.401 "data_offset": 2048, 00:15:55.401 "data_size": 63488 00:15:55.401 }, 00:15:55.401 { 00:15:55.401 "name": "BaseBdev4", 00:15:55.401 "uuid": "1513ada5-9b74-bb5e-88f6-0d8bbbe9300c", 00:15:55.401 "is_configured": true, 00:15:55.401 "data_offset": 2048, 00:15:55.401 "data_size": 63488 00:15:55.401 } 00:15:55.401 ] 00:15:55.401 }' 00:15:55.401 15:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.401 15:04:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.659 15:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:55.659 15:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:55.659 [2024-07-12 15:04:21.470180] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24a5514a0ec0 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
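At this point raid_bdev1 reports "online" with 4 of 4 base bdevs, the queued bdevperf job is started over the RPC socket, and read-failure injection is armed on the error bdev underneath BaseBdev1. Since concat carries no redundancy, the injected errors are expected to reach bdevperf as failed reads (the fail_per_s value checked at the end of the test) while the array itself stays online. A sketch of the same three steps, reusing the RPC shorthand from the previous note:

  # array state before the fault: "state": "online" with num_base_bdevs_discovered == 4
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # bdevperf was launched with -z (wait for RPC) above, so it idles until this call starts the workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  # every read hitting the first base bdev now fails; the errors surface to bdevperf, not to the raid state
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure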
00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.037 15:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.295 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.295 "name": "raid_bdev1", 00:15:57.295 "uuid": "047f3af6-4060-11ef-b2a4-e9dca065e82e", 00:15:57.295 "strip_size_kb": 64, 00:15:57.295 "state": "online", 00:15:57.295 "raid_level": "concat", 00:15:57.295 "superblock": true, 00:15:57.295 "num_base_bdevs": 4, 00:15:57.295 "num_base_bdevs_discovered": 4, 00:15:57.295 "num_base_bdevs_operational": 4, 00:15:57.295 "base_bdevs_list": [ 00:15:57.295 { 00:15:57.296 "name": "BaseBdev1", 00:15:57.296 "uuid": "6fb20783-571b-0d53-9ea2-12f6fded77c5", 00:15:57.296 "is_configured": true, 00:15:57.296 "data_offset": 2048, 00:15:57.296 "data_size": 63488 00:15:57.296 }, 00:15:57.296 { 00:15:57.296 "name": "BaseBdev2", 00:15:57.296 "uuid": "1be61de2-41f9-0a56-b993-1de1c7f548e7", 00:15:57.296 "is_configured": true, 00:15:57.296 "data_offset": 2048, 00:15:57.296 "data_size": 63488 00:15:57.296 }, 00:15:57.296 { 00:15:57.296 "name": "BaseBdev3", 00:15:57.296 "uuid": "3bc5cbd9-7dfc-e257-acba-8685c7081245", 00:15:57.296 "is_configured": true, 00:15:57.296 "data_offset": 2048, 00:15:57.296 "data_size": 63488 00:15:57.296 }, 00:15:57.296 { 00:15:57.296 "name": "BaseBdev4", 00:15:57.296 "uuid": "1513ada5-9b74-bb5e-88f6-0d8bbbe9300c", 00:15:57.296 "is_configured": true, 00:15:57.296 "data_offset": 2048, 00:15:57.296 "data_size": 63488 00:15:57.296 } 00:15:57.296 ] 00:15:57.296 }' 00:15:57.296 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.296 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.554 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:57.812 [2024-07-12 15:04:23.588717] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.812 [2024-07-12 15:04:23.588746] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.812 [2024-07-12 15:04:23.589085] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.812 [2024-07-12 15:04:23.589105] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.812 [2024-07-12 15:04:23.589114] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:15:57.812 [2024-07-12 15:04:23.589119] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24a551435900 name raid_bdev1, state offline 00:15:57.812 0 00:15:57.812 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62740 00:15:57.812 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62740 ']' 00:15:57.812 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62740 00:15:57.812 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62740 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:57.813 killing process with pid 62740 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62740' 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62740 00:15:57.813 [2024-07-12 15:04:23.621675] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.813 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62740 00:15:58.071 [2024-07-12 15:04:23.644489] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.4VDm79ZtA9 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:58.071 00:15:58.071 real 0m7.308s 00:15:58.071 user 0m11.762s 00:15:58.071 sys 0m1.090s 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.071 ************************************ 00:15:58.071 END TEST raid_read_error_test 00:15:58.071 ************************************ 00:15:58.071 15:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.071 15:04:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:58.071 15:04:23 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:58.071 15:04:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:58.071 15:04:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.071 15:04:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.071 ************************************ 00:15:58.071 START 
TEST raid_write_error_test 00:15:58.071 ************************************ 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GNFURpTFVt 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62878 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62878 /var/tmp/spdk-raid.sock 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@829 -- # '[' -z 62878 ']' 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.071 15:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.071 [2024-07-12 15:04:23.883466] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:15:58.071 [2024-07-12 15:04:23.883714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:58.639 EAL: TSC is not safe to use in SMP mode 00:15:58.639 EAL: TSC is not invariant 00:15:58.639 [2024-07-12 15:04:24.443394] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.898 [2024-07-12 15:04:24.529239] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:58.898 [2024-07-12 15:04:24.531348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.898 [2024-07-12 15:04:24.532131] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.898 [2024-07-12 15:04:24.532145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.158 15:04:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.158 15:04:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:59.158 15:04:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:59.158 15:04:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.418 BaseBdev1_malloc 00:15:59.418 15:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:59.677 true 00:15:59.677 15:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:59.936 [2024-07-12 15:04:25.740307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:59.936 [2024-07-12 15:04:25.740376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.936 [2024-07-12 15:04:25.740404] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e7a7b034780 00:15:59.936 [2024-07-12 15:04:25.740413] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.936 [2024-07-12 15:04:25.741068] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.936 [2024-07-12 15:04:25.741094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.936 BaseBdev1 00:15:59.936 15:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:59.936 15:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:00.194 BaseBdev2_malloc 00:16:00.194 15:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:00.453 true 00:16:00.453 15:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:00.712 [2024-07-12 15:04:26.476345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:00.713 [2024-07-12 15:04:26.476408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.713 [2024-07-12 15:04:26.476437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e7a7b034c80 00:16:00.713 [2024-07-12 15:04:26.476446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.713 [2024-07-12 15:04:26.477123] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.713 [2024-07-12 15:04:26.477150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:00.713 BaseBdev2 00:16:00.713 15:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:00.713 15:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:00.972 BaseBdev3_malloc 00:16:00.972 15:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:01.231 true 00:16:01.231 15:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:01.490 [2024-07-12 15:04:27.304411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:01.490 [2024-07-12 15:04:27.304469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.490 [2024-07-12 15:04:27.304495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e7a7b035180 00:16:01.490 [2024-07-12 15:04:27.304505] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.490 [2024-07-12 15:04:27.305163] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.490 [2024-07-12 15:04:27.305187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:01.490 BaseBdev3 00:16:01.749 15:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:01.749 15:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:02.008 BaseBdev4_malloc 00:16:02.008 15:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:02.265 true 00:16:02.265 15:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:02.265 [2024-07-12 15:04:28.092441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:02.265 [2024-07-12 15:04:28.092497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.265 [2024-07-12 15:04:28.092525] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e7a7b035680 00:16:02.265 [2024-07-12 15:04:28.092534] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.265 [2024-07-12 15:04:28.093187] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.265 [2024-07-12 15:04:28.093213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:02.523 BaseBdev4 00:16:02.523 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:02.813 [2024-07-12 15:04:28.356456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.813 [2024-07-12 15:04:28.357042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.813 [2024-07-12 15:04:28.357068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.813 [2024-07-12 15:04:28.357084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.813 [2024-07-12 15:04:28.357153] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e7a7b035900 00:16:02.813 [2024-07-12 15:04:28.357159] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:02.813 [2024-07-12 15:04:28.357196] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e7a7b0a0e20 00:16:02.813 [2024-07-12 15:04:28.357274] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e7a7b035900 00:16:02.813 [2024-07-12 15:04:28.357280] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e7a7b035900 00:16:02.813 [2024-07-12 15:04:28.357309] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.813 "name": "raid_bdev1", 00:16:02.813 "uuid": "08fc49ba-4060-11ef-b2a4-e9dca065e82e", 00:16:02.813 "strip_size_kb": 64, 00:16:02.813 "state": "online", 00:16:02.813 "raid_level": "concat", 00:16:02.813 "superblock": true, 00:16:02.813 "num_base_bdevs": 4, 00:16:02.813 "num_base_bdevs_discovered": 4, 00:16:02.813 "num_base_bdevs_operational": 4, 00:16:02.813 "base_bdevs_list": [ 00:16:02.813 { 00:16:02.813 "name": "BaseBdev1", 00:16:02.813 "uuid": "01a44b28-7315-fd52-abfc-d396abaffacb", 00:16:02.813 "is_configured": true, 00:16:02.813 "data_offset": 2048, 00:16:02.813 "data_size": 63488 00:16:02.813 }, 00:16:02.813 { 00:16:02.813 "name": "BaseBdev2", 00:16:02.813 "uuid": "ca1de8ef-7f1f-e654-a8da-69a760a09469", 00:16:02.813 "is_configured": true, 00:16:02.813 "data_offset": 2048, 00:16:02.813 "data_size": 63488 00:16:02.813 }, 00:16:02.813 { 00:16:02.813 "name": "BaseBdev3", 00:16:02.813 "uuid": "16fd27ef-8c57-6953-9de2-5a77cbb258cd", 00:16:02.813 "is_configured": true, 00:16:02.813 "data_offset": 2048, 00:16:02.813 "data_size": 63488 00:16:02.813 }, 00:16:02.813 { 00:16:02.813 "name": "BaseBdev4", 00:16:02.813 "uuid": "cf1dd2b4-1cd0-b256-93df-c1ce1f88e3e4", 00:16:02.813 "is_configured": true, 00:16:02.813 "data_offset": 2048, 00:16:02.813 "data_size": 63488 00:16:02.813 } 00:16:02.813 ] 00:16:02.813 }' 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.813 15:04:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.381 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:03.381 15:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:03.381 [2024-07-12 15:04:29.100657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e7a7b0a0ec0 00:16:04.318 15:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:04.577 15:04:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.577 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.863 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.863 "name": "raid_bdev1", 00:16:04.863 "uuid": "08fc49ba-4060-11ef-b2a4-e9dca065e82e", 00:16:04.863 "strip_size_kb": 64, 00:16:04.863 "state": "online", 00:16:04.863 "raid_level": "concat", 00:16:04.863 "superblock": true, 00:16:04.863 "num_base_bdevs": 4, 00:16:04.863 "num_base_bdevs_discovered": 4, 00:16:04.863 "num_base_bdevs_operational": 4, 00:16:04.863 "base_bdevs_list": [ 00:16:04.863 { 00:16:04.863 "name": "BaseBdev1", 00:16:04.863 "uuid": "01a44b28-7315-fd52-abfc-d396abaffacb", 00:16:04.863 "is_configured": true, 00:16:04.863 "data_offset": 2048, 00:16:04.863 "data_size": 63488 00:16:04.863 }, 00:16:04.863 { 00:16:04.863 "name": "BaseBdev2", 00:16:04.863 "uuid": "ca1de8ef-7f1f-e654-a8da-69a760a09469", 00:16:04.863 "is_configured": true, 00:16:04.863 "data_offset": 2048, 00:16:04.863 "data_size": 63488 00:16:04.863 }, 00:16:04.863 { 00:16:04.863 "name": "BaseBdev3", 00:16:04.863 "uuid": "16fd27ef-8c57-6953-9de2-5a77cbb258cd", 00:16:04.863 "is_configured": true, 00:16:04.863 "data_offset": 2048, 00:16:04.863 "data_size": 63488 00:16:04.863 }, 00:16:04.863 { 00:16:04.863 "name": "BaseBdev4", 00:16:04.863 "uuid": "cf1dd2b4-1cd0-b256-93df-c1ce1f88e3e4", 00:16:04.863 "is_configured": true, 00:16:04.863 "data_offset": 2048, 00:16:04.863 "data_size": 63488 00:16:04.863 } 00:16:04.863 ] 00:16:04.863 }' 00:16:04.863 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.863 15:04:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.121 15:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:05.379 [2024-07-12 15:04:31.066411] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.379 [2024-07-12 15:04:31.066436] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.379 [2024-07-12 15:04:31.066764] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.379 [2024-07-12 15:04:31.066775] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.379 [2024-07-12 15:04:31.066784] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:16:05.379 [2024-07-12 15:04:31.066788] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e7a7b035900 name raid_bdev1, state offline 00:16:05.379 0 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62878 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62878 ']' 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62878 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62878 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:05.379 killing process with pid 62878 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62878' 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62878 00:16:05.379 [2024-07-12 15:04:31.093590] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.379 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62878 00:16:05.379 [2024-07-12 15:04:31.116395] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GNFURpTFVt 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:16:05.636 00:16:05.636 real 0m7.428s 00:16:05.636 user 0m12.048s 00:16:05.636 sys 0m1.111s 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.636 15:04:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.636 ************************************ 00:16:05.636 END TEST raid_write_error_test 00:16:05.636 ************************************ 00:16:05.636 15:04:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:05.636 15:04:31 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:05.636 15:04:31 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:05.636 15:04:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:05.636 15:04:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.636 15:04:31 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.636 ************************************ 00:16:05.636 START TEST raid_state_function_test 00:16:05.636 ************************************ 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:05.636 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=63014 00:16:05.637 Process raid pid: 63014 00:16:05.637 15:04:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63014' 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 63014 /var/tmp/spdk-raid.sock 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 63014 ']' 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.637 15:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.637 [2024-07-12 15:04:31.359967] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:16:05.637 [2024-07-12 15:04:31.360156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:06.204 EAL: TSC is not safe to use in SMP mode 00:16:06.204 EAL: TSC is not invariant 00:16:06.205 [2024-07-12 15:04:31.869986] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.205 [2024-07-12 15:04:31.953166] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
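Unlike the two error tests, which push I/O through bdevperf, raid_state_function_test only exercises RPC-level state transitions: it runs the bare bdev_svc app and creates a raid1 array (no superblock, strip size 0) whose base bdevs do not exist yet, so the array has to sit in the "configuring" state until all four members are registered. A minimal sketch of that first check, using only RPCs that appear in the log that follows (same RPC shorthand as in the earlier notes):

  # create the array before any of its base bdevs exist
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # expected: "state": "configuring" with num_base_bdevs_discovered == 0
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'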
00:16:06.205 [2024-07-12 15:04:31.955242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.205 [2024-07-12 15:04:31.956019] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.205 [2024-07-12 15:04:31.956035] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.774 15:04:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.774 15:04:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:06.774 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:07.032 [2024-07-12 15:04:32.743788] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.032 [2024-07-12 15:04:32.743847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.032 [2024-07-12 15:04:32.743853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.032 [2024-07-12 15:04:32.743862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.032 [2024-07-12 15:04:32.743865] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:07.032 [2024-07-12 15:04:32.743873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.032 [2024-07-12 15:04:32.743876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:07.032 [2024-07-12 15:04:32.743883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.032 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.290 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.290 "name": "Existed_Raid", 00:16:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.290 "strip_size_kb": 0, 
00:16:07.290 "state": "configuring", 00:16:07.290 "raid_level": "raid1", 00:16:07.290 "superblock": false, 00:16:07.290 "num_base_bdevs": 4, 00:16:07.290 "num_base_bdevs_discovered": 0, 00:16:07.290 "num_base_bdevs_operational": 4, 00:16:07.290 "base_bdevs_list": [ 00:16:07.290 { 00:16:07.290 "name": "BaseBdev1", 00:16:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.290 "is_configured": false, 00:16:07.290 "data_offset": 0, 00:16:07.290 "data_size": 0 00:16:07.290 }, 00:16:07.290 { 00:16:07.290 "name": "BaseBdev2", 00:16:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.290 "is_configured": false, 00:16:07.290 "data_offset": 0, 00:16:07.290 "data_size": 0 00:16:07.290 }, 00:16:07.290 { 00:16:07.290 "name": "BaseBdev3", 00:16:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.290 "is_configured": false, 00:16:07.290 "data_offset": 0, 00:16:07.290 "data_size": 0 00:16:07.290 }, 00:16:07.290 { 00:16:07.290 "name": "BaseBdev4", 00:16:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.290 "is_configured": false, 00:16:07.290 "data_offset": 0, 00:16:07.290 "data_size": 0 00:16:07.290 } 00:16:07.290 ] 00:16:07.290 }' 00:16:07.290 15:04:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.290 15:04:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.547 15:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.805 [2024-07-12 15:04:33.495798] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.805 [2024-07-12 15:04:33.495825] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825c5434500 name Existed_Raid, state configuring 00:16:07.805 15:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:08.064 [2024-07-12 15:04:33.775830] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.064 [2024-07-12 15:04:33.775889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.064 [2024-07-12 15:04:33.775895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.064 [2024-07-12 15:04:33.775904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.064 [2024-07-12 15:04:33.775907] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.064 [2024-07-12 15:04:33.775915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.064 [2024-07-12 15:04:33.775918] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:08.064 [2024-07-12 15:04:33.775925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:08.064 15:04:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.322 [2024-07-12 15:04:34.004902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.322 BaseBdev1 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.322 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.579 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.838 [ 00:16:08.838 { 00:16:08.838 "name": "BaseBdev1", 00:16:08.838 "aliases": [ 00:16:08.838 "0c5a0346-4060-11ef-b2a4-e9dca065e82e" 00:16:08.838 ], 00:16:08.838 "product_name": "Malloc disk", 00:16:08.838 "block_size": 512, 00:16:08.838 "num_blocks": 65536, 00:16:08.838 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:08.838 "assigned_rate_limits": { 00:16:08.838 "rw_ios_per_sec": 0, 00:16:08.838 "rw_mbytes_per_sec": 0, 00:16:08.838 "r_mbytes_per_sec": 0, 00:16:08.838 "w_mbytes_per_sec": 0 00:16:08.838 }, 00:16:08.838 "claimed": true, 00:16:08.838 "claim_type": "exclusive_write", 00:16:08.838 "zoned": false, 00:16:08.838 "supported_io_types": { 00:16:08.838 "read": true, 00:16:08.838 "write": true, 00:16:08.838 "unmap": true, 00:16:08.838 "flush": true, 00:16:08.838 "reset": true, 00:16:08.838 "nvme_admin": false, 00:16:08.838 "nvme_io": false, 00:16:08.838 "nvme_io_md": false, 00:16:08.838 "write_zeroes": true, 00:16:08.838 "zcopy": true, 00:16:08.838 "get_zone_info": false, 00:16:08.838 "zone_management": false, 00:16:08.838 "zone_append": false, 00:16:08.838 "compare": false, 00:16:08.838 "compare_and_write": false, 00:16:08.838 "abort": true, 00:16:08.838 "seek_hole": false, 00:16:08.838 "seek_data": false, 00:16:08.838 "copy": true, 00:16:08.838 "nvme_iov_md": false 00:16:08.838 }, 00:16:08.838 "memory_domains": [ 00:16:08.838 { 00:16:08.838 "dma_device_id": "system", 00:16:08.838 "dma_device_type": 1 00:16:08.838 }, 00:16:08.838 { 00:16:08.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.838 "dma_device_type": 2 00:16:08.838 } 00:16:08.838 ], 00:16:08.838 "driver_specific": {} 00:16:08.838 } 00:16:08.838 ] 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.838 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.097 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.097 "name": "Existed_Raid", 00:16:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.097 "strip_size_kb": 0, 00:16:09.097 "state": "configuring", 00:16:09.097 "raid_level": "raid1", 00:16:09.097 "superblock": false, 00:16:09.097 "num_base_bdevs": 4, 00:16:09.097 "num_base_bdevs_discovered": 1, 00:16:09.097 "num_base_bdevs_operational": 4, 00:16:09.097 "base_bdevs_list": [ 00:16:09.097 { 00:16:09.097 "name": "BaseBdev1", 00:16:09.097 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:09.097 "is_configured": true, 00:16:09.097 "data_offset": 0, 00:16:09.097 "data_size": 65536 00:16:09.097 }, 00:16:09.097 { 00:16:09.097 "name": "BaseBdev2", 00:16:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.097 "is_configured": false, 00:16:09.097 "data_offset": 0, 00:16:09.097 "data_size": 0 00:16:09.097 }, 00:16:09.097 { 00:16:09.097 "name": "BaseBdev3", 00:16:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.097 "is_configured": false, 00:16:09.097 "data_offset": 0, 00:16:09.097 "data_size": 0 00:16:09.097 }, 00:16:09.097 { 00:16:09.097 "name": "BaseBdev4", 00:16:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.097 "is_configured": false, 00:16:09.097 "data_offset": 0, 00:16:09.097 "data_size": 0 00:16:09.097 } 00:16:09.097 ] 00:16:09.097 }' 00:16:09.097 15:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.097 15:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.355 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:09.633 [2024-07-12 15:04:35.327882] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.633 [2024-07-12 15:04:35.327917] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825c5434500 name Existed_Raid, state configuring 00:16:09.633 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:09.909 [2024-07-12 15:04:35.563912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.909 [2024-07-12 15:04:35.564745] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.909 [2024-07-12 15:04:35.564787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.909 [2024-07-12 15:04:35.564793] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.909 
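The pattern that repeats from here on: the test re-creates Existed_Raid, adds base bdevs one at a time with bdev_malloc_create, and after each step checks that num_base_bdevs_discovered has grown while the state remains "configuring"; only once all four members exist should the array leave that state. One iteration of that loop, sketched with the RPCs already shown in the log (names and sizes copied from it):

  # add the next base bdev the configuring array is waiting for
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # expected: still "configuring", with one more base bdev discovered
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'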
[2024-07-12 15:04:35.564801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.909 [2024-07-12 15:04:35.564805] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.909 [2024-07-12 15:04:35.564812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.909 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.168 15:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.168 "name": "Existed_Raid", 00:16:10.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.168 "strip_size_kb": 0, 00:16:10.168 "state": "configuring", 00:16:10.168 "raid_level": "raid1", 00:16:10.168 "superblock": false, 00:16:10.168 "num_base_bdevs": 4, 00:16:10.168 "num_base_bdevs_discovered": 1, 00:16:10.168 "num_base_bdevs_operational": 4, 00:16:10.168 "base_bdevs_list": [ 00:16:10.168 { 00:16:10.168 "name": "BaseBdev1", 00:16:10.168 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:10.168 "is_configured": true, 00:16:10.168 "data_offset": 0, 00:16:10.168 "data_size": 65536 00:16:10.168 }, 00:16:10.168 { 00:16:10.168 "name": "BaseBdev2", 00:16:10.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.168 "is_configured": false, 00:16:10.168 "data_offset": 0, 00:16:10.168 "data_size": 0 00:16:10.168 }, 00:16:10.168 { 00:16:10.168 "name": "BaseBdev3", 00:16:10.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.168 "is_configured": false, 00:16:10.168 "data_offset": 0, 00:16:10.168 "data_size": 0 00:16:10.168 }, 00:16:10.168 { 00:16:10.168 "name": "BaseBdev4", 00:16:10.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.168 "is_configured": false, 00:16:10.168 "data_offset": 0, 00:16:10.168 "data_size": 0 00:16:10.168 } 00:16:10.168 ] 00:16:10.168 }' 00:16:10.168 15:04:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.168 15:04:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.426 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.685 [2024-07-12 15:04:36.440076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.685 BaseBdev2 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.685 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.944 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.204 [ 00:16:11.204 { 00:16:11.204 "name": "BaseBdev2", 00:16:11.204 "aliases": [ 00:16:11.204 "0dcdbb49-4060-11ef-b2a4-e9dca065e82e" 00:16:11.204 ], 00:16:11.204 "product_name": "Malloc disk", 00:16:11.204 "block_size": 512, 00:16:11.204 "num_blocks": 65536, 00:16:11.204 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:11.204 "assigned_rate_limits": { 00:16:11.204 "rw_ios_per_sec": 0, 00:16:11.204 "rw_mbytes_per_sec": 0, 00:16:11.204 "r_mbytes_per_sec": 0, 00:16:11.204 "w_mbytes_per_sec": 0 00:16:11.204 }, 00:16:11.204 "claimed": true, 00:16:11.204 "claim_type": "exclusive_write", 00:16:11.204 "zoned": false, 00:16:11.204 "supported_io_types": { 00:16:11.204 "read": true, 00:16:11.204 "write": true, 00:16:11.204 "unmap": true, 00:16:11.204 "flush": true, 00:16:11.204 "reset": true, 00:16:11.204 "nvme_admin": false, 00:16:11.204 "nvme_io": false, 00:16:11.204 "nvme_io_md": false, 00:16:11.204 "write_zeroes": true, 00:16:11.204 "zcopy": true, 00:16:11.204 "get_zone_info": false, 00:16:11.204 "zone_management": false, 00:16:11.204 "zone_append": false, 00:16:11.204 "compare": false, 00:16:11.204 "compare_and_write": false, 00:16:11.204 "abort": true, 00:16:11.204 "seek_hole": false, 00:16:11.204 "seek_data": false, 00:16:11.204 "copy": true, 00:16:11.204 "nvme_iov_md": false 00:16:11.204 }, 00:16:11.204 "memory_domains": [ 00:16:11.204 { 00:16:11.204 "dma_device_id": "system", 00:16:11.204 "dma_device_type": 1 00:16:11.204 }, 00:16:11.204 { 00:16:11.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.204 "dma_device_type": 2 00:16:11.204 } 00:16:11.204 ], 00:16:11.204 "driver_specific": {} 00:16:11.204 } 00:16:11.204 ] 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:11.204 15:04:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.204 15:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.462 15:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.462 "name": "Existed_Raid", 00:16:11.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.462 "strip_size_kb": 0, 00:16:11.462 "state": "configuring", 00:16:11.462 "raid_level": "raid1", 00:16:11.462 "superblock": false, 00:16:11.462 "num_base_bdevs": 4, 00:16:11.462 "num_base_bdevs_discovered": 2, 00:16:11.462 "num_base_bdevs_operational": 4, 00:16:11.462 "base_bdevs_list": [ 00:16:11.462 { 00:16:11.462 "name": "BaseBdev1", 00:16:11.462 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:11.462 "is_configured": true, 00:16:11.462 "data_offset": 0, 00:16:11.462 "data_size": 65536 00:16:11.462 }, 00:16:11.462 { 00:16:11.462 "name": "BaseBdev2", 00:16:11.462 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:11.462 "is_configured": true, 00:16:11.462 "data_offset": 0, 00:16:11.462 "data_size": 65536 00:16:11.462 }, 00:16:11.462 { 00:16:11.462 "name": "BaseBdev3", 00:16:11.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.462 "is_configured": false, 00:16:11.462 "data_offset": 0, 00:16:11.462 "data_size": 0 00:16:11.462 }, 00:16:11.462 { 00:16:11.462 "name": "BaseBdev4", 00:16:11.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.462 "is_configured": false, 00:16:11.462 "data_offset": 0, 00:16:11.462 "data_size": 0 00:16:11.462 } 00:16:11.462 ] 00:16:11.462 }' 00:16:11.462 15:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.462 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.721 15:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.979 [2024-07-12 15:04:37.764149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.979 BaseBdev3 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:11.979 15:04:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.979 15:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.237 15:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.495 [ 00:16:12.495 { 00:16:12.495 "name": "BaseBdev3", 00:16:12.495 "aliases": [ 00:16:12.495 "0e97c4d8-4060-11ef-b2a4-e9dca065e82e" 00:16:12.495 ], 00:16:12.495 "product_name": "Malloc disk", 00:16:12.495 "block_size": 512, 00:16:12.495 "num_blocks": 65536, 00:16:12.495 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:12.495 "assigned_rate_limits": { 00:16:12.495 "rw_ios_per_sec": 0, 00:16:12.495 "rw_mbytes_per_sec": 0, 00:16:12.495 "r_mbytes_per_sec": 0, 00:16:12.495 "w_mbytes_per_sec": 0 00:16:12.495 }, 00:16:12.495 "claimed": true, 00:16:12.495 "claim_type": "exclusive_write", 00:16:12.495 "zoned": false, 00:16:12.495 "supported_io_types": { 00:16:12.495 "read": true, 00:16:12.495 "write": true, 00:16:12.495 "unmap": true, 00:16:12.495 "flush": true, 00:16:12.495 "reset": true, 00:16:12.495 "nvme_admin": false, 00:16:12.495 "nvme_io": false, 00:16:12.495 "nvme_io_md": false, 00:16:12.495 "write_zeroes": true, 00:16:12.495 "zcopy": true, 00:16:12.495 "get_zone_info": false, 00:16:12.495 "zone_management": false, 00:16:12.495 "zone_append": false, 00:16:12.495 "compare": false, 00:16:12.495 "compare_and_write": false, 00:16:12.495 "abort": true, 00:16:12.495 "seek_hole": false, 00:16:12.495 "seek_data": false, 00:16:12.495 "copy": true, 00:16:12.495 "nvme_iov_md": false 00:16:12.495 }, 00:16:12.495 "memory_domains": [ 00:16:12.495 { 00:16:12.495 "dma_device_id": "system", 00:16:12.495 "dma_device_type": 1 00:16:12.495 }, 00:16:12.495 { 00:16:12.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.495 "dma_device_type": 2 00:16:12.495 } 00:16:12.495 ], 00:16:12.495 "driver_specific": {} 00:16:12.495 } 00:16:12.495 ] 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:12.495 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:12.495 15:04:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.496 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.759 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.759 "name": "Existed_Raid", 00:16:12.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.759 "strip_size_kb": 0, 00:16:12.759 "state": "configuring", 00:16:12.759 "raid_level": "raid1", 00:16:12.759 "superblock": false, 00:16:12.759 "num_base_bdevs": 4, 00:16:12.759 "num_base_bdevs_discovered": 3, 00:16:12.759 "num_base_bdevs_operational": 4, 00:16:12.759 "base_bdevs_list": [ 00:16:12.759 { 00:16:12.759 "name": "BaseBdev1", 00:16:12.759 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:12.759 "is_configured": true, 00:16:12.759 "data_offset": 0, 00:16:12.759 "data_size": 65536 00:16:12.759 }, 00:16:12.759 { 00:16:12.759 "name": "BaseBdev2", 00:16:12.759 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:12.759 "is_configured": true, 00:16:12.759 "data_offset": 0, 00:16:12.759 "data_size": 65536 00:16:12.759 }, 00:16:12.759 { 00:16:12.759 "name": "BaseBdev3", 00:16:12.759 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:12.759 "is_configured": true, 00:16:12.759 "data_offset": 0, 00:16:12.759 "data_size": 65536 00:16:12.759 }, 00:16:12.759 { 00:16:12.759 "name": "BaseBdev4", 00:16:12.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.759 "is_configured": false, 00:16:12.759 "data_offset": 0, 00:16:12.759 "data_size": 0 00:16:12.759 } 00:16:12.759 ] 00:16:12.759 }' 00:16:12.759 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.759 15:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.336 15:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:13.336 [2024-07-12 15:04:39.100254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.336 [2024-07-12 15:04:39.100287] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2825c5434a00 00:16:13.336 [2024-07-12 15:04:39.100291] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:13.336 [2024-07-12 15:04:39.100331] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2825c5497e20 00:16:13.336 [2024-07-12 15:04:39.100431] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2825c5434a00 00:16:13.336 [2024-07-12 15:04:39.100436] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2825c5434a00 00:16:13.336 [2024-07-12 15:04:39.100469] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:13.336 BaseBdev4 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.336 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.594 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:13.854 [ 00:16:13.854 { 00:16:13.854 "name": "BaseBdev4", 00:16:13.854 "aliases": [ 00:16:13.854 "0f63a2da-4060-11ef-b2a4-e9dca065e82e" 00:16:13.854 ], 00:16:13.854 "product_name": "Malloc disk", 00:16:13.854 "block_size": 512, 00:16:13.854 "num_blocks": 65536, 00:16:13.854 "uuid": "0f63a2da-4060-11ef-b2a4-e9dca065e82e", 00:16:13.854 "assigned_rate_limits": { 00:16:13.854 "rw_ios_per_sec": 0, 00:16:13.854 "rw_mbytes_per_sec": 0, 00:16:13.854 "r_mbytes_per_sec": 0, 00:16:13.854 "w_mbytes_per_sec": 0 00:16:13.854 }, 00:16:13.854 "claimed": true, 00:16:13.854 "claim_type": "exclusive_write", 00:16:13.854 "zoned": false, 00:16:13.854 "supported_io_types": { 00:16:13.854 "read": true, 00:16:13.854 "write": true, 00:16:13.854 "unmap": true, 00:16:13.854 "flush": true, 00:16:13.854 "reset": true, 00:16:13.854 "nvme_admin": false, 00:16:13.854 "nvme_io": false, 00:16:13.854 "nvme_io_md": false, 00:16:13.854 "write_zeroes": true, 00:16:13.854 "zcopy": true, 00:16:13.854 "get_zone_info": false, 00:16:13.854 "zone_management": false, 00:16:13.854 "zone_append": false, 00:16:13.854 "compare": false, 00:16:13.854 "compare_and_write": false, 00:16:13.854 "abort": true, 00:16:13.854 "seek_hole": false, 00:16:13.854 "seek_data": false, 00:16:13.854 "copy": true, 00:16:13.854 "nvme_iov_md": false 00:16:13.854 }, 00:16:13.854 "memory_domains": [ 00:16:13.854 { 00:16:13.854 "dma_device_id": "system", 00:16:13.854 "dma_device_type": 1 00:16:13.854 }, 00:16:13.854 { 00:16:13.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.854 "dma_device_type": 2 00:16:13.854 } 00:16:13.854 ], 00:16:13.854 "driver_specific": {} 00:16:13.854 } 00:16:13.854 ] 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.854 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.114 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.114 "name": "Existed_Raid", 00:16:14.114 "uuid": "0f63abff-4060-11ef-b2a4-e9dca065e82e", 00:16:14.114 "strip_size_kb": 0, 00:16:14.114 "state": "online", 00:16:14.114 "raid_level": "raid1", 00:16:14.114 "superblock": false, 00:16:14.114 "num_base_bdevs": 4, 00:16:14.114 "num_base_bdevs_discovered": 4, 00:16:14.114 "num_base_bdevs_operational": 4, 00:16:14.114 "base_bdevs_list": [ 00:16:14.114 { 00:16:14.114 "name": "BaseBdev1", 00:16:14.114 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:14.114 "is_configured": true, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 65536 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev2", 00:16:14.114 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:14.114 "is_configured": true, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 65536 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev3", 00:16:14.114 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:14.114 "is_configured": true, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 65536 00:16:14.114 }, 00:16:14.114 { 00:16:14.114 "name": "BaseBdev4", 00:16:14.114 "uuid": "0f63a2da-4060-11ef-b2a4-e9dca065e82e", 00:16:14.114 "is_configured": true, 00:16:14.114 "data_offset": 0, 00:16:14.114 "data_size": 65536 00:16:14.114 } 00:16:14.114 ] 00:16:14.114 }' 00:16:14.114 15:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.114 15:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:14.681 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:14.682 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:14.682 15:04:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:14.682 [2024-07-12 15:04:40.488178] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.682 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:14.682 "name": "Existed_Raid", 00:16:14.682 "aliases": [ 00:16:14.682 "0f63abff-4060-11ef-b2a4-e9dca065e82e" 00:16:14.682 ], 00:16:14.682 "product_name": "Raid Volume", 00:16:14.682 "block_size": 512, 00:16:14.682 "num_blocks": 65536, 00:16:14.682 "uuid": "0f63abff-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "assigned_rate_limits": { 00:16:14.682 "rw_ios_per_sec": 0, 00:16:14.682 "rw_mbytes_per_sec": 0, 00:16:14.682 "r_mbytes_per_sec": 0, 00:16:14.682 "w_mbytes_per_sec": 0 00:16:14.682 }, 00:16:14.682 "claimed": false, 00:16:14.682 "zoned": false, 00:16:14.682 "supported_io_types": { 00:16:14.682 "read": true, 00:16:14.682 "write": true, 00:16:14.682 "unmap": false, 00:16:14.682 "flush": false, 00:16:14.682 "reset": true, 00:16:14.682 "nvme_admin": false, 00:16:14.682 "nvme_io": false, 00:16:14.682 "nvme_io_md": false, 00:16:14.682 "write_zeroes": true, 00:16:14.682 "zcopy": false, 00:16:14.682 "get_zone_info": false, 00:16:14.682 "zone_management": false, 00:16:14.682 "zone_append": false, 00:16:14.682 "compare": false, 00:16:14.682 "compare_and_write": false, 00:16:14.682 "abort": false, 00:16:14.682 "seek_hole": false, 00:16:14.682 "seek_data": false, 00:16:14.682 "copy": false, 00:16:14.682 "nvme_iov_md": false 00:16:14.682 }, 00:16:14.682 "memory_domains": [ 00:16:14.682 { 00:16:14.682 "dma_device_id": "system", 00:16:14.682 "dma_device_type": 1 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.682 "dma_device_type": 2 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "system", 00:16:14.682 "dma_device_type": 1 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.682 "dma_device_type": 2 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "system", 00:16:14.682 "dma_device_type": 1 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.682 "dma_device_type": 2 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "system", 00:16:14.682 "dma_device_type": 1 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.682 "dma_device_type": 2 00:16:14.682 } 00:16:14.682 ], 00:16:14.682 "driver_specific": { 00:16:14.682 "raid": { 00:16:14.682 "uuid": "0f63abff-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "strip_size_kb": 0, 00:16:14.682 "state": "online", 00:16:14.682 "raid_level": "raid1", 00:16:14.682 "superblock": false, 00:16:14.682 "num_base_bdevs": 4, 00:16:14.682 "num_base_bdevs_discovered": 4, 00:16:14.682 "num_base_bdevs_operational": 4, 00:16:14.682 "base_bdevs_list": [ 00:16:14.682 { 00:16:14.682 "name": "BaseBdev1", 00:16:14.682 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev2", 00:16:14.682 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 }, 00:16:14.682 { 00:16:14.682 "name": "BaseBdev3", 00:16:14.682 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 }, 
00:16:14.682 { 00:16:14.682 "name": "BaseBdev4", 00:16:14.682 "uuid": "0f63a2da-4060-11ef-b2a4-e9dca065e82e", 00:16:14.682 "is_configured": true, 00:16:14.682 "data_offset": 0, 00:16:14.682 "data_size": 65536 00:16:14.682 } 00:16:14.682 ] 00:16:14.682 } 00:16:14.682 } 00:16:14.682 }' 00:16:14.682 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:14.941 BaseBdev2 00:16:14.941 BaseBdev3 00:16:14.941 BaseBdev4' 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:14.941 "name": "BaseBdev1", 00:16:14.941 "aliases": [ 00:16:14.941 "0c5a0346-4060-11ef-b2a4-e9dca065e82e" 00:16:14.941 ], 00:16:14.941 "product_name": "Malloc disk", 00:16:14.941 "block_size": 512, 00:16:14.941 "num_blocks": 65536, 00:16:14.941 "uuid": "0c5a0346-4060-11ef-b2a4-e9dca065e82e", 00:16:14.941 "assigned_rate_limits": { 00:16:14.941 "rw_ios_per_sec": 0, 00:16:14.941 "rw_mbytes_per_sec": 0, 00:16:14.941 "r_mbytes_per_sec": 0, 00:16:14.941 "w_mbytes_per_sec": 0 00:16:14.941 }, 00:16:14.941 "claimed": true, 00:16:14.941 "claim_type": "exclusive_write", 00:16:14.941 "zoned": false, 00:16:14.941 "supported_io_types": { 00:16:14.941 "read": true, 00:16:14.941 "write": true, 00:16:14.941 "unmap": true, 00:16:14.941 "flush": true, 00:16:14.941 "reset": true, 00:16:14.941 "nvme_admin": false, 00:16:14.941 "nvme_io": false, 00:16:14.941 "nvme_io_md": false, 00:16:14.941 "write_zeroes": true, 00:16:14.941 "zcopy": true, 00:16:14.941 "get_zone_info": false, 00:16:14.941 "zone_management": false, 00:16:14.941 "zone_append": false, 00:16:14.941 "compare": false, 00:16:14.941 "compare_and_write": false, 00:16:14.941 "abort": true, 00:16:14.941 "seek_hole": false, 00:16:14.941 "seek_data": false, 00:16:14.941 "copy": true, 00:16:14.941 "nvme_iov_md": false 00:16:14.941 }, 00:16:14.941 "memory_domains": [ 00:16:14.941 { 00:16:14.941 "dma_device_id": "system", 00:16:14.941 "dma_device_type": 1 00:16:14.941 }, 00:16:14.941 { 00:16:14.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.941 "dma_device_type": 2 00:16:14.941 } 00:16:14.941 ], 00:16:14.941 "driver_specific": {} 00:16:14.941 }' 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.941 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.942 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:14.942 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:15.201 15:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.459 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.460 "name": "BaseBdev2", 00:16:15.460 "aliases": [ 00:16:15.460 "0dcdbb49-4060-11ef-b2a4-e9dca065e82e" 00:16:15.460 ], 00:16:15.460 "product_name": "Malloc disk", 00:16:15.460 "block_size": 512, 00:16:15.460 "num_blocks": 65536, 00:16:15.460 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:15.460 "assigned_rate_limits": { 00:16:15.460 "rw_ios_per_sec": 0, 00:16:15.460 "rw_mbytes_per_sec": 0, 00:16:15.460 "r_mbytes_per_sec": 0, 00:16:15.460 "w_mbytes_per_sec": 0 00:16:15.460 }, 00:16:15.460 "claimed": true, 00:16:15.460 "claim_type": "exclusive_write", 00:16:15.460 "zoned": false, 00:16:15.460 "supported_io_types": { 00:16:15.460 "read": true, 00:16:15.460 "write": true, 00:16:15.460 "unmap": true, 00:16:15.460 "flush": true, 00:16:15.460 "reset": true, 00:16:15.460 "nvme_admin": false, 00:16:15.460 "nvme_io": false, 00:16:15.460 "nvme_io_md": false, 00:16:15.460 "write_zeroes": true, 00:16:15.460 "zcopy": true, 00:16:15.460 "get_zone_info": false, 00:16:15.460 "zone_management": false, 00:16:15.460 "zone_append": false, 00:16:15.460 "compare": false, 00:16:15.460 "compare_and_write": false, 00:16:15.460 "abort": true, 00:16:15.460 "seek_hole": false, 00:16:15.460 "seek_data": false, 00:16:15.460 "copy": true, 00:16:15.460 "nvme_iov_md": false 00:16:15.460 }, 00:16:15.460 "memory_domains": [ 00:16:15.460 { 00:16:15.460 "dma_device_id": "system", 00:16:15.460 "dma_device_type": 1 00:16:15.460 }, 00:16:15.460 { 00:16:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.460 "dma_device_type": 2 00:16:15.460 } 00:16:15.460 ], 00:16:15.460 "driver_specific": {} 00:16:15.460 }' 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.460 15:04:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:15.460 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.718 "name": "BaseBdev3", 00:16:15.718 "aliases": [ 00:16:15.718 "0e97c4d8-4060-11ef-b2a4-e9dca065e82e" 00:16:15.718 ], 00:16:15.718 "product_name": "Malloc disk", 00:16:15.718 "block_size": 512, 00:16:15.718 "num_blocks": 65536, 00:16:15.718 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:15.718 "assigned_rate_limits": { 00:16:15.718 "rw_ios_per_sec": 0, 00:16:15.718 "rw_mbytes_per_sec": 0, 00:16:15.718 "r_mbytes_per_sec": 0, 00:16:15.718 "w_mbytes_per_sec": 0 00:16:15.718 }, 00:16:15.718 "claimed": true, 00:16:15.718 "claim_type": "exclusive_write", 00:16:15.718 "zoned": false, 00:16:15.718 "supported_io_types": { 00:16:15.718 "read": true, 00:16:15.718 "write": true, 00:16:15.718 "unmap": true, 00:16:15.718 "flush": true, 00:16:15.718 "reset": true, 00:16:15.718 "nvme_admin": false, 00:16:15.718 "nvme_io": false, 00:16:15.718 "nvme_io_md": false, 00:16:15.718 "write_zeroes": true, 00:16:15.718 "zcopy": true, 00:16:15.718 "get_zone_info": false, 00:16:15.718 "zone_management": false, 00:16:15.718 "zone_append": false, 00:16:15.718 "compare": false, 00:16:15.718 "compare_and_write": false, 00:16:15.718 "abort": true, 00:16:15.718 "seek_hole": false, 00:16:15.718 "seek_data": false, 00:16:15.718 "copy": true, 00:16:15.718 "nvme_iov_md": false 00:16:15.718 }, 00:16:15.718 "memory_domains": [ 00:16:15.718 { 00:16:15.718 "dma_device_id": "system", 00:16:15.718 "dma_device_type": 1 00:16:15.718 }, 00:16:15.718 { 00:16:15.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.718 "dma_device_type": 2 00:16:15.718 } 00:16:15.718 ], 00:16:15.718 "driver_specific": {} 00:16:15.718 }' 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.718 15:04:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.718 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.984 "name": "BaseBdev4", 00:16:15.984 "aliases": [ 00:16:15.984 "0f63a2da-4060-11ef-b2a4-e9dca065e82e" 00:16:15.984 ], 00:16:15.984 "product_name": "Malloc disk", 00:16:15.984 "block_size": 512, 00:16:15.984 "num_blocks": 65536, 00:16:15.984 "uuid": "0f63a2da-4060-11ef-b2a4-e9dca065e82e", 00:16:15.984 "assigned_rate_limits": { 00:16:15.984 "rw_ios_per_sec": 0, 00:16:15.984 "rw_mbytes_per_sec": 0, 00:16:15.984 "r_mbytes_per_sec": 0, 00:16:15.984 "w_mbytes_per_sec": 0 00:16:15.984 }, 00:16:15.984 "claimed": true, 00:16:15.984 "claim_type": "exclusive_write", 00:16:15.984 "zoned": false, 00:16:15.984 "supported_io_types": { 00:16:15.984 "read": true, 00:16:15.984 "write": true, 00:16:15.984 "unmap": true, 00:16:15.984 "flush": true, 00:16:15.984 "reset": true, 00:16:15.984 "nvme_admin": false, 00:16:15.984 "nvme_io": false, 00:16:15.984 "nvme_io_md": false, 00:16:15.984 "write_zeroes": true, 00:16:15.984 "zcopy": true, 00:16:15.984 "get_zone_info": false, 00:16:15.984 "zone_management": false, 00:16:15.984 "zone_append": false, 00:16:15.984 "compare": false, 00:16:15.984 "compare_and_write": false, 00:16:15.984 "abort": true, 00:16:15.984 "seek_hole": false, 00:16:15.984 "seek_data": false, 00:16:15.984 "copy": true, 00:16:15.984 "nvme_iov_md": false 00:16:15.984 }, 00:16:15.984 "memory_domains": [ 00:16:15.984 { 00:16:15.984 "dma_device_id": "system", 00:16:15.984 "dma_device_type": 1 00:16:15.984 }, 00:16:15.984 { 00:16:15.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.984 "dma_device_type": 2 00:16:15.984 } 00:16:15.984 ], 00:16:15.984 "driver_specific": {} 00:16:15.984 }' 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.984 15:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:16.243 [2024-07-12 15:04:42.000330] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.243 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.501 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.501 "name": "Existed_Raid", 00:16:16.501 "uuid": "0f63abff-4060-11ef-b2a4-e9dca065e82e", 00:16:16.501 "strip_size_kb": 0, 00:16:16.501 "state": "online", 00:16:16.501 "raid_level": "raid1", 00:16:16.501 "superblock": false, 00:16:16.501 "num_base_bdevs": 4, 00:16:16.501 "num_base_bdevs_discovered": 3, 00:16:16.501 "num_base_bdevs_operational": 3, 00:16:16.501 "base_bdevs_list": [ 00:16:16.501 { 00:16:16.501 "name": null, 00:16:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.501 "is_configured": false, 00:16:16.501 "data_offset": 0, 00:16:16.501 "data_size": 65536 00:16:16.501 }, 00:16:16.501 { 00:16:16.501 "name": "BaseBdev2", 00:16:16.501 "uuid": "0dcdbb49-4060-11ef-b2a4-e9dca065e82e", 00:16:16.501 "is_configured": true, 00:16:16.501 "data_offset": 0, 00:16:16.501 "data_size": 65536 00:16:16.501 }, 00:16:16.501 { 00:16:16.501 "name": "BaseBdev3", 00:16:16.501 "uuid": "0e97c4d8-4060-11ef-b2a4-e9dca065e82e", 00:16:16.501 "is_configured": true, 00:16:16.501 "data_offset": 0, 00:16:16.501 "data_size": 65536 00:16:16.501 }, 00:16:16.501 { 00:16:16.501 "name": "BaseBdev4", 00:16:16.501 "uuid": "0f63a2da-4060-11ef-b2a4-e9dca065e82e", 00:16:16.501 "is_configured": true, 00:16:16.501 "data_offset": 0, 00:16:16.501 
"data_size": 65536 00:16:16.501 } 00:16:16.501 ] 00:16:16.501 }' 00:16:16.501 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.501 15:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.759 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:16.759 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:16.759 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:16.759 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.017 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:17.017 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.017 15:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:17.275 [2024-07-12 15:04:43.030351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.275 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:17.275 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.275 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.275 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:17.533 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:17.533 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.533 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:17.791 [2024-07-12 15:04:43.556403] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:17.791 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:17.791 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.791 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:17.791 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.049 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:18.049 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:18.049 15:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:18.307 [2024-07-12 15:04:44.106589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:18.307 [2024-07-12 15:04:44.106628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.307 [2024-07-12 15:04:44.112484] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.307 [2024-07-12 15:04:44.112503] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.307 [2024-07-12 15:04:44.112507] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825c5434a00 name Existed_Raid, state offline 00:16:18.307 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:18.307 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:18.307 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.307 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.873 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:18.873 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:18.873 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:18.873 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:18.873 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.874 BaseBdev2 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:18.874 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.132 15:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.455 [ 00:16:19.455 { 00:16:19.455 "name": "BaseBdev2", 00:16:19.455 "aliases": [ 00:16:19.455 "12b1a67d-4060-11ef-b2a4-e9dca065e82e" 00:16:19.455 ], 00:16:19.455 "product_name": "Malloc disk", 00:16:19.455 "block_size": 512, 00:16:19.455 "num_blocks": 65536, 00:16:19.455 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:19.455 "assigned_rate_limits": { 00:16:19.455 "rw_ios_per_sec": 0, 00:16:19.455 "rw_mbytes_per_sec": 0, 00:16:19.455 "r_mbytes_per_sec": 0, 00:16:19.455 "w_mbytes_per_sec": 0 00:16:19.455 }, 00:16:19.455 "claimed": false, 00:16:19.455 "zoned": false, 00:16:19.455 "supported_io_types": { 00:16:19.455 "read": true, 00:16:19.455 "write": true, 00:16:19.455 "unmap": true, 00:16:19.455 "flush": true, 00:16:19.455 "reset": true, 00:16:19.455 "nvme_admin": false, 00:16:19.455 "nvme_io": false, 00:16:19.455 "nvme_io_md": false, 00:16:19.455 
"write_zeroes": true, 00:16:19.455 "zcopy": true, 00:16:19.455 "get_zone_info": false, 00:16:19.455 "zone_management": false, 00:16:19.455 "zone_append": false, 00:16:19.455 "compare": false, 00:16:19.455 "compare_and_write": false, 00:16:19.455 "abort": true, 00:16:19.455 "seek_hole": false, 00:16:19.455 "seek_data": false, 00:16:19.455 "copy": true, 00:16:19.455 "nvme_iov_md": false 00:16:19.455 }, 00:16:19.455 "memory_domains": [ 00:16:19.455 { 00:16:19.455 "dma_device_id": "system", 00:16:19.455 "dma_device_type": 1 00:16:19.455 }, 00:16:19.455 { 00:16:19.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.455 "dma_device_type": 2 00:16:19.455 } 00:16:19.455 ], 00:16:19.455 "driver_specific": {} 00:16:19.455 } 00:16:19.455 ] 00:16:19.455 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:19.455 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:19.455 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:19.455 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.714 BaseBdev3 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:19.714 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.972 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.231 [ 00:16:20.231 { 00:16:20.231 "name": "BaseBdev3", 00:16:20.231 "aliases": [ 00:16:20.231 "1326d7e4-4060-11ef-b2a4-e9dca065e82e" 00:16:20.231 ], 00:16:20.231 "product_name": "Malloc disk", 00:16:20.231 "block_size": 512, 00:16:20.231 "num_blocks": 65536, 00:16:20.231 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:20.231 "assigned_rate_limits": { 00:16:20.231 "rw_ios_per_sec": 0, 00:16:20.231 "rw_mbytes_per_sec": 0, 00:16:20.231 "r_mbytes_per_sec": 0, 00:16:20.231 "w_mbytes_per_sec": 0 00:16:20.231 }, 00:16:20.231 "claimed": false, 00:16:20.231 "zoned": false, 00:16:20.231 "supported_io_types": { 00:16:20.231 "read": true, 00:16:20.231 "write": true, 00:16:20.231 "unmap": true, 00:16:20.231 "flush": true, 00:16:20.231 "reset": true, 00:16:20.231 "nvme_admin": false, 00:16:20.231 "nvme_io": false, 00:16:20.231 "nvme_io_md": false, 00:16:20.231 "write_zeroes": true, 00:16:20.231 "zcopy": true, 00:16:20.231 "get_zone_info": false, 00:16:20.231 "zone_management": false, 00:16:20.231 "zone_append": false, 00:16:20.231 "compare": false, 00:16:20.231 "compare_and_write": false, 00:16:20.231 "abort": true, 00:16:20.231 "seek_hole": false, 00:16:20.231 "seek_data": false, 00:16:20.231 "copy": 
true, 00:16:20.231 "nvme_iov_md": false 00:16:20.231 }, 00:16:20.231 "memory_domains": [ 00:16:20.231 { 00:16:20.231 "dma_device_id": "system", 00:16:20.231 "dma_device_type": 1 00:16:20.231 }, 00:16:20.231 { 00:16:20.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.231 "dma_device_type": 2 00:16:20.231 } 00:16:20.231 ], 00:16:20.231 "driver_specific": {} 00:16:20.231 } 00:16:20.231 ] 00:16:20.231 15:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:20.231 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:20.231 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:20.231 15:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.489 BaseBdev4 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:20.489 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.747 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:21.005 [ 00:16:21.005 { 00:16:21.005 "name": "BaseBdev4", 00:16:21.005 "aliases": [ 00:16:21.005 "13910de7-4060-11ef-b2a4-e9dca065e82e" 00:16:21.005 ], 00:16:21.005 "product_name": "Malloc disk", 00:16:21.005 "block_size": 512, 00:16:21.005 "num_blocks": 65536, 00:16:21.005 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:21.005 "assigned_rate_limits": { 00:16:21.005 "rw_ios_per_sec": 0, 00:16:21.005 "rw_mbytes_per_sec": 0, 00:16:21.005 "r_mbytes_per_sec": 0, 00:16:21.005 "w_mbytes_per_sec": 0 00:16:21.005 }, 00:16:21.005 "claimed": false, 00:16:21.005 "zoned": false, 00:16:21.005 "supported_io_types": { 00:16:21.005 "read": true, 00:16:21.005 "write": true, 00:16:21.005 "unmap": true, 00:16:21.005 "flush": true, 00:16:21.005 "reset": true, 00:16:21.005 "nvme_admin": false, 00:16:21.005 "nvme_io": false, 00:16:21.005 "nvme_io_md": false, 00:16:21.005 "write_zeroes": true, 00:16:21.005 "zcopy": true, 00:16:21.005 "get_zone_info": false, 00:16:21.005 "zone_management": false, 00:16:21.005 "zone_append": false, 00:16:21.005 "compare": false, 00:16:21.005 "compare_and_write": false, 00:16:21.005 "abort": true, 00:16:21.005 "seek_hole": false, 00:16:21.005 "seek_data": false, 00:16:21.005 "copy": true, 00:16:21.005 "nvme_iov_md": false 00:16:21.005 }, 00:16:21.005 "memory_domains": [ 00:16:21.005 { 00:16:21.005 "dma_device_id": "system", 00:16:21.005 "dma_device_type": 1 00:16:21.005 }, 00:16:21.005 { 00:16:21.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.005 "dma_device_type": 2 00:16:21.005 } 00:16:21.005 ], 00:16:21.005 
"driver_specific": {} 00:16:21.005 } 00:16:21.005 ] 00:16:21.005 15:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:21.005 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:21.005 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:21.005 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:21.264 [2024-07-12 15:04:46.848680] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.264 [2024-07-12 15:04:46.848735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.264 [2024-07-12 15:04:46.848744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.264 [2024-07-12 15:04:46.849296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.264 [2024-07-12 15:04:46.849314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.264 15:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.522 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.522 "name": "Existed_Raid", 00:16:21.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.522 "strip_size_kb": 0, 00:16:21.522 "state": "configuring", 00:16:21.522 "raid_level": "raid1", 00:16:21.522 "superblock": false, 00:16:21.522 "num_base_bdevs": 4, 00:16:21.522 "num_base_bdevs_discovered": 3, 00:16:21.522 "num_base_bdevs_operational": 4, 00:16:21.522 "base_bdevs_list": [ 00:16:21.522 { 00:16:21.522 "name": "BaseBdev1", 00:16:21.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.522 "is_configured": false, 00:16:21.522 "data_offset": 0, 00:16:21.522 "data_size": 0 00:16:21.522 }, 00:16:21.522 { 00:16:21.522 "name": "BaseBdev2", 00:16:21.522 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 
00:16:21.522 "is_configured": true, 00:16:21.522 "data_offset": 0, 00:16:21.522 "data_size": 65536 00:16:21.522 }, 00:16:21.522 { 00:16:21.522 "name": "BaseBdev3", 00:16:21.522 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:21.522 "is_configured": true, 00:16:21.522 "data_offset": 0, 00:16:21.522 "data_size": 65536 00:16:21.522 }, 00:16:21.522 { 00:16:21.522 "name": "BaseBdev4", 00:16:21.522 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:21.522 "is_configured": true, 00:16:21.522 "data_offset": 0, 00:16:21.522 "data_size": 65536 00:16:21.522 } 00:16:21.522 ] 00:16:21.522 }' 00:16:21.522 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.522 15:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.780 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:22.039 [2024-07-12 15:04:47.772716] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.039 15:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.298 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.298 "name": "Existed_Raid", 00:16:22.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.298 "strip_size_kb": 0, 00:16:22.298 "state": "configuring", 00:16:22.298 "raid_level": "raid1", 00:16:22.298 "superblock": false, 00:16:22.298 "num_base_bdevs": 4, 00:16:22.298 "num_base_bdevs_discovered": 2, 00:16:22.298 "num_base_bdevs_operational": 4, 00:16:22.298 "base_bdevs_list": [ 00:16:22.298 { 00:16:22.298 "name": "BaseBdev1", 00:16:22.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.298 "is_configured": false, 00:16:22.298 "data_offset": 0, 00:16:22.298 "data_size": 0 00:16:22.298 }, 00:16:22.298 { 00:16:22.298 "name": null, 00:16:22.298 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:22.298 "is_configured": false, 00:16:22.298 "data_offset": 0, 00:16:22.298 "data_size": 65536 00:16:22.298 }, 00:16:22.298 { 00:16:22.298 "name": 
"BaseBdev3", 00:16:22.298 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:22.298 "is_configured": true, 00:16:22.298 "data_offset": 0, 00:16:22.298 "data_size": 65536 00:16:22.298 }, 00:16:22.298 { 00:16:22.298 "name": "BaseBdev4", 00:16:22.298 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:22.298 "is_configured": true, 00:16:22.298 "data_offset": 0, 00:16:22.298 "data_size": 65536 00:16:22.298 } 00:16:22.298 ] 00:16:22.298 }' 00:16:22.298 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.298 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.865 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.865 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.865 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:22.865 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.124 [2024-07-12 15:04:48.912902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.124 BaseBdev1 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:23.124 15:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.384 15:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.643 [ 00:16:23.643 { 00:16:23.643 "name": "BaseBdev1", 00:16:23.643 "aliases": [ 00:16:23.643 "153cef87-4060-11ef-b2a4-e9dca065e82e" 00:16:23.643 ], 00:16:23.643 "product_name": "Malloc disk", 00:16:23.643 "block_size": 512, 00:16:23.643 "num_blocks": 65536, 00:16:23.643 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:23.643 "assigned_rate_limits": { 00:16:23.643 "rw_ios_per_sec": 0, 00:16:23.643 "rw_mbytes_per_sec": 0, 00:16:23.643 "r_mbytes_per_sec": 0, 00:16:23.643 "w_mbytes_per_sec": 0 00:16:23.643 }, 00:16:23.643 "claimed": true, 00:16:23.643 "claim_type": "exclusive_write", 00:16:23.643 "zoned": false, 00:16:23.643 "supported_io_types": { 00:16:23.643 "read": true, 00:16:23.643 "write": true, 00:16:23.643 "unmap": true, 00:16:23.643 "flush": true, 00:16:23.643 "reset": true, 00:16:23.643 "nvme_admin": false, 00:16:23.643 "nvme_io": false, 00:16:23.643 "nvme_io_md": false, 00:16:23.643 "write_zeroes": true, 00:16:23.643 "zcopy": true, 00:16:23.643 "get_zone_info": false, 00:16:23.643 "zone_management": false, 00:16:23.643 "zone_append": false, 
00:16:23.643 "compare": false, 00:16:23.643 "compare_and_write": false, 00:16:23.643 "abort": true, 00:16:23.643 "seek_hole": false, 00:16:23.643 "seek_data": false, 00:16:23.643 "copy": true, 00:16:23.643 "nvme_iov_md": false 00:16:23.643 }, 00:16:23.643 "memory_domains": [ 00:16:23.643 { 00:16:23.643 "dma_device_id": "system", 00:16:23.643 "dma_device_type": 1 00:16:23.643 }, 00:16:23.643 { 00:16:23.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.643 "dma_device_type": 2 00:16:23.643 } 00:16:23.643 ], 00:16:23.643 "driver_specific": {} 00:16:23.643 } 00:16:23.643 ] 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.643 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.902 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.902 "name": "Existed_Raid", 00:16:23.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.902 "strip_size_kb": 0, 00:16:23.902 "state": "configuring", 00:16:23.902 "raid_level": "raid1", 00:16:23.902 "superblock": false, 00:16:23.902 "num_base_bdevs": 4, 00:16:23.902 "num_base_bdevs_discovered": 3, 00:16:23.902 "num_base_bdevs_operational": 4, 00:16:23.902 "base_bdevs_list": [ 00:16:23.902 { 00:16:23.902 "name": "BaseBdev1", 00:16:23.902 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:23.902 "is_configured": true, 00:16:23.902 "data_offset": 0, 00:16:23.902 "data_size": 65536 00:16:23.902 }, 00:16:23.902 { 00:16:23.902 "name": null, 00:16:23.902 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:23.902 "is_configured": false, 00:16:23.902 "data_offset": 0, 00:16:23.902 "data_size": 65536 00:16:23.902 }, 00:16:23.902 { 00:16:23.902 "name": "BaseBdev3", 00:16:23.902 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:23.902 "is_configured": true, 00:16:23.902 "data_offset": 0, 00:16:23.902 "data_size": 65536 00:16:23.902 }, 00:16:23.902 { 00:16:23.902 "name": "BaseBdev4", 00:16:23.902 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:23.902 "is_configured": true, 00:16:23.902 "data_offset": 0, 00:16:23.902 "data_size": 65536 00:16:23.902 } 
00:16:23.902 ] 00:16:23.902 }' 00:16:23.902 15:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.902 15:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.470 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.470 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.728 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:24.728 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:24.985 [2024-07-12 15:04:50.636868] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.985 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.244 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.244 "name": "Existed_Raid", 00:16:25.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.244 "strip_size_kb": 0, 00:16:25.244 "state": "configuring", 00:16:25.244 "raid_level": "raid1", 00:16:25.244 "superblock": false, 00:16:25.244 "num_base_bdevs": 4, 00:16:25.244 "num_base_bdevs_discovered": 2, 00:16:25.244 "num_base_bdevs_operational": 4, 00:16:25.244 "base_bdevs_list": [ 00:16:25.244 { 00:16:25.244 "name": "BaseBdev1", 00:16:25.244 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:25.244 "is_configured": true, 00:16:25.244 "data_offset": 0, 00:16:25.244 "data_size": 65536 00:16:25.244 }, 00:16:25.244 { 00:16:25.244 "name": null, 00:16:25.244 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:25.244 "is_configured": false, 00:16:25.244 "data_offset": 0, 00:16:25.244 "data_size": 65536 00:16:25.244 }, 00:16:25.244 { 00:16:25.244 "name": null, 00:16:25.244 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:25.244 "is_configured": false, 00:16:25.244 "data_offset": 0, 
00:16:25.244 "data_size": 65536 00:16:25.244 }, 00:16:25.244 { 00:16:25.244 "name": "BaseBdev4", 00:16:25.244 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:25.244 "is_configured": true, 00:16:25.244 "data_offset": 0, 00:16:25.244 "data_size": 65536 00:16:25.244 } 00:16:25.244 ] 00:16:25.244 }' 00:16:25.244 15:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.244 15:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.501 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.501 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.757 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:25.757 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:26.015 [2024-07-12 15:04:51.676919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.015 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.272 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.272 "name": "Existed_Raid", 00:16:26.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.272 "strip_size_kb": 0, 00:16:26.272 "state": "configuring", 00:16:26.272 "raid_level": "raid1", 00:16:26.272 "superblock": false, 00:16:26.272 "num_base_bdevs": 4, 00:16:26.272 "num_base_bdevs_discovered": 3, 00:16:26.272 "num_base_bdevs_operational": 4, 00:16:26.272 "base_bdevs_list": [ 00:16:26.272 { 00:16:26.272 "name": "BaseBdev1", 00:16:26.272 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:26.272 "is_configured": true, 00:16:26.272 "data_offset": 0, 00:16:26.272 "data_size": 65536 00:16:26.272 }, 00:16:26.272 { 00:16:26.272 "name": null, 00:16:26.272 "uuid": 
"12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:26.272 "is_configured": false, 00:16:26.272 "data_offset": 0, 00:16:26.272 "data_size": 65536 00:16:26.272 }, 00:16:26.272 { 00:16:26.272 "name": "BaseBdev3", 00:16:26.272 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:26.272 "is_configured": true, 00:16:26.272 "data_offset": 0, 00:16:26.272 "data_size": 65536 00:16:26.272 }, 00:16:26.272 { 00:16:26.272 "name": "BaseBdev4", 00:16:26.272 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:26.272 "is_configured": true, 00:16:26.272 "data_offset": 0, 00:16:26.272 "data_size": 65536 00:16:26.272 } 00:16:26.272 ] 00:16:26.272 }' 00:16:26.272 15:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.272 15:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.529 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:26.529 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.788 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:26.788 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:27.046 [2024-07-12 15:04:52.736973] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.046 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.303 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.303 "name": "Existed_Raid", 00:16:27.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.303 "strip_size_kb": 0, 00:16:27.303 "state": "configuring", 00:16:27.303 "raid_level": "raid1", 00:16:27.303 "superblock": false, 00:16:27.303 "num_base_bdevs": 4, 00:16:27.303 "num_base_bdevs_discovered": 2, 00:16:27.304 "num_base_bdevs_operational": 4, 00:16:27.304 "base_bdevs_list": [ 00:16:27.304 { 
00:16:27.304 "name": null, 00:16:27.304 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:27.304 "is_configured": false, 00:16:27.304 "data_offset": 0, 00:16:27.304 "data_size": 65536 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "name": null, 00:16:27.304 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:27.304 "is_configured": false, 00:16:27.304 "data_offset": 0, 00:16:27.304 "data_size": 65536 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "name": "BaseBdev3", 00:16:27.304 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:27.304 "is_configured": true, 00:16:27.304 "data_offset": 0, 00:16:27.304 "data_size": 65536 00:16:27.304 }, 00:16:27.304 { 00:16:27.304 "name": "BaseBdev4", 00:16:27.304 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:27.304 "is_configured": true, 00:16:27.304 "data_offset": 0, 00:16:27.304 "data_size": 65536 00:16:27.304 } 00:16:27.304 ] 00:16:27.304 }' 00:16:27.304 15:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.304 15:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.561 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.561 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:27.819 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:27.819 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:28.077 [2024-07-12 15:04:53.783076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.077 15:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.336 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.336 "name": "Existed_Raid", 00:16:28.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.336 
"strip_size_kb": 0, 00:16:28.336 "state": "configuring", 00:16:28.336 "raid_level": "raid1", 00:16:28.336 "superblock": false, 00:16:28.336 "num_base_bdevs": 4, 00:16:28.336 "num_base_bdevs_discovered": 3, 00:16:28.336 "num_base_bdevs_operational": 4, 00:16:28.336 "base_bdevs_list": [ 00:16:28.336 { 00:16:28.336 "name": null, 00:16:28.336 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:28.336 "is_configured": false, 00:16:28.336 "data_offset": 0, 00:16:28.336 "data_size": 65536 00:16:28.336 }, 00:16:28.336 { 00:16:28.336 "name": "BaseBdev2", 00:16:28.336 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:28.336 "is_configured": true, 00:16:28.336 "data_offset": 0, 00:16:28.336 "data_size": 65536 00:16:28.336 }, 00:16:28.336 { 00:16:28.336 "name": "BaseBdev3", 00:16:28.336 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:28.336 "is_configured": true, 00:16:28.336 "data_offset": 0, 00:16:28.336 "data_size": 65536 00:16:28.336 }, 00:16:28.336 { 00:16:28.336 "name": "BaseBdev4", 00:16:28.336 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:28.336 "is_configured": true, 00:16:28.336 "data_offset": 0, 00:16:28.336 "data_size": 65536 00:16:28.336 } 00:16:28.336 ] 00:16:28.336 }' 00:16:28.336 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.336 15:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.902 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:28.902 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.160 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:29.160 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.160 15:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:29.418 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 153cef87-4060-11ef-b2a4-e9dca065e82e 00:16:29.674 [2024-07-12 15:04:55.347268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:29.675 [2024-07-12 15:04:55.347297] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2825c5434f00 00:16:29.675 [2024-07-12 15:04:55.347305] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:29.675 [2024-07-12 15:04:55.347336] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2825c5497e20 00:16:29.675 [2024-07-12 15:04:55.347418] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2825c5434f00 00:16:29.675 [2024-07-12 15:04:55.347423] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2825c5434f00 00:16:29.675 [2024-07-12 15:04:55.347458] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.675 NewBaseBdev 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 
00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.675 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.959 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:30.218 [ 00:16:30.218 { 00:16:30.218 "name": "NewBaseBdev", 00:16:30.218 "aliases": [ 00:16:30.218 "153cef87-4060-11ef-b2a4-e9dca065e82e" 00:16:30.218 ], 00:16:30.218 "product_name": "Malloc disk", 00:16:30.218 "block_size": 512, 00:16:30.218 "num_blocks": 65536, 00:16:30.218 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:30.218 "assigned_rate_limits": { 00:16:30.218 "rw_ios_per_sec": 0, 00:16:30.218 "rw_mbytes_per_sec": 0, 00:16:30.218 "r_mbytes_per_sec": 0, 00:16:30.218 "w_mbytes_per_sec": 0 00:16:30.218 }, 00:16:30.218 "claimed": true, 00:16:30.218 "claim_type": "exclusive_write", 00:16:30.218 "zoned": false, 00:16:30.218 "supported_io_types": { 00:16:30.218 "read": true, 00:16:30.218 "write": true, 00:16:30.218 "unmap": true, 00:16:30.218 "flush": true, 00:16:30.218 "reset": true, 00:16:30.218 "nvme_admin": false, 00:16:30.218 "nvme_io": false, 00:16:30.218 "nvme_io_md": false, 00:16:30.218 "write_zeroes": true, 00:16:30.218 "zcopy": true, 00:16:30.218 "get_zone_info": false, 00:16:30.218 "zone_management": false, 00:16:30.218 "zone_append": false, 00:16:30.218 "compare": false, 00:16:30.218 "compare_and_write": false, 00:16:30.218 "abort": true, 00:16:30.218 "seek_hole": false, 00:16:30.218 "seek_data": false, 00:16:30.218 "copy": true, 00:16:30.218 "nvme_iov_md": false 00:16:30.218 }, 00:16:30.218 "memory_domains": [ 00:16:30.218 { 00:16:30.218 "dma_device_id": "system", 00:16:30.218 "dma_device_type": 1 00:16:30.218 }, 00:16:30.218 { 00:16:30.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.218 "dma_device_type": 2 00:16:30.218 } 00:16:30.218 ], 00:16:30.218 "driver_specific": {} 00:16:30.218 } 00:16:30.218 ] 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.218 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.219 15:04:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.219 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.219 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.219 15:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.477 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.477 "name": "Existed_Raid", 00:16:30.477 "uuid": "1912c474-4060-11ef-b2a4-e9dca065e82e", 00:16:30.477 "strip_size_kb": 0, 00:16:30.477 "state": "online", 00:16:30.477 "raid_level": "raid1", 00:16:30.477 "superblock": false, 00:16:30.477 "num_base_bdevs": 4, 00:16:30.477 "num_base_bdevs_discovered": 4, 00:16:30.477 "num_base_bdevs_operational": 4, 00:16:30.477 "base_bdevs_list": [ 00:16:30.477 { 00:16:30.477 "name": "NewBaseBdev", 00:16:30.477 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:30.477 "is_configured": true, 00:16:30.477 "data_offset": 0, 00:16:30.477 "data_size": 65536 00:16:30.477 }, 00:16:30.477 { 00:16:30.477 "name": "BaseBdev2", 00:16:30.477 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:30.477 "is_configured": true, 00:16:30.477 "data_offset": 0, 00:16:30.477 "data_size": 65536 00:16:30.477 }, 00:16:30.477 { 00:16:30.477 "name": "BaseBdev3", 00:16:30.477 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:30.477 "is_configured": true, 00:16:30.477 "data_offset": 0, 00:16:30.477 "data_size": 65536 00:16:30.477 }, 00:16:30.477 { 00:16:30.477 "name": "BaseBdev4", 00:16:30.477 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:30.477 "is_configured": true, 00:16:30.477 "data_offset": 0, 00:16:30.477 "data_size": 65536 00:16:30.477 } 00:16:30.477 ] 00:16:30.477 }' 00:16:30.477 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.477 15:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:30.736 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.995 [2024-07-12 15:04:56.735367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.995 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.995 "name": "Existed_Raid", 00:16:30.995 "aliases": [ 00:16:30.995 "1912c474-4060-11ef-b2a4-e9dca065e82e" 00:16:30.995 ], 00:16:30.995 "product_name": "Raid Volume", 00:16:30.995 "block_size": 512, 00:16:30.995 
"num_blocks": 65536, 00:16:30.995 "uuid": "1912c474-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "assigned_rate_limits": { 00:16:30.995 "rw_ios_per_sec": 0, 00:16:30.995 "rw_mbytes_per_sec": 0, 00:16:30.995 "r_mbytes_per_sec": 0, 00:16:30.995 "w_mbytes_per_sec": 0 00:16:30.995 }, 00:16:30.995 "claimed": false, 00:16:30.995 "zoned": false, 00:16:30.995 "supported_io_types": { 00:16:30.995 "read": true, 00:16:30.995 "write": true, 00:16:30.995 "unmap": false, 00:16:30.995 "flush": false, 00:16:30.995 "reset": true, 00:16:30.995 "nvme_admin": false, 00:16:30.995 "nvme_io": false, 00:16:30.995 "nvme_io_md": false, 00:16:30.995 "write_zeroes": true, 00:16:30.995 "zcopy": false, 00:16:30.995 "get_zone_info": false, 00:16:30.995 "zone_management": false, 00:16:30.995 "zone_append": false, 00:16:30.995 "compare": false, 00:16:30.995 "compare_and_write": false, 00:16:30.995 "abort": false, 00:16:30.995 "seek_hole": false, 00:16:30.995 "seek_data": false, 00:16:30.995 "copy": false, 00:16:30.995 "nvme_iov_md": false 00:16:30.995 }, 00:16:30.995 "memory_domains": [ 00:16:30.995 { 00:16:30.995 "dma_device_id": "system", 00:16:30.995 "dma_device_type": 1 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.995 "dma_device_type": 2 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "system", 00:16:30.995 "dma_device_type": 1 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.995 "dma_device_type": 2 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "system", 00:16:30.995 "dma_device_type": 1 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.995 "dma_device_type": 2 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "system", 00:16:30.995 "dma_device_type": 1 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.995 "dma_device_type": 2 00:16:30.995 } 00:16:30.995 ], 00:16:30.995 "driver_specific": { 00:16:30.995 "raid": { 00:16:30.995 "uuid": "1912c474-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "strip_size_kb": 0, 00:16:30.995 "state": "online", 00:16:30.995 "raid_level": "raid1", 00:16:30.995 "superblock": false, 00:16:30.995 "num_base_bdevs": 4, 00:16:30.995 "num_base_bdevs_discovered": 4, 00:16:30.995 "num_base_bdevs_operational": 4, 00:16:30.995 "base_bdevs_list": [ 00:16:30.995 { 00:16:30.995 "name": "NewBaseBdev", 00:16:30.995 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 0, 00:16:30.995 "data_size": 65536 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev2", 00:16:30.995 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 0, 00:16:30.995 "data_size": 65536 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev3", 00:16:30.995 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 0, 00:16:30.995 "data_size": 65536 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev4", 00:16:30.995 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 0, 00:16:30.995 "data_size": 65536 00:16:30.995 } 00:16:30.995 ] 00:16:30.995 } 00:16:30.995 } 00:16:30.995 }' 00:16:30.995 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.995 15:04:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:30.995 BaseBdev2 00:16:30.995 BaseBdev3 00:16:30.995 BaseBdev4' 00:16:30.995 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.995 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:30.995 15:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.253 "name": "NewBaseBdev", 00:16:31.253 "aliases": [ 00:16:31.253 "153cef87-4060-11ef-b2a4-e9dca065e82e" 00:16:31.253 ], 00:16:31.253 "product_name": "Malloc disk", 00:16:31.253 "block_size": 512, 00:16:31.253 "num_blocks": 65536, 00:16:31.253 "uuid": "153cef87-4060-11ef-b2a4-e9dca065e82e", 00:16:31.253 "assigned_rate_limits": { 00:16:31.253 "rw_ios_per_sec": 0, 00:16:31.253 "rw_mbytes_per_sec": 0, 00:16:31.253 "r_mbytes_per_sec": 0, 00:16:31.253 "w_mbytes_per_sec": 0 00:16:31.253 }, 00:16:31.253 "claimed": true, 00:16:31.253 "claim_type": "exclusive_write", 00:16:31.253 "zoned": false, 00:16:31.253 "supported_io_types": { 00:16:31.253 "read": true, 00:16:31.253 "write": true, 00:16:31.253 "unmap": true, 00:16:31.253 "flush": true, 00:16:31.253 "reset": true, 00:16:31.253 "nvme_admin": false, 00:16:31.253 "nvme_io": false, 00:16:31.253 "nvme_io_md": false, 00:16:31.253 "write_zeroes": true, 00:16:31.253 "zcopy": true, 00:16:31.253 "get_zone_info": false, 00:16:31.253 "zone_management": false, 00:16:31.253 "zone_append": false, 00:16:31.253 "compare": false, 00:16:31.253 "compare_and_write": false, 00:16:31.253 "abort": true, 00:16:31.253 "seek_hole": false, 00:16:31.253 "seek_data": false, 00:16:31.253 "copy": true, 00:16:31.253 "nvme_iov_md": false 00:16:31.253 }, 00:16:31.253 "memory_domains": [ 00:16:31.253 { 00:16:31.253 "dma_device_id": "system", 00:16:31.253 "dma_device_type": 1 00:16:31.253 }, 00:16:31.253 { 00:16:31.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.253 "dma_device_type": 2 00:16:31.253 } 00:16:31.253 ], 00:16:31.253 "driver_specific": {} 00:16:31.253 }' 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.253 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.511 15:04:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:31.511 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.769 "name": "BaseBdev2", 00:16:31.769 "aliases": [ 00:16:31.769 "12b1a67d-4060-11ef-b2a4-e9dca065e82e" 00:16:31.769 ], 00:16:31.769 "product_name": "Malloc disk", 00:16:31.769 "block_size": 512, 00:16:31.769 "num_blocks": 65536, 00:16:31.769 "uuid": "12b1a67d-4060-11ef-b2a4-e9dca065e82e", 00:16:31.769 "assigned_rate_limits": { 00:16:31.769 "rw_ios_per_sec": 0, 00:16:31.769 "rw_mbytes_per_sec": 0, 00:16:31.769 "r_mbytes_per_sec": 0, 00:16:31.769 "w_mbytes_per_sec": 0 00:16:31.769 }, 00:16:31.769 "claimed": true, 00:16:31.769 "claim_type": "exclusive_write", 00:16:31.769 "zoned": false, 00:16:31.769 "supported_io_types": { 00:16:31.769 "read": true, 00:16:31.769 "write": true, 00:16:31.769 "unmap": true, 00:16:31.769 "flush": true, 00:16:31.769 "reset": true, 00:16:31.769 "nvme_admin": false, 00:16:31.769 "nvme_io": false, 00:16:31.769 "nvme_io_md": false, 00:16:31.769 "write_zeroes": true, 00:16:31.769 "zcopy": true, 00:16:31.769 "get_zone_info": false, 00:16:31.769 "zone_management": false, 00:16:31.769 "zone_append": false, 00:16:31.769 "compare": false, 00:16:31.769 "compare_and_write": false, 00:16:31.769 "abort": true, 00:16:31.769 "seek_hole": false, 00:16:31.769 "seek_data": false, 00:16:31.769 "copy": true, 00:16:31.769 "nvme_iov_md": false 00:16:31.769 }, 00:16:31.769 "memory_domains": [ 00:16:31.769 { 00:16:31.769 "dma_device_id": "system", 00:16:31.769 "dma_device_type": 1 00:16:31.769 }, 00:16:31.769 { 00:16:31.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.769 "dma_device_type": 2 00:16:31.769 } 00:16:31.769 ], 00:16:31.769 "driver_specific": {} 00:16:31.769 }' 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:31.769 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.027 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.027 "name": "BaseBdev3", 00:16:32.027 "aliases": [ 00:16:32.027 "1326d7e4-4060-11ef-b2a4-e9dca065e82e" 00:16:32.027 ], 00:16:32.027 "product_name": "Malloc disk", 00:16:32.027 "block_size": 512, 00:16:32.027 "num_blocks": 65536, 00:16:32.027 "uuid": "1326d7e4-4060-11ef-b2a4-e9dca065e82e", 00:16:32.027 "assigned_rate_limits": { 00:16:32.027 "rw_ios_per_sec": 0, 00:16:32.027 "rw_mbytes_per_sec": 0, 00:16:32.027 "r_mbytes_per_sec": 0, 00:16:32.027 "w_mbytes_per_sec": 0 00:16:32.027 }, 00:16:32.027 "claimed": true, 00:16:32.027 "claim_type": "exclusive_write", 00:16:32.027 "zoned": false, 00:16:32.027 "supported_io_types": { 00:16:32.027 "read": true, 00:16:32.027 "write": true, 00:16:32.027 "unmap": true, 00:16:32.027 "flush": true, 00:16:32.027 "reset": true, 00:16:32.027 "nvme_admin": false, 00:16:32.027 "nvme_io": false, 00:16:32.027 "nvme_io_md": false, 00:16:32.027 "write_zeroes": true, 00:16:32.028 "zcopy": true, 00:16:32.028 "get_zone_info": false, 00:16:32.028 "zone_management": false, 00:16:32.028 "zone_append": false, 00:16:32.028 "compare": false, 00:16:32.028 "compare_and_write": false, 00:16:32.028 "abort": true, 00:16:32.028 "seek_hole": false, 00:16:32.028 "seek_data": false, 00:16:32.028 "copy": true, 00:16:32.028 "nvme_iov_md": false 00:16:32.028 }, 00:16:32.028 "memory_domains": [ 00:16:32.028 { 00:16:32.028 "dma_device_id": "system", 00:16:32.028 "dma_device_type": 1 00:16:32.028 }, 00:16:32.028 { 00:16:32.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.028 "dma_device_type": 2 00:16:32.028 } 00:16:32.028 ], 00:16:32.028 "driver_specific": {} 00:16:32.028 }' 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:32.028 15:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
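Each pass of the surrounding for-loop boils down to the same four property probes; condensed into standalone shell (bdev names, socket path and expected values come from this trace, while the loop wrapper and error messages are assumed):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
      info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
      # malloc base bdevs: 512-byte blocks, no separate metadata, no interleave, no DIF
      [[ $(jq .block_size    <<< "$info") == 512  ]] || echo "$name: unexpected block_size"
      [[ $(jq .md_size       <<< "$info") == null ]] || echo "$name: unexpected md_size"
      [[ $(jq .md_interleave <<< "$info") == null ]] || echo "$name: unexpected md_interleave"
      [[ $(jq .dif_type      <<< "$info") == null ]] || echo "$name: unexpected dif_type"
  done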
00:16:32.285 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.285 "name": "BaseBdev4", 00:16:32.285 "aliases": [ 00:16:32.285 "13910de7-4060-11ef-b2a4-e9dca065e82e" 00:16:32.285 ], 00:16:32.285 "product_name": "Malloc disk", 00:16:32.285 "block_size": 512, 00:16:32.285 "num_blocks": 65536, 00:16:32.285 "uuid": "13910de7-4060-11ef-b2a4-e9dca065e82e", 00:16:32.286 "assigned_rate_limits": { 00:16:32.286 "rw_ios_per_sec": 0, 00:16:32.286 "rw_mbytes_per_sec": 0, 00:16:32.286 "r_mbytes_per_sec": 0, 00:16:32.286 "w_mbytes_per_sec": 0 00:16:32.286 }, 00:16:32.286 "claimed": true, 00:16:32.286 "claim_type": "exclusive_write", 00:16:32.286 "zoned": false, 00:16:32.286 "supported_io_types": { 00:16:32.286 "read": true, 00:16:32.286 "write": true, 00:16:32.286 "unmap": true, 00:16:32.286 "flush": true, 00:16:32.286 "reset": true, 00:16:32.286 "nvme_admin": false, 00:16:32.286 "nvme_io": false, 00:16:32.286 "nvme_io_md": false, 00:16:32.286 "write_zeroes": true, 00:16:32.286 "zcopy": true, 00:16:32.286 "get_zone_info": false, 00:16:32.286 "zone_management": false, 00:16:32.286 "zone_append": false, 00:16:32.286 "compare": false, 00:16:32.286 "compare_and_write": false, 00:16:32.286 "abort": true, 00:16:32.286 "seek_hole": false, 00:16:32.286 "seek_data": false, 00:16:32.286 "copy": true, 00:16:32.286 "nvme_iov_md": false 00:16:32.286 }, 00:16:32.286 "memory_domains": [ 00:16:32.286 { 00:16:32.286 "dma_device_id": "system", 00:16:32.286 "dma_device_type": 1 00:16:32.286 }, 00:16:32.286 { 00:16:32.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.286 "dma_device_type": 2 00:16:32.286 } 00:16:32.286 ], 00:16:32.286 "driver_specific": {} 00:16:32.286 }' 00:16:32.286 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.286 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.286 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.286 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.544 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.801 [2024-07-12 15:04:58.423463] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.801 [2024-07-12 15:04:58.423498] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.801 [2024-07-12 15:04:58.423547] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.801 [2024-07-12 15:04:58.423647] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.801 [2024-07-12 15:04:58.423653] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2825c5434f00 name Existed_Raid, state offline 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 63014 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 63014 ']' 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 63014 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 63014 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:32.801 killing process with pid 63014 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63014' 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 63014 00:16:32.801 [2024-07-12 15:04:58.450773] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.801 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 63014 00:16:32.801 [2024-07-12 15:04:58.483658] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:33.059 00:16:33.059 real 0m27.387s 00:16:33.059 user 0m50.338s 00:16:33.059 sys 0m3.516s 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 ************************************ 00:16:33.059 END TEST raid_state_function_test 00:16:33.059 ************************************ 00:16:33.059 15:04:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:33.059 15:04:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:33.059 15:04:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:33.059 15:04:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.059 15:04:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 ************************************ 00:16:33.059 START TEST raid_state_function_test_sb 00:16:33.059 ************************************ 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:33.059 15:04:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63833 00:16:33.059 Process raid pid: 63833 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63833' 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63833 /var/tmp/spdk-raid.sock 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63833 ']' 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:33.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.059 15:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 [2024-07-12 15:04:58.791592] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:16:33.059 [2024-07-12 15:04:58.791745] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:33.623 EAL: TSC is not safe to use in SMP mode 00:16:33.623 EAL: TSC is not invariant 00:16:33.623 [2024-07-12 15:04:59.319349] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.623 [2024-07-12 15:04:59.417047] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:33.623 [2024-07-12 15:04:59.419484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.623 [2024-07-12 15:04:59.420419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.623 [2024-07-12 15:04:59.420437] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.187 15:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.187 15:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:34.187 15:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:34.445 [2024-07-12 15:05:00.105786] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.445 [2024-07-12 15:05:00.105840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.445 [2024-07-12 15:05:00.105846] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.445 [2024-07-12 15:05:00.105855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.445 [2024-07-12 15:05:00.105858] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.445 [2024-07-12 15:05:00.105866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.445 [2024-07-12 15:05:00.105869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.445 [2024-07-12 15:05:00.105876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
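The create/verify sequence traced here reduces to a handful of rpc.py calls against the bdev_svc app started above on /var/tmp/spdk-raid.sock. A minimal manual sketch of the same flow — the subcommands, socket path, and jq filter are the ones visible in this run, the extra ".state"/".num_base_bdevs_discovered" projections are just a convenience over the fields shown in the dumps that follow:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Create the raid1 bdev with an on-disk superblock (-s) before any base bdev exists;
  # it is reported in the "configuring" state until all four base bdevs are discovered.
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # Add one 32 MiB malloc bdev with 512-byte blocks (as the next steps in the trace do);
  # the discovered count should move from 0 to 1 while the raid stays "configuring".
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'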
00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.445 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.704 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.704 "name": "Existed_Raid", 00:16:34.704 "uuid": "1be8d9f5-4060-11ef-b2a4-e9dca065e82e", 00:16:34.704 "strip_size_kb": 0, 00:16:34.704 "state": "configuring", 00:16:34.704 "raid_level": "raid1", 00:16:34.704 "superblock": true, 00:16:34.704 "num_base_bdevs": 4, 00:16:34.704 "num_base_bdevs_discovered": 0, 00:16:34.704 "num_base_bdevs_operational": 4, 00:16:34.704 "base_bdevs_list": [ 00:16:34.704 { 00:16:34.704 "name": "BaseBdev1", 00:16:34.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.704 "is_configured": false, 00:16:34.704 "data_offset": 0, 00:16:34.704 "data_size": 0 00:16:34.704 }, 00:16:34.704 { 00:16:34.704 "name": "BaseBdev2", 00:16:34.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.704 "is_configured": false, 00:16:34.704 "data_offset": 0, 00:16:34.704 "data_size": 0 00:16:34.704 }, 00:16:34.704 { 00:16:34.704 "name": "BaseBdev3", 00:16:34.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.704 "is_configured": false, 00:16:34.704 "data_offset": 0, 00:16:34.704 "data_size": 0 00:16:34.704 }, 00:16:34.704 { 00:16:34.704 "name": "BaseBdev4", 00:16:34.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.704 "is_configured": false, 00:16:34.704 "data_offset": 0, 00:16:34.704 "data_size": 0 00:16:34.704 } 00:16:34.704 ] 00:16:34.704 }' 00:16:34.704 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.704 15:05:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.961 15:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:35.218 [2024-07-12 15:05:01.037806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.218 [2024-07-12 15:05:01.037837] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27c126234500 name Existed_Raid, state configuring 00:16:35.476 15:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:35.733 [2024-07-12 15:05:01.341864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev1 00:16:35.733 [2024-07-12 15:05:01.341932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.733 [2024-07-12 15:05:01.341938] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.733 [2024-07-12 15:05:01.341946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.733 [2024-07-12 15:05:01.341950] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.733 [2024-07-12 15:05:01.341958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.733 [2024-07-12 15:05:01.341961] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:35.733 [2024-07-12 15:05:01.341983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:35.733 15:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.991 [2024-07-12 15:05:01.638955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.991 BaseBdev1 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.991 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.249 15:05:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.508 [ 00:16:36.508 { 00:16:36.508 "name": "BaseBdev1", 00:16:36.508 "aliases": [ 00:16:36.508 "1cd2a390-4060-11ef-b2a4-e9dca065e82e" 00:16:36.508 ], 00:16:36.508 "product_name": "Malloc disk", 00:16:36.508 "block_size": 512, 00:16:36.508 "num_blocks": 65536, 00:16:36.508 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:36.508 "assigned_rate_limits": { 00:16:36.508 "rw_ios_per_sec": 0, 00:16:36.508 "rw_mbytes_per_sec": 0, 00:16:36.508 "r_mbytes_per_sec": 0, 00:16:36.508 "w_mbytes_per_sec": 0 00:16:36.508 }, 00:16:36.508 "claimed": true, 00:16:36.508 "claim_type": "exclusive_write", 00:16:36.508 "zoned": false, 00:16:36.508 "supported_io_types": { 00:16:36.508 "read": true, 00:16:36.508 "write": true, 00:16:36.508 "unmap": true, 00:16:36.508 "flush": true, 00:16:36.508 "reset": true, 00:16:36.508 "nvme_admin": false, 00:16:36.508 "nvme_io": false, 00:16:36.508 "nvme_io_md": false, 00:16:36.508 "write_zeroes": true, 00:16:36.508 "zcopy": true, 00:16:36.508 "get_zone_info": false, 00:16:36.508 "zone_management": false, 00:16:36.508 "zone_append": false, 00:16:36.508 "compare": false, 00:16:36.508 "compare_and_write": false, 00:16:36.508 
"abort": true, 00:16:36.508 "seek_hole": false, 00:16:36.508 "seek_data": false, 00:16:36.508 "copy": true, 00:16:36.508 "nvme_iov_md": false 00:16:36.508 }, 00:16:36.508 "memory_domains": [ 00:16:36.508 { 00:16:36.508 "dma_device_id": "system", 00:16:36.508 "dma_device_type": 1 00:16:36.508 }, 00:16:36.508 { 00:16:36.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.508 "dma_device_type": 2 00:16:36.508 } 00:16:36.508 ], 00:16:36.508 "driver_specific": {} 00:16:36.508 } 00:16:36.508 ] 00:16:36.508 15:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:36.508 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.508 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.508 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.508 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.509 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.767 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.767 "name": "Existed_Raid", 00:16:36.767 "uuid": "1ca5765c-4060-11ef-b2a4-e9dca065e82e", 00:16:36.767 "strip_size_kb": 0, 00:16:36.767 "state": "configuring", 00:16:36.767 "raid_level": "raid1", 00:16:36.767 "superblock": true, 00:16:36.767 "num_base_bdevs": 4, 00:16:36.767 "num_base_bdevs_discovered": 1, 00:16:36.767 "num_base_bdevs_operational": 4, 00:16:36.767 "base_bdevs_list": [ 00:16:36.767 { 00:16:36.767 "name": "BaseBdev1", 00:16:36.767 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:36.767 "is_configured": true, 00:16:36.767 "data_offset": 2048, 00:16:36.767 "data_size": 63488 00:16:36.767 }, 00:16:36.767 { 00:16:36.767 "name": "BaseBdev2", 00:16:36.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.767 "is_configured": false, 00:16:36.767 "data_offset": 0, 00:16:36.767 "data_size": 0 00:16:36.767 }, 00:16:36.767 { 00:16:36.767 "name": "BaseBdev3", 00:16:36.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.767 "is_configured": false, 00:16:36.767 "data_offset": 0, 00:16:36.767 "data_size": 0 00:16:36.767 }, 00:16:36.767 { 00:16:36.767 "name": "BaseBdev4", 00:16:36.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.767 "is_configured": false, 00:16:36.767 "data_offset": 0, 00:16:36.767 "data_size": 0 00:16:36.767 } 00:16:36.767 ] 00:16:36.767 }' 00:16:36.767 
15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.767 15:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.332 15:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:37.590 [2024-07-12 15:05:03.178015] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.590 [2024-07-12 15:05:03.178051] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27c126234500 name Existed_Raid, state configuring 00:16:37.590 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:37.590 [2024-07-12 15:05:03.410043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.590 [2024-07-12 15:05:03.410849] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.590 [2024-07-12 15:05:03.410891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.590 [2024-07-12 15:05:03.410897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.590 [2024-07-12 15:05:03.410905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.590 [2024-07-12 15:05:03.410909] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.590 [2024-07-12 15:05:03.410916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.869 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
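The delete-and-recreate step just traced suggests that deleting a raid that is still configuring releases its claims, since the subsequent create with the same -b list immediately claims the BaseBdev1 that already exists. A short sketch of that step by hand, assuming the same RPC socket; the combined "state discovered/total" string is only a convenience over the fields shown in the JSON dump that follows:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_delete Existed_Raid
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Expect "configuring 1/4" while only BaseBdev1 is present.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'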
00:16:38.128 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.128 "name": "Existed_Raid", 00:16:38.128 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:38.128 "strip_size_kb": 0, 00:16:38.129 "state": "configuring", 00:16:38.129 "raid_level": "raid1", 00:16:38.129 "superblock": true, 00:16:38.129 "num_base_bdevs": 4, 00:16:38.129 "num_base_bdevs_discovered": 1, 00:16:38.129 "num_base_bdevs_operational": 4, 00:16:38.129 "base_bdevs_list": [ 00:16:38.129 { 00:16:38.129 "name": "BaseBdev1", 00:16:38.129 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:38.129 "is_configured": true, 00:16:38.129 "data_offset": 2048, 00:16:38.129 "data_size": 63488 00:16:38.129 }, 00:16:38.129 { 00:16:38.129 "name": "BaseBdev2", 00:16:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.129 "is_configured": false, 00:16:38.129 "data_offset": 0, 00:16:38.129 "data_size": 0 00:16:38.129 }, 00:16:38.129 { 00:16:38.129 "name": "BaseBdev3", 00:16:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.129 "is_configured": false, 00:16:38.129 "data_offset": 0, 00:16:38.129 "data_size": 0 00:16:38.129 }, 00:16:38.129 { 00:16:38.129 "name": "BaseBdev4", 00:16:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.129 "is_configured": false, 00:16:38.129 "data_offset": 0, 00:16:38.129 "data_size": 0 00:16:38.129 } 00:16:38.129 ] 00:16:38.129 }' 00:16:38.129 15:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.129 15:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.387 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.645 [2024-07-12 15:05:04.330216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.645 BaseBdev2 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.645 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.903 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.161 [ 00:16:39.161 { 00:16:39.161 "name": "BaseBdev2", 00:16:39.161 "aliases": [ 00:16:39.161 "1e6d6df4-4060-11ef-b2a4-e9dca065e82e" 00:16:39.161 ], 00:16:39.161 "product_name": "Malloc disk", 00:16:39.161 "block_size": 512, 00:16:39.161 "num_blocks": 65536, 00:16:39.161 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:39.161 "assigned_rate_limits": { 00:16:39.161 "rw_ios_per_sec": 0, 00:16:39.161 "rw_mbytes_per_sec": 0, 00:16:39.161 "r_mbytes_per_sec": 0, 00:16:39.161 
"w_mbytes_per_sec": 0 00:16:39.161 }, 00:16:39.161 "claimed": true, 00:16:39.161 "claim_type": "exclusive_write", 00:16:39.161 "zoned": false, 00:16:39.161 "supported_io_types": { 00:16:39.161 "read": true, 00:16:39.161 "write": true, 00:16:39.161 "unmap": true, 00:16:39.161 "flush": true, 00:16:39.161 "reset": true, 00:16:39.161 "nvme_admin": false, 00:16:39.161 "nvme_io": false, 00:16:39.161 "nvme_io_md": false, 00:16:39.161 "write_zeroes": true, 00:16:39.161 "zcopy": true, 00:16:39.161 "get_zone_info": false, 00:16:39.161 "zone_management": false, 00:16:39.161 "zone_append": false, 00:16:39.161 "compare": false, 00:16:39.161 "compare_and_write": false, 00:16:39.161 "abort": true, 00:16:39.161 "seek_hole": false, 00:16:39.161 "seek_data": false, 00:16:39.161 "copy": true, 00:16:39.161 "nvme_iov_md": false 00:16:39.161 }, 00:16:39.161 "memory_domains": [ 00:16:39.161 { 00:16:39.161 "dma_device_id": "system", 00:16:39.161 "dma_device_type": 1 00:16:39.161 }, 00:16:39.161 { 00:16:39.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.161 "dma_device_type": 2 00:16:39.161 } 00:16:39.161 ], 00:16:39.161 "driver_specific": {} 00:16:39.161 } 00:16:39.161 ] 00:16:39.161 15:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:39.161 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:39.161 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:39.161 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.162 15:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.419 15:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.419 "name": "Existed_Raid", 00:16:39.419 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:39.419 "strip_size_kb": 0, 00:16:39.419 "state": "configuring", 00:16:39.419 "raid_level": "raid1", 00:16:39.419 "superblock": true, 00:16:39.419 "num_base_bdevs": 4, 00:16:39.419 "num_base_bdevs_discovered": 2, 00:16:39.419 "num_base_bdevs_operational": 4, 00:16:39.419 "base_bdevs_list": [ 00:16:39.419 { 00:16:39.419 "name": "BaseBdev1", 
00:16:39.419 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:39.419 "is_configured": true, 00:16:39.419 "data_offset": 2048, 00:16:39.419 "data_size": 63488 00:16:39.419 }, 00:16:39.419 { 00:16:39.419 "name": "BaseBdev2", 00:16:39.419 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:39.419 "is_configured": true, 00:16:39.419 "data_offset": 2048, 00:16:39.419 "data_size": 63488 00:16:39.419 }, 00:16:39.419 { 00:16:39.419 "name": "BaseBdev3", 00:16:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.419 "is_configured": false, 00:16:39.419 "data_offset": 0, 00:16:39.419 "data_size": 0 00:16:39.419 }, 00:16:39.419 { 00:16:39.419 "name": "BaseBdev4", 00:16:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.419 "is_configured": false, 00:16:39.419 "data_offset": 0, 00:16:39.419 "data_size": 0 00:16:39.419 } 00:16:39.419 ] 00:16:39.419 }' 00:16:39.419 15:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.419 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.677 15:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:39.934 [2024-07-12 15:05:05.654286] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.934 BaseBdev3 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.934 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.194 15:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.452 [ 00:16:40.452 { 00:16:40.452 "name": "BaseBdev3", 00:16:40.452 "aliases": [ 00:16:40.452 "1f3777fa-4060-11ef-b2a4-e9dca065e82e" 00:16:40.452 ], 00:16:40.452 "product_name": "Malloc disk", 00:16:40.452 "block_size": 512, 00:16:40.452 "num_blocks": 65536, 00:16:40.452 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:40.452 "assigned_rate_limits": { 00:16:40.452 "rw_ios_per_sec": 0, 00:16:40.452 "rw_mbytes_per_sec": 0, 00:16:40.452 "r_mbytes_per_sec": 0, 00:16:40.452 "w_mbytes_per_sec": 0 00:16:40.452 }, 00:16:40.452 "claimed": true, 00:16:40.452 "claim_type": "exclusive_write", 00:16:40.452 "zoned": false, 00:16:40.452 "supported_io_types": { 00:16:40.453 "read": true, 00:16:40.453 "write": true, 00:16:40.453 "unmap": true, 00:16:40.453 "flush": true, 00:16:40.453 "reset": true, 00:16:40.453 "nvme_admin": false, 00:16:40.453 "nvme_io": false, 00:16:40.453 "nvme_io_md": false, 00:16:40.453 "write_zeroes": true, 00:16:40.453 "zcopy": true, 00:16:40.453 "get_zone_info": false, 00:16:40.453 
"zone_management": false, 00:16:40.453 "zone_append": false, 00:16:40.453 "compare": false, 00:16:40.453 "compare_and_write": false, 00:16:40.453 "abort": true, 00:16:40.453 "seek_hole": false, 00:16:40.453 "seek_data": false, 00:16:40.453 "copy": true, 00:16:40.453 "nvme_iov_md": false 00:16:40.453 }, 00:16:40.453 "memory_domains": [ 00:16:40.453 { 00:16:40.453 "dma_device_id": "system", 00:16:40.453 "dma_device_type": 1 00:16:40.453 }, 00:16:40.453 { 00:16:40.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.453 "dma_device_type": 2 00:16:40.453 } 00:16:40.453 ], 00:16:40.453 "driver_specific": {} 00:16:40.453 } 00:16:40.453 ] 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.453 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.711 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.711 "name": "Existed_Raid", 00:16:40.711 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:40.711 "strip_size_kb": 0, 00:16:40.711 "state": "configuring", 00:16:40.711 "raid_level": "raid1", 00:16:40.711 "superblock": true, 00:16:40.711 "num_base_bdevs": 4, 00:16:40.711 "num_base_bdevs_discovered": 3, 00:16:40.711 "num_base_bdevs_operational": 4, 00:16:40.711 "base_bdevs_list": [ 00:16:40.711 { 00:16:40.711 "name": "BaseBdev1", 00:16:40.711 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:40.711 "is_configured": true, 00:16:40.711 "data_offset": 2048, 00:16:40.711 "data_size": 63488 00:16:40.711 }, 00:16:40.711 { 00:16:40.711 "name": "BaseBdev2", 00:16:40.711 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:40.711 "is_configured": true, 00:16:40.711 "data_offset": 2048, 00:16:40.711 "data_size": 63488 00:16:40.711 }, 00:16:40.711 { 00:16:40.711 "name": "BaseBdev3", 00:16:40.711 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:40.711 
"is_configured": true, 00:16:40.711 "data_offset": 2048, 00:16:40.711 "data_size": 63488 00:16:40.711 }, 00:16:40.711 { 00:16:40.711 "name": "BaseBdev4", 00:16:40.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.711 "is_configured": false, 00:16:40.711 "data_offset": 0, 00:16:40.711 "data_size": 0 00:16:40.711 } 00:16:40.711 ] 00:16:40.711 }' 00:16:40.711 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.711 15:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.278 15:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:41.278 [2024-07-12 15:05:07.054352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.278 [2024-07-12 15:05:07.054425] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27c126234a00 00:16:41.278 [2024-07-12 15:05:07.054432] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:41.278 [2024-07-12 15:05:07.054454] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27c126297e20 00:16:41.278 [2024-07-12 15:05:07.054510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27c126234a00 00:16:41.278 [2024-07-12 15:05:07.054514] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x27c126234a00 00:16:41.278 [2024-07-12 15:05:07.054536] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.278 BaseBdev4 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.278 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.536 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.800 [ 00:16:41.800 { 00:16:41.800 "name": "BaseBdev4", 00:16:41.800 "aliases": [ 00:16:41.800 "200d19ca-4060-11ef-b2a4-e9dca065e82e" 00:16:41.800 ], 00:16:41.800 "product_name": "Malloc disk", 00:16:41.800 "block_size": 512, 00:16:41.800 "num_blocks": 65536, 00:16:41.800 "uuid": "200d19ca-4060-11ef-b2a4-e9dca065e82e", 00:16:41.800 "assigned_rate_limits": { 00:16:41.800 "rw_ios_per_sec": 0, 00:16:41.800 "rw_mbytes_per_sec": 0, 00:16:41.800 "r_mbytes_per_sec": 0, 00:16:41.800 "w_mbytes_per_sec": 0 00:16:41.800 }, 00:16:41.800 "claimed": true, 00:16:41.800 "claim_type": "exclusive_write", 00:16:41.800 "zoned": false, 00:16:41.800 "supported_io_types": { 00:16:41.800 "read": true, 00:16:41.800 "write": true, 00:16:41.800 "unmap": true, 00:16:41.800 "flush": true, 
00:16:41.800 "reset": true, 00:16:41.800 "nvme_admin": false, 00:16:41.800 "nvme_io": false, 00:16:41.800 "nvme_io_md": false, 00:16:41.800 "write_zeroes": true, 00:16:41.800 "zcopy": true, 00:16:41.800 "get_zone_info": false, 00:16:41.800 "zone_management": false, 00:16:41.800 "zone_append": false, 00:16:41.800 "compare": false, 00:16:41.800 "compare_and_write": false, 00:16:41.800 "abort": true, 00:16:41.800 "seek_hole": false, 00:16:41.800 "seek_data": false, 00:16:41.800 "copy": true, 00:16:41.800 "nvme_iov_md": false 00:16:41.800 }, 00:16:41.800 "memory_domains": [ 00:16:41.800 { 00:16:41.800 "dma_device_id": "system", 00:16:41.800 "dma_device_type": 1 00:16:41.800 }, 00:16:41.800 { 00:16:41.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.800 "dma_device_type": 2 00:16:41.800 } 00:16:41.800 ], 00:16:41.800 "driver_specific": {} 00:16:41.800 } 00:16:41.800 ] 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.800 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.801 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.120 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.120 "name": "Existed_Raid", 00:16:42.120 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:42.120 "strip_size_kb": 0, 00:16:42.120 "state": "online", 00:16:42.120 "raid_level": "raid1", 00:16:42.120 "superblock": true, 00:16:42.120 "num_base_bdevs": 4, 00:16:42.120 "num_base_bdevs_discovered": 4, 00:16:42.120 "num_base_bdevs_operational": 4, 00:16:42.120 "base_bdevs_list": [ 00:16:42.120 { 00:16:42.120 "name": "BaseBdev1", 00:16:42.120 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 }, 00:16:42.120 { 00:16:42.120 "name": "BaseBdev2", 00:16:42.120 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:42.120 
"is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 }, 00:16:42.120 { 00:16:42.120 "name": "BaseBdev3", 00:16:42.120 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 }, 00:16:42.120 { 00:16:42.120 "name": "BaseBdev4", 00:16:42.120 "uuid": "200d19ca-4060-11ef-b2a4-e9dca065e82e", 00:16:42.120 "is_configured": true, 00:16:42.120 "data_offset": 2048, 00:16:42.120 "data_size": 63488 00:16:42.120 } 00:16:42.120 ] 00:16:42.120 }' 00:16:42.120 15:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.120 15:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:42.688 [2024-07-12 15:05:08.466336] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:42.688 "name": "Existed_Raid", 00:16:42.688 "aliases": [ 00:16:42.688 "1de10a68-4060-11ef-b2a4-e9dca065e82e" 00:16:42.688 ], 00:16:42.688 "product_name": "Raid Volume", 00:16:42.688 "block_size": 512, 00:16:42.688 "num_blocks": 63488, 00:16:42.688 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "assigned_rate_limits": { 00:16:42.688 "rw_ios_per_sec": 0, 00:16:42.688 "rw_mbytes_per_sec": 0, 00:16:42.688 "r_mbytes_per_sec": 0, 00:16:42.688 "w_mbytes_per_sec": 0 00:16:42.688 }, 00:16:42.688 "claimed": false, 00:16:42.688 "zoned": false, 00:16:42.688 "supported_io_types": { 00:16:42.688 "read": true, 00:16:42.688 "write": true, 00:16:42.688 "unmap": false, 00:16:42.688 "flush": false, 00:16:42.688 "reset": true, 00:16:42.688 "nvme_admin": false, 00:16:42.688 "nvme_io": false, 00:16:42.688 "nvme_io_md": false, 00:16:42.688 "write_zeroes": true, 00:16:42.688 "zcopy": false, 00:16:42.688 "get_zone_info": false, 00:16:42.688 "zone_management": false, 00:16:42.688 "zone_append": false, 00:16:42.688 "compare": false, 00:16:42.688 "compare_and_write": false, 00:16:42.688 "abort": false, 00:16:42.688 "seek_hole": false, 00:16:42.688 "seek_data": false, 00:16:42.688 "copy": false, 00:16:42.688 "nvme_iov_md": false 00:16:42.688 }, 00:16:42.688 "memory_domains": [ 00:16:42.688 { 00:16:42.688 "dma_device_id": "system", 00:16:42.688 "dma_device_type": 1 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.688 "dma_device_type": 2 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 
"dma_device_id": "system", 00:16:42.688 "dma_device_type": 1 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.688 "dma_device_type": 2 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "system", 00:16:42.688 "dma_device_type": 1 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.688 "dma_device_type": 2 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "system", 00:16:42.688 "dma_device_type": 1 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.688 "dma_device_type": 2 00:16:42.688 } 00:16:42.688 ], 00:16:42.688 "driver_specific": { 00:16:42.688 "raid": { 00:16:42.688 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "strip_size_kb": 0, 00:16:42.688 "state": "online", 00:16:42.688 "raid_level": "raid1", 00:16:42.688 "superblock": true, 00:16:42.688 "num_base_bdevs": 4, 00:16:42.688 "num_base_bdevs_discovered": 4, 00:16:42.688 "num_base_bdevs_operational": 4, 00:16:42.688 "base_bdevs_list": [ 00:16:42.688 { 00:16:42.688 "name": "BaseBdev1", 00:16:42.688 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "is_configured": true, 00:16:42.688 "data_offset": 2048, 00:16:42.688 "data_size": 63488 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "name": "BaseBdev2", 00:16:42.688 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "is_configured": true, 00:16:42.688 "data_offset": 2048, 00:16:42.688 "data_size": 63488 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "name": "BaseBdev3", 00:16:42.688 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "is_configured": true, 00:16:42.688 "data_offset": 2048, 00:16:42.688 "data_size": 63488 00:16:42.688 }, 00:16:42.688 { 00:16:42.688 "name": "BaseBdev4", 00:16:42.688 "uuid": "200d19ca-4060-11ef-b2a4-e9dca065e82e", 00:16:42.688 "is_configured": true, 00:16:42.688 "data_offset": 2048, 00:16:42.688 "data_size": 63488 00:16:42.688 } 00:16:42.688 ] 00:16:42.688 } 00:16:42.688 } 00:16:42.688 }' 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:42.688 BaseBdev2 00:16:42.688 BaseBdev3 00:16:42.688 BaseBdev4' 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:42.688 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.947 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.947 "name": "BaseBdev1", 00:16:42.947 "aliases": [ 00:16:42.947 "1cd2a390-4060-11ef-b2a4-e9dca065e82e" 00:16:42.947 ], 00:16:42.947 "product_name": "Malloc disk", 00:16:42.947 "block_size": 512, 00:16:42.947 "num_blocks": 65536, 00:16:42.947 "uuid": "1cd2a390-4060-11ef-b2a4-e9dca065e82e", 00:16:42.947 "assigned_rate_limits": { 00:16:42.947 "rw_ios_per_sec": 0, 00:16:42.947 "rw_mbytes_per_sec": 0, 00:16:42.947 "r_mbytes_per_sec": 0, 00:16:42.947 "w_mbytes_per_sec": 0 00:16:42.947 }, 00:16:42.947 "claimed": true, 00:16:42.947 "claim_type": "exclusive_write", 00:16:42.947 "zoned": false, 00:16:42.947 
"supported_io_types": { 00:16:42.947 "read": true, 00:16:42.947 "write": true, 00:16:42.947 "unmap": true, 00:16:42.947 "flush": true, 00:16:42.947 "reset": true, 00:16:42.947 "nvme_admin": false, 00:16:42.947 "nvme_io": false, 00:16:42.947 "nvme_io_md": false, 00:16:42.947 "write_zeroes": true, 00:16:42.947 "zcopy": true, 00:16:42.947 "get_zone_info": false, 00:16:42.947 "zone_management": false, 00:16:42.947 "zone_append": false, 00:16:42.947 "compare": false, 00:16:42.947 "compare_and_write": false, 00:16:42.947 "abort": true, 00:16:42.947 "seek_hole": false, 00:16:42.947 "seek_data": false, 00:16:42.947 "copy": true, 00:16:42.947 "nvme_iov_md": false 00:16:42.947 }, 00:16:42.947 "memory_domains": [ 00:16:42.947 { 00:16:42.947 "dma_device_id": "system", 00:16:42.947 "dma_device_type": 1 00:16:42.947 }, 00:16:42.947 { 00:16:42.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.947 "dma_device_type": 2 00:16:42.947 } 00:16:42.947 ], 00:16:42.947 "driver_specific": {} 00:16:42.947 }' 00:16:42.947 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:43.205 15:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.477 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.477 "name": "BaseBdev2", 00:16:43.477 "aliases": [ 00:16:43.477 "1e6d6df4-4060-11ef-b2a4-e9dca065e82e" 00:16:43.477 ], 00:16:43.477 "product_name": "Malloc disk", 00:16:43.477 "block_size": 512, 00:16:43.477 "num_blocks": 65536, 00:16:43.477 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:43.477 "assigned_rate_limits": { 00:16:43.477 "rw_ios_per_sec": 0, 00:16:43.477 "rw_mbytes_per_sec": 0, 00:16:43.477 "r_mbytes_per_sec": 0, 00:16:43.477 "w_mbytes_per_sec": 0 00:16:43.477 }, 00:16:43.477 "claimed": true, 00:16:43.477 "claim_type": "exclusive_write", 00:16:43.477 "zoned": false, 00:16:43.477 "supported_io_types": { 00:16:43.477 "read": true, 00:16:43.477 "write": true, 00:16:43.477 "unmap": true, 00:16:43.477 "flush": true, 00:16:43.477 
"reset": true, 00:16:43.477 "nvme_admin": false, 00:16:43.477 "nvme_io": false, 00:16:43.477 "nvme_io_md": false, 00:16:43.477 "write_zeroes": true, 00:16:43.477 "zcopy": true, 00:16:43.477 "get_zone_info": false, 00:16:43.477 "zone_management": false, 00:16:43.477 "zone_append": false, 00:16:43.477 "compare": false, 00:16:43.477 "compare_and_write": false, 00:16:43.477 "abort": true, 00:16:43.477 "seek_hole": false, 00:16:43.477 "seek_data": false, 00:16:43.477 "copy": true, 00:16:43.478 "nvme_iov_md": false 00:16:43.478 }, 00:16:43.478 "memory_domains": [ 00:16:43.478 { 00:16:43.478 "dma_device_id": "system", 00:16:43.478 "dma_device_type": 1 00:16:43.478 }, 00:16:43.478 { 00:16:43.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.478 "dma_device_type": 2 00:16:43.478 } 00:16:43.478 ], 00:16:43.478 "driver_specific": {} 00:16:43.478 }' 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.478 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.479 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.744 "name": "BaseBdev3", 00:16:43.744 "aliases": [ 00:16:43.744 "1f3777fa-4060-11ef-b2a4-e9dca065e82e" 00:16:43.744 ], 00:16:43.744 "product_name": "Malloc disk", 00:16:43.744 "block_size": 512, 00:16:43.744 "num_blocks": 65536, 00:16:43.744 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:43.744 "assigned_rate_limits": { 00:16:43.744 "rw_ios_per_sec": 0, 00:16:43.744 "rw_mbytes_per_sec": 0, 00:16:43.744 "r_mbytes_per_sec": 0, 00:16:43.744 "w_mbytes_per_sec": 0 00:16:43.744 }, 00:16:43.744 "claimed": true, 00:16:43.744 "claim_type": "exclusive_write", 00:16:43.744 "zoned": false, 00:16:43.744 "supported_io_types": { 00:16:43.744 "read": true, 00:16:43.744 "write": true, 00:16:43.744 "unmap": true, 00:16:43.744 "flush": true, 00:16:43.744 "reset": true, 00:16:43.744 "nvme_admin": false, 00:16:43.744 "nvme_io": false, 00:16:43.744 "nvme_io_md": false, 00:16:43.744 "write_zeroes": true, 
00:16:43.744 "zcopy": true, 00:16:43.744 "get_zone_info": false, 00:16:43.744 "zone_management": false, 00:16:43.744 "zone_append": false, 00:16:43.744 "compare": false, 00:16:43.744 "compare_and_write": false, 00:16:43.744 "abort": true, 00:16:43.744 "seek_hole": false, 00:16:43.744 "seek_data": false, 00:16:43.744 "copy": true, 00:16:43.744 "nvme_iov_md": false 00:16:43.744 }, 00:16:43.744 "memory_domains": [ 00:16:43.744 { 00:16:43.744 "dma_device_id": "system", 00:16:43.744 "dma_device_type": 1 00:16:43.744 }, 00:16:43.744 { 00:16:43.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.744 "dma_device_type": 2 00:16:43.744 } 00:16:43.744 ], 00:16:43.744 "driver_specific": {} 00:16:43.744 }' 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:43.744 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.003 "name": "BaseBdev4", 00:16:44.003 "aliases": [ 00:16:44.003 "200d19ca-4060-11ef-b2a4-e9dca065e82e" 00:16:44.003 ], 00:16:44.003 "product_name": "Malloc disk", 00:16:44.003 "block_size": 512, 00:16:44.003 "num_blocks": 65536, 00:16:44.003 "uuid": "200d19ca-4060-11ef-b2a4-e9dca065e82e", 00:16:44.003 "assigned_rate_limits": { 00:16:44.003 "rw_ios_per_sec": 0, 00:16:44.003 "rw_mbytes_per_sec": 0, 00:16:44.003 "r_mbytes_per_sec": 0, 00:16:44.003 "w_mbytes_per_sec": 0 00:16:44.003 }, 00:16:44.003 "claimed": true, 00:16:44.003 "claim_type": "exclusive_write", 00:16:44.003 "zoned": false, 00:16:44.003 "supported_io_types": { 00:16:44.003 "read": true, 00:16:44.003 "write": true, 00:16:44.003 "unmap": true, 00:16:44.003 "flush": true, 00:16:44.003 "reset": true, 00:16:44.003 "nvme_admin": false, 00:16:44.003 "nvme_io": false, 00:16:44.003 "nvme_io_md": false, 00:16:44.003 "write_zeroes": true, 00:16:44.003 "zcopy": true, 00:16:44.003 "get_zone_info": false, 00:16:44.003 "zone_management": false, 00:16:44.003 "zone_append": false, 00:16:44.003 
"compare": false, 00:16:44.003 "compare_and_write": false, 00:16:44.003 "abort": true, 00:16:44.003 "seek_hole": false, 00:16:44.003 "seek_data": false, 00:16:44.003 "copy": true, 00:16:44.003 "nvme_iov_md": false 00:16:44.003 }, 00:16:44.003 "memory_domains": [ 00:16:44.003 { 00:16:44.003 "dma_device_id": "system", 00:16:44.003 "dma_device_type": 1 00:16:44.003 }, 00:16:44.003 { 00:16:44.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.003 "dma_device_type": 2 00:16:44.003 } 00:16:44.003 ], 00:16:44.003 "driver_specific": {} 00:16:44.003 }' 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.003 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.261 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.261 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.261 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.261 15:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:44.520 [2024-07-12 15:05:10.114390] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.520 15:05:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.520 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.778 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.778 "name": "Existed_Raid", 00:16:44.778 "uuid": "1de10a68-4060-11ef-b2a4-e9dca065e82e", 00:16:44.778 "strip_size_kb": 0, 00:16:44.778 "state": "online", 00:16:44.778 "raid_level": "raid1", 00:16:44.778 "superblock": true, 00:16:44.778 "num_base_bdevs": 4, 00:16:44.778 "num_base_bdevs_discovered": 3, 00:16:44.778 "num_base_bdevs_operational": 3, 00:16:44.779 "base_bdevs_list": [ 00:16:44.779 { 00:16:44.779 "name": null, 00:16:44.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.779 "is_configured": false, 00:16:44.779 "data_offset": 2048, 00:16:44.779 "data_size": 63488 00:16:44.779 }, 00:16:44.779 { 00:16:44.779 "name": "BaseBdev2", 00:16:44.779 "uuid": "1e6d6df4-4060-11ef-b2a4-e9dca065e82e", 00:16:44.779 "is_configured": true, 00:16:44.779 "data_offset": 2048, 00:16:44.779 "data_size": 63488 00:16:44.779 }, 00:16:44.779 { 00:16:44.779 "name": "BaseBdev3", 00:16:44.779 "uuid": "1f3777fa-4060-11ef-b2a4-e9dca065e82e", 00:16:44.779 "is_configured": true, 00:16:44.779 "data_offset": 2048, 00:16:44.779 "data_size": 63488 00:16:44.779 }, 00:16:44.779 { 00:16:44.779 "name": "BaseBdev4", 00:16:44.779 "uuid": "200d19ca-4060-11ef-b2a4-e9dca065e82e", 00:16:44.779 "is_configured": true, 00:16:44.779 "data_offset": 2048, 00:16:44.779 "data_size": 63488 00:16:44.779 } 00:16:44.779 ] 00:16:44.779 }' 00:16:44.779 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.779 15:05:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.038 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:45.038 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:45.038 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.038 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:45.314 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:45.314 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.314 15:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:45.573 [2024-07-12 15:05:11.196403] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.573 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:45.573 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 
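The trace above is bdev_raid.sh's removal loop: it deletes one malloc base bdev at a time with bdev_malloc_delete, then re-reads the array through bdev_raid_get_bdevs and checks that the raid1 volume is still reported as Existed_Raid in the online state. Below is a minimal stand-alone sketch of that same check; it assumes an SPDK target is already running with its RPC socket at /var/tmp/spdk-raid.sock and a raid1 bdev named Existed_Raid built from malloc members such as BaseBdev2. The script is illustrative only and is not part of the test suite.

#!/usr/bin/env bash
# Illustrative sketch: remove one member of a raid1 bdev and confirm the
# volume stays online. Socket path and bdev names follow the trace above.
set -euo pipefail

rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

# Drop one member; raid1 keeps redundancy, so the array should remain online.
"${rpc[@]}" bdev_malloc_delete BaseBdev2

info=$("${rpc[@]}" bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

echo "Existed_Raid: state=$state, base bdevs discovered=$discovered"
# Fail (non-zero exit, via set -e) if the raid1 volume did not stay online.
[[ "$state" == "online" ]]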
00:16:45.573 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:45.573 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.831 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:45.831 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.831 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:46.090 [2024-07-12 15:05:11.782475] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.090 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:46.090 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:46.090 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.090 15:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:46.348 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:46.348 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.348 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:46.607 [2024-07-12 15:05:12.320420] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:46.607 [2024-07-12 15:05:12.320476] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.607 [2024-07-12 15:05:12.326724] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.607 [2024-07-12 15:05:12.326744] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.607 [2024-07-12 15:05:12.326749] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27c126234a00 name Existed_Raid, state offline 00:16:46.607 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:46.607 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:46.607 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.607 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.866 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:46.866 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:46.866 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:46.866 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:46.866 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:46.866 15:05:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:47.126 BaseBdev2 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:47.126 15:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:47.384 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:47.952 [ 00:16:47.952 { 00:16:47.952 "name": "BaseBdev2", 00:16:47.952 "aliases": [ 00:16:47.952 "23840393-4060-11ef-b2a4-e9dca065e82e" 00:16:47.952 ], 00:16:47.952 "product_name": "Malloc disk", 00:16:47.952 "block_size": 512, 00:16:47.952 "num_blocks": 65536, 00:16:47.952 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:47.952 "assigned_rate_limits": { 00:16:47.952 "rw_ios_per_sec": 0, 00:16:47.952 "rw_mbytes_per_sec": 0, 00:16:47.952 "r_mbytes_per_sec": 0, 00:16:47.952 "w_mbytes_per_sec": 0 00:16:47.952 }, 00:16:47.952 "claimed": false, 00:16:47.952 "zoned": false, 00:16:47.952 "supported_io_types": { 00:16:47.952 "read": true, 00:16:47.952 "write": true, 00:16:47.952 "unmap": true, 00:16:47.952 "flush": true, 00:16:47.952 "reset": true, 00:16:47.952 "nvme_admin": false, 00:16:47.952 "nvme_io": false, 00:16:47.952 "nvme_io_md": false, 00:16:47.952 "write_zeroes": true, 00:16:47.952 "zcopy": true, 00:16:47.952 "get_zone_info": false, 00:16:47.952 "zone_management": false, 00:16:47.952 "zone_append": false, 00:16:47.952 "compare": false, 00:16:47.952 "compare_and_write": false, 00:16:47.952 "abort": true, 00:16:47.952 "seek_hole": false, 00:16:47.952 "seek_data": false, 00:16:47.952 "copy": true, 00:16:47.952 "nvme_iov_md": false 00:16:47.952 }, 00:16:47.952 "memory_domains": [ 00:16:47.952 { 00:16:47.952 "dma_device_id": "system", 00:16:47.952 "dma_device_type": 1 00:16:47.952 }, 00:16:47.952 { 00:16:47.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.952 "dma_device_type": 2 00:16:47.952 } 00:16:47.952 ], 00:16:47.952 "driver_specific": {} 00:16:47.952 } 00:16:47.952 ] 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.952 BaseBdev3 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:47.952 15:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.519 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:48.519 [ 00:16:48.519 { 00:16:48.519 "name": "BaseBdev3", 00:16:48.519 "aliases": [ 00:16:48.519 "240b8568-4060-11ef-b2a4-e9dca065e82e" 00:16:48.519 ], 00:16:48.519 "product_name": "Malloc disk", 00:16:48.519 "block_size": 512, 00:16:48.519 "num_blocks": 65536, 00:16:48.519 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:48.519 "assigned_rate_limits": { 00:16:48.519 "rw_ios_per_sec": 0, 00:16:48.519 "rw_mbytes_per_sec": 0, 00:16:48.519 "r_mbytes_per_sec": 0, 00:16:48.519 "w_mbytes_per_sec": 0 00:16:48.519 }, 00:16:48.519 "claimed": false, 00:16:48.519 "zoned": false, 00:16:48.519 "supported_io_types": { 00:16:48.519 "read": true, 00:16:48.519 "write": true, 00:16:48.519 "unmap": true, 00:16:48.519 "flush": true, 00:16:48.519 "reset": true, 00:16:48.519 "nvme_admin": false, 00:16:48.519 "nvme_io": false, 00:16:48.519 "nvme_io_md": false, 00:16:48.519 "write_zeroes": true, 00:16:48.519 "zcopy": true, 00:16:48.519 "get_zone_info": false, 00:16:48.519 "zone_management": false, 00:16:48.519 "zone_append": false, 00:16:48.519 "compare": false, 00:16:48.519 "compare_and_write": false, 00:16:48.519 "abort": true, 00:16:48.519 "seek_hole": false, 00:16:48.519 "seek_data": false, 00:16:48.519 "copy": true, 00:16:48.519 "nvme_iov_md": false 00:16:48.519 }, 00:16:48.519 "memory_domains": [ 00:16:48.519 { 00:16:48.519 "dma_device_id": "system", 00:16:48.519 "dma_device_type": 1 00:16:48.519 }, 00:16:48.519 { 00:16:48.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.519 "dma_device_type": 2 00:16:48.519 } 00:16:48.519 ], 00:16:48.519 "driver_specific": {} 00:16:48.519 } 00:16:48.519 ] 00:16:48.519 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:48.519 15:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:48.519 15:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:48.519 15:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:48.779 BaseBdev4 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.779 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.038 15:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:49.605 [ 00:16:49.605 { 00:16:49.605 "name": "BaseBdev4", 00:16:49.605 "aliases": [ 00:16:49.605 "2484fcce-4060-11ef-b2a4-e9dca065e82e" 00:16:49.605 ], 00:16:49.605 "product_name": "Malloc disk", 00:16:49.605 "block_size": 512, 00:16:49.605 "num_blocks": 65536, 00:16:49.605 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:49.605 "assigned_rate_limits": { 00:16:49.605 "rw_ios_per_sec": 0, 00:16:49.605 "rw_mbytes_per_sec": 0, 00:16:49.605 "r_mbytes_per_sec": 0, 00:16:49.605 "w_mbytes_per_sec": 0 00:16:49.605 }, 00:16:49.605 "claimed": false, 00:16:49.605 "zoned": false, 00:16:49.605 "supported_io_types": { 00:16:49.605 "read": true, 00:16:49.605 "write": true, 00:16:49.605 "unmap": true, 00:16:49.605 "flush": true, 00:16:49.605 "reset": true, 00:16:49.605 "nvme_admin": false, 00:16:49.605 "nvme_io": false, 00:16:49.605 "nvme_io_md": false, 00:16:49.605 "write_zeroes": true, 00:16:49.605 "zcopy": true, 00:16:49.605 "get_zone_info": false, 00:16:49.605 "zone_management": false, 00:16:49.605 "zone_append": false, 00:16:49.605 "compare": false, 00:16:49.605 "compare_and_write": false, 00:16:49.605 "abort": true, 00:16:49.605 "seek_hole": false, 00:16:49.605 "seek_data": false, 00:16:49.605 "copy": true, 00:16:49.605 "nvme_iov_md": false 00:16:49.605 }, 00:16:49.605 "memory_domains": [ 00:16:49.605 { 00:16:49.605 "dma_device_id": "system", 00:16:49.605 "dma_device_type": 1 00:16:49.605 }, 00:16:49.605 { 00:16:49.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.605 "dma_device_type": 2 00:16:49.605 } 00:16:49.605 ], 00:16:49.605 "driver_specific": {} 00:16:49.605 } 00:16:49.605 ] 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:49.605 [2024-07-12 15:05:15.402836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.605 [2024-07-12 15:05:15.402889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.605 [2024-07-12 15:05:15.402898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.605 [2024-07-12 15:05:15.403458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.605 [2024-07-12 15:05:15.403477] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.605 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.864 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.864 "name": "Existed_Raid", 00:16:49.864 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:49.864 "strip_size_kb": 0, 00:16:49.864 "state": "configuring", 00:16:49.864 "raid_level": "raid1", 00:16:49.864 "superblock": true, 00:16:49.864 "num_base_bdevs": 4, 00:16:49.864 "num_base_bdevs_discovered": 3, 00:16:49.864 "num_base_bdevs_operational": 4, 00:16:49.864 "base_bdevs_list": [ 00:16:49.864 { 00:16:49.864 "name": "BaseBdev1", 00:16:49.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.864 "is_configured": false, 00:16:49.864 "data_offset": 0, 00:16:49.864 "data_size": 0 00:16:49.864 }, 00:16:49.864 { 00:16:49.864 "name": "BaseBdev2", 00:16:49.864 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:49.864 "is_configured": true, 00:16:49.864 "data_offset": 2048, 00:16:49.864 "data_size": 63488 00:16:49.864 }, 00:16:49.864 { 00:16:49.864 "name": "BaseBdev3", 00:16:49.864 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:49.864 "is_configured": true, 00:16:49.864 "data_offset": 2048, 00:16:49.864 "data_size": 63488 00:16:49.864 }, 00:16:49.864 { 00:16:49.864 "name": "BaseBdev4", 00:16:49.864 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:49.864 "is_configured": true, 00:16:49.864 "data_offset": 2048, 00:16:49.864 "data_size": 63488 00:16:49.864 } 00:16:49.864 ] 00:16:49.864 }' 00:16:49.864 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.864 15:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.432 15:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:50.432 [2024-07-12 15:05:16.250869] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
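In the step just traced, the harness detaches BaseBdev2 from the still-configuring array with bdev_raid_remove_base_bdev and then expects the slot it occupied in base_bdevs_list to revert to an unconfigured placeholder (name null, is_configured false), with num_base_bdevs_discovered dropping to 2 while num_base_bdevs_operational stays 4. The short sketch below performs the same kind of query; it relies only on the RPC commands and JSON field names visible in this trace and on the same assumed socket path, and is not the test's own implementation.

#!/usr/bin/env bash
# Illustrative sketch: detach a member from a raid bdev that is still
# configuring, then inspect the slot it used to occupy.
set -euo pipefail

rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

"${rpc[@]}" bdev_raid_remove_base_bdev BaseBdev2

# Slot 1 should now show "name": null and "is_configured": false.
"${rpc[@]}" bdev_raid_get_bdevs all |
    jq '.[] | select(.name == "Existed_Raid")
        | {state, num_base_bdevs_discovered, num_base_bdevs_operational,
           slot1: .base_bdevs_list[1]}'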
00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.691 "name": "Existed_Raid", 00:16:50.691 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:50.691 "strip_size_kb": 0, 00:16:50.691 "state": "configuring", 00:16:50.691 "raid_level": "raid1", 00:16:50.691 "superblock": true, 00:16:50.691 "num_base_bdevs": 4, 00:16:50.691 "num_base_bdevs_discovered": 2, 00:16:50.691 "num_base_bdevs_operational": 4, 00:16:50.691 "base_bdevs_list": [ 00:16:50.691 { 00:16:50.691 "name": "BaseBdev1", 00:16:50.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.691 "is_configured": false, 00:16:50.691 "data_offset": 0, 00:16:50.691 "data_size": 0 00:16:50.691 }, 00:16:50.691 { 00:16:50.691 "name": null, 00:16:50.691 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:50.691 "is_configured": false, 00:16:50.691 "data_offset": 2048, 00:16:50.691 "data_size": 63488 00:16:50.691 }, 00:16:50.691 { 00:16:50.691 "name": "BaseBdev3", 00:16:50.691 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:50.691 "is_configured": true, 00:16:50.691 "data_offset": 2048, 00:16:50.691 "data_size": 63488 00:16:50.691 }, 00:16:50.691 { 00:16:50.691 "name": "BaseBdev4", 00:16:50.691 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:50.691 "is_configured": true, 00:16:50.691 "data_offset": 2048, 00:16:50.691 "data_size": 63488 00:16:50.691 } 00:16:50.691 ] 00:16:50.691 }' 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.691 15:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.259 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.259 15:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:51.517 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:51.517 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.776 [2024-07-12 15:05:17.407064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.776 BaseBdev1 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.776 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.034 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.293 [ 00:16:52.293 { 00:16:52.293 "name": "BaseBdev1", 00:16:52.293 "aliases": [ 00:16:52.293 "2638ccc9-4060-11ef-b2a4-e9dca065e82e" 00:16:52.293 ], 00:16:52.293 "product_name": "Malloc disk", 00:16:52.293 "block_size": 512, 00:16:52.293 "num_blocks": 65536, 00:16:52.293 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:52.293 "assigned_rate_limits": { 00:16:52.293 "rw_ios_per_sec": 0, 00:16:52.293 "rw_mbytes_per_sec": 0, 00:16:52.293 "r_mbytes_per_sec": 0, 00:16:52.293 "w_mbytes_per_sec": 0 00:16:52.293 }, 00:16:52.293 "claimed": true, 00:16:52.293 "claim_type": "exclusive_write", 00:16:52.293 "zoned": false, 00:16:52.293 "supported_io_types": { 00:16:52.293 "read": true, 00:16:52.293 "write": true, 00:16:52.293 "unmap": true, 00:16:52.293 "flush": true, 00:16:52.293 "reset": true, 00:16:52.293 "nvme_admin": false, 00:16:52.293 "nvme_io": false, 00:16:52.293 "nvme_io_md": false, 00:16:52.293 "write_zeroes": true, 00:16:52.293 "zcopy": true, 00:16:52.293 "get_zone_info": false, 00:16:52.293 "zone_management": false, 00:16:52.293 "zone_append": false, 00:16:52.293 "compare": false, 00:16:52.293 "compare_and_write": false, 00:16:52.293 "abort": true, 00:16:52.293 "seek_hole": false, 00:16:52.293 "seek_data": false, 00:16:52.293 "copy": true, 00:16:52.293 "nvme_iov_md": false 00:16:52.293 }, 00:16:52.293 "memory_domains": [ 00:16:52.293 { 00:16:52.293 "dma_device_id": "system", 00:16:52.293 "dma_device_type": 1 00:16:52.293 }, 00:16:52.293 { 00:16:52.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.293 "dma_device_type": 2 00:16:52.293 } 00:16:52.293 ], 00:16:52.293 "driver_specific": {} 00:16:52.293 } 00:16:52.293 ] 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.293 15:05:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.293 15:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.551 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.551 "name": "Existed_Raid", 00:16:52.551 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:52.551 "strip_size_kb": 0, 00:16:52.551 "state": "configuring", 00:16:52.551 "raid_level": "raid1", 00:16:52.551 "superblock": true, 00:16:52.551 "num_base_bdevs": 4, 00:16:52.551 "num_base_bdevs_discovered": 3, 00:16:52.551 "num_base_bdevs_operational": 4, 00:16:52.551 "base_bdevs_list": [ 00:16:52.551 { 00:16:52.551 "name": "BaseBdev1", 00:16:52.551 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:52.551 "is_configured": true, 00:16:52.551 "data_offset": 2048, 00:16:52.551 "data_size": 63488 00:16:52.551 }, 00:16:52.551 { 00:16:52.551 "name": null, 00:16:52.551 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:52.551 "is_configured": false, 00:16:52.551 "data_offset": 2048, 00:16:52.551 "data_size": 63488 00:16:52.551 }, 00:16:52.551 { 00:16:52.551 "name": "BaseBdev3", 00:16:52.551 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:52.551 "is_configured": true, 00:16:52.551 "data_offset": 2048, 00:16:52.551 "data_size": 63488 00:16:52.551 }, 00:16:52.551 { 00:16:52.551 "name": "BaseBdev4", 00:16:52.551 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:52.551 "is_configured": true, 00:16:52.551 "data_offset": 2048, 00:16:52.551 "data_size": 63488 00:16:52.551 } 00:16:52.551 ] 00:16:52.551 }' 00:16:52.551 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.551 15:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.809 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:52.809 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.067 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:53.067 15:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:53.326 [2024-07-12 15:05:19.031004] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.326 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.585 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.585 "name": "Existed_Raid", 00:16:53.585 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:53.585 "strip_size_kb": 0, 00:16:53.585 "state": "configuring", 00:16:53.585 "raid_level": "raid1", 00:16:53.585 "superblock": true, 00:16:53.585 "num_base_bdevs": 4, 00:16:53.585 "num_base_bdevs_discovered": 2, 00:16:53.585 "num_base_bdevs_operational": 4, 00:16:53.585 "base_bdevs_list": [ 00:16:53.585 { 00:16:53.585 "name": "BaseBdev1", 00:16:53.585 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:53.585 "is_configured": true, 00:16:53.585 "data_offset": 2048, 00:16:53.585 "data_size": 63488 00:16:53.585 }, 00:16:53.585 { 00:16:53.585 "name": null, 00:16:53.585 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:53.585 "is_configured": false, 00:16:53.585 "data_offset": 2048, 00:16:53.585 "data_size": 63488 00:16:53.585 }, 00:16:53.585 { 00:16:53.585 "name": null, 00:16:53.585 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:53.585 "is_configured": false, 00:16:53.585 "data_offset": 2048, 00:16:53.585 "data_size": 63488 00:16:53.585 }, 00:16:53.585 { 00:16:53.585 "name": "BaseBdev4", 00:16:53.585 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:53.585 "is_configured": true, 00:16:53.585 "data_offset": 2048, 00:16:53.585 "data_size": 63488 00:16:53.585 } 00:16:53.585 ] 00:16:53.585 }' 00:16:53.585 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.585 15:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.843 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.843 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.102 15:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:54.102 15:05:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:54.361 [2024-07-12 15:05:20.143064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.361 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.620 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.620 "name": "Existed_Raid", 00:16:54.620 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:54.620 "strip_size_kb": 0, 00:16:54.620 "state": "configuring", 00:16:54.620 "raid_level": "raid1", 00:16:54.620 "superblock": true, 00:16:54.620 "num_base_bdevs": 4, 00:16:54.620 "num_base_bdevs_discovered": 3, 00:16:54.620 "num_base_bdevs_operational": 4, 00:16:54.620 "base_bdevs_list": [ 00:16:54.620 { 00:16:54.620 "name": "BaseBdev1", 00:16:54.620 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:54.620 "is_configured": true, 00:16:54.620 "data_offset": 2048, 00:16:54.620 "data_size": 63488 00:16:54.620 }, 00:16:54.620 { 00:16:54.620 "name": null, 00:16:54.620 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:54.620 "is_configured": false, 00:16:54.620 "data_offset": 2048, 00:16:54.620 "data_size": 63488 00:16:54.620 }, 00:16:54.620 { 00:16:54.620 "name": "BaseBdev3", 00:16:54.620 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:54.620 "is_configured": true, 00:16:54.620 "data_offset": 2048, 00:16:54.620 "data_size": 63488 00:16:54.620 }, 00:16:54.620 { 00:16:54.620 "name": "BaseBdev4", 00:16:54.620 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:54.620 "is_configured": true, 00:16:54.620 "data_offset": 2048, 00:16:54.620 "data_size": 63488 00:16:54.620 } 00:16:54.620 ] 00:16:54.620 }' 00:16:54.620 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.620 15:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.187 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.187 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.187 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:55.187 15:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.445 [2024-07-12 15:05:21.243158] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.445 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.703 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.703 "name": "Existed_Raid", 00:16:55.703 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:55.703 "strip_size_kb": 0, 00:16:55.703 "state": "configuring", 00:16:55.703 "raid_level": "raid1", 00:16:55.703 "superblock": true, 00:16:55.703 "num_base_bdevs": 4, 00:16:55.703 "num_base_bdevs_discovered": 2, 00:16:55.703 "num_base_bdevs_operational": 4, 00:16:55.703 "base_bdevs_list": [ 00:16:55.703 { 00:16:55.703 "name": null, 00:16:55.703 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:55.703 "is_configured": false, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 }, 00:16:55.703 { 00:16:55.703 "name": null, 00:16:55.703 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:55.703 "is_configured": false, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 }, 00:16:55.703 { 00:16:55.703 "name": "BaseBdev3", 00:16:55.703 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:55.703 "is_configured": true, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 }, 00:16:55.703 { 00:16:55.703 "name": "BaseBdev4", 00:16:55.703 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:55.703 "is_configured": true, 00:16:55.703 "data_offset": 2048, 00:16:55.703 "data_size": 63488 00:16:55.703 } 
00:16:55.703 ] 00:16:55.703 }' 00:16:55.703 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.703 15:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.271 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.271 15:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:56.530 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:56.530 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:56.788 [2024-07-12 15:05:22.389219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.788 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.046 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.046 "name": "Existed_Raid", 00:16:57.046 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:57.046 "strip_size_kb": 0, 00:16:57.046 "state": "configuring", 00:16:57.046 "raid_level": "raid1", 00:16:57.047 "superblock": true, 00:16:57.047 "num_base_bdevs": 4, 00:16:57.047 "num_base_bdevs_discovered": 3, 00:16:57.047 "num_base_bdevs_operational": 4, 00:16:57.047 "base_bdevs_list": [ 00:16:57.047 { 00:16:57.047 "name": null, 00:16:57.047 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:57.047 "is_configured": false, 00:16:57.047 "data_offset": 2048, 00:16:57.047 "data_size": 63488 00:16:57.047 }, 00:16:57.047 { 00:16:57.047 "name": "BaseBdev2", 00:16:57.047 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:57.047 "is_configured": true, 00:16:57.047 "data_offset": 2048, 00:16:57.047 "data_size": 63488 00:16:57.047 }, 00:16:57.047 { 00:16:57.047 "name": "BaseBdev3", 00:16:57.047 "uuid": 
"240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:57.047 "is_configured": true, 00:16:57.047 "data_offset": 2048, 00:16:57.047 "data_size": 63488 00:16:57.047 }, 00:16:57.047 { 00:16:57.047 "name": "BaseBdev4", 00:16:57.047 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:57.047 "is_configured": true, 00:16:57.047 "data_offset": 2048, 00:16:57.047 "data_size": 63488 00:16:57.047 } 00:16:57.047 ] 00:16:57.047 }' 00:16:57.047 15:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.047 15:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.304 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:57.562 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:57.562 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.562 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:57.821 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2638ccc9-4060-11ef-b2a4-e9dca065e82e 00:16:58.079 [2024-07-12 15:05:23.769430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:58.079 [2024-07-12 15:05:23.769515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x27c126234f00 00:16:58.079 [2024-07-12 15:05:23.769521] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:58.080 [2024-07-12 15:05:23.769543] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27c126297e20 00:16:58.080 [2024-07-12 15:05:23.769591] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27c126234f00 00:16:58.080 [2024-07-12 15:05:23.769607] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x27c126234f00 00:16:58.080 [2024-07-12 15:05:23.769627] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.080 NewBaseBdev 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.080 15:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.337 15:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:58.598 [ 00:16:58.598 { 00:16:58.598 "name": "NewBaseBdev", 00:16:58.598 "aliases": [ 00:16:58.598 "2638ccc9-4060-11ef-b2a4-e9dca065e82e" 00:16:58.598 ], 00:16:58.598 "product_name": "Malloc disk", 00:16:58.598 "block_size": 512, 00:16:58.598 "num_blocks": 65536, 00:16:58.598 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:58.598 "assigned_rate_limits": { 00:16:58.598 "rw_ios_per_sec": 0, 00:16:58.598 "rw_mbytes_per_sec": 0, 00:16:58.598 "r_mbytes_per_sec": 0, 00:16:58.598 "w_mbytes_per_sec": 0 00:16:58.598 }, 00:16:58.598 "claimed": true, 00:16:58.598 "claim_type": "exclusive_write", 00:16:58.598 "zoned": false, 00:16:58.598 "supported_io_types": { 00:16:58.598 "read": true, 00:16:58.598 "write": true, 00:16:58.598 "unmap": true, 00:16:58.598 "flush": true, 00:16:58.598 "reset": true, 00:16:58.598 "nvme_admin": false, 00:16:58.598 "nvme_io": false, 00:16:58.598 "nvme_io_md": false, 00:16:58.598 "write_zeroes": true, 00:16:58.598 "zcopy": true, 00:16:58.598 "get_zone_info": false, 00:16:58.598 "zone_management": false, 00:16:58.598 "zone_append": false, 00:16:58.598 "compare": false, 00:16:58.598 "compare_and_write": false, 00:16:58.598 "abort": true, 00:16:58.598 "seek_hole": false, 00:16:58.598 "seek_data": false, 00:16:58.598 "copy": true, 00:16:58.598 "nvme_iov_md": false 00:16:58.598 }, 00:16:58.598 "memory_domains": [ 00:16:58.598 { 00:16:58.598 "dma_device_id": "system", 00:16:58.598 "dma_device_type": 1 00:16:58.598 }, 00:16:58.598 { 00:16:58.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.598 "dma_device_type": 2 00:16:58.598 } 00:16:58.598 ], 00:16:58.598 "driver_specific": {} 00:16:58.598 } 00:16:58.598 ] 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.598 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.856 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.856 "name": "Existed_Raid", 00:16:58.856 "uuid": 
"2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:58.856 "strip_size_kb": 0, 00:16:58.856 "state": "online", 00:16:58.856 "raid_level": "raid1", 00:16:58.856 "superblock": true, 00:16:58.856 "num_base_bdevs": 4, 00:16:58.856 "num_base_bdevs_discovered": 4, 00:16:58.856 "num_base_bdevs_operational": 4, 00:16:58.856 "base_bdevs_list": [ 00:16:58.856 { 00:16:58.856 "name": "NewBaseBdev", 00:16:58.856 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:58.856 "is_configured": true, 00:16:58.856 "data_offset": 2048, 00:16:58.856 "data_size": 63488 00:16:58.856 }, 00:16:58.856 { 00:16:58.856 "name": "BaseBdev2", 00:16:58.856 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:58.856 "is_configured": true, 00:16:58.856 "data_offset": 2048, 00:16:58.856 "data_size": 63488 00:16:58.856 }, 00:16:58.856 { 00:16:58.856 "name": "BaseBdev3", 00:16:58.856 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:58.856 "is_configured": true, 00:16:58.856 "data_offset": 2048, 00:16:58.856 "data_size": 63488 00:16:58.856 }, 00:16:58.856 { 00:16:58.856 "name": "BaseBdev4", 00:16:58.856 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:58.856 "is_configured": true, 00:16:58.856 "data_offset": 2048, 00:16:58.856 "data_size": 63488 00:16:58.856 } 00:16:58.856 ] 00:16:58.856 }' 00:16:58.856 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.856 15:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:59.114 15:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.373 [2024-07-12 15:05:25.129395] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.373 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:59.373 "name": "Existed_Raid", 00:16:59.373 "aliases": [ 00:16:59.373 "2506ff12-4060-11ef-b2a4-e9dca065e82e" 00:16:59.373 ], 00:16:59.373 "product_name": "Raid Volume", 00:16:59.373 "block_size": 512, 00:16:59.373 "num_blocks": 63488, 00:16:59.373 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:59.373 "assigned_rate_limits": { 00:16:59.373 "rw_ios_per_sec": 0, 00:16:59.373 "rw_mbytes_per_sec": 0, 00:16:59.373 "r_mbytes_per_sec": 0, 00:16:59.373 "w_mbytes_per_sec": 0 00:16:59.373 }, 00:16:59.373 "claimed": false, 00:16:59.373 "zoned": false, 00:16:59.373 "supported_io_types": { 00:16:59.373 "read": true, 00:16:59.373 "write": true, 00:16:59.373 "unmap": false, 00:16:59.373 "flush": false, 00:16:59.373 "reset": true, 00:16:59.373 "nvme_admin": false, 00:16:59.373 "nvme_io": false, 00:16:59.373 "nvme_io_md": false, 00:16:59.373 
"write_zeroes": true, 00:16:59.373 "zcopy": false, 00:16:59.373 "get_zone_info": false, 00:16:59.373 "zone_management": false, 00:16:59.373 "zone_append": false, 00:16:59.373 "compare": false, 00:16:59.373 "compare_and_write": false, 00:16:59.373 "abort": false, 00:16:59.373 "seek_hole": false, 00:16:59.373 "seek_data": false, 00:16:59.373 "copy": false, 00:16:59.373 "nvme_iov_md": false 00:16:59.373 }, 00:16:59.373 "memory_domains": [ 00:16:59.373 { 00:16:59.373 "dma_device_id": "system", 00:16:59.373 "dma_device_type": 1 00:16:59.373 }, 00:16:59.373 { 00:16:59.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.373 "dma_device_type": 2 00:16:59.373 }, 00:16:59.373 { 00:16:59.373 "dma_device_id": "system", 00:16:59.373 "dma_device_type": 1 00:16:59.373 }, 00:16:59.373 { 00:16:59.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.373 "dma_device_type": 2 00:16:59.373 }, 00:16:59.373 { 00:16:59.373 "dma_device_id": "system", 00:16:59.374 "dma_device_type": 1 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.374 "dma_device_type": 2 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "dma_device_id": "system", 00:16:59.374 "dma_device_type": 1 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.374 "dma_device_type": 2 00:16:59.374 } 00:16:59.374 ], 00:16:59.374 "driver_specific": { 00:16:59.374 "raid": { 00:16:59.374 "uuid": "2506ff12-4060-11ef-b2a4-e9dca065e82e", 00:16:59.374 "strip_size_kb": 0, 00:16:59.374 "state": "online", 00:16:59.374 "raid_level": "raid1", 00:16:59.374 "superblock": true, 00:16:59.374 "num_base_bdevs": 4, 00:16:59.374 "num_base_bdevs_discovered": 4, 00:16:59.374 "num_base_bdevs_operational": 4, 00:16:59.374 "base_bdevs_list": [ 00:16:59.374 { 00:16:59.374 "name": "NewBaseBdev", 00:16:59.374 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev2", 00:16:59.374 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev3", 00:16:59.374 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": "BaseBdev4", 00:16:59.374 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 } 00:16:59.374 ] 00:16:59.374 } 00:16:59.374 } 00:16:59.374 }' 00:16:59.374 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.374 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:59.374 BaseBdev2 00:16:59.374 BaseBdev3 00:16:59.374 BaseBdev4' 00:16:59.374 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.374 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:59.374 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.633 15:05:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.633 "name": "NewBaseBdev", 00:16:59.633 "aliases": [ 00:16:59.633 "2638ccc9-4060-11ef-b2a4-e9dca065e82e" 00:16:59.633 ], 00:16:59.633 "product_name": "Malloc disk", 00:16:59.633 "block_size": 512, 00:16:59.633 "num_blocks": 65536, 00:16:59.633 "uuid": "2638ccc9-4060-11ef-b2a4-e9dca065e82e", 00:16:59.633 "assigned_rate_limits": { 00:16:59.633 "rw_ios_per_sec": 0, 00:16:59.633 "rw_mbytes_per_sec": 0, 00:16:59.633 "r_mbytes_per_sec": 0, 00:16:59.633 "w_mbytes_per_sec": 0 00:16:59.633 }, 00:16:59.633 "claimed": true, 00:16:59.633 "claim_type": "exclusive_write", 00:16:59.633 "zoned": false, 00:16:59.633 "supported_io_types": { 00:16:59.633 "read": true, 00:16:59.633 "write": true, 00:16:59.633 "unmap": true, 00:16:59.633 "flush": true, 00:16:59.633 "reset": true, 00:16:59.633 "nvme_admin": false, 00:16:59.633 "nvme_io": false, 00:16:59.633 "nvme_io_md": false, 00:16:59.633 "write_zeroes": true, 00:16:59.633 "zcopy": true, 00:16:59.633 "get_zone_info": false, 00:16:59.633 "zone_management": false, 00:16:59.633 "zone_append": false, 00:16:59.633 "compare": false, 00:16:59.633 "compare_and_write": false, 00:16:59.633 "abort": true, 00:16:59.633 "seek_hole": false, 00:16:59.633 "seek_data": false, 00:16:59.633 "copy": true, 00:16:59.633 "nvme_iov_md": false 00:16:59.633 }, 00:16:59.633 "memory_domains": [ 00:16:59.633 { 00:16:59.633 "dma_device_id": "system", 00:16:59.633 "dma_device_type": 1 00:16:59.633 }, 00:16:59.633 { 00:16:59.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.633 "dma_device_type": 2 00:16:59.633 } 00:16:59.633 ], 00:16:59.633 "driver_specific": {} 00:16:59.633 }' 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:59.633 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.892 "name": "BaseBdev2", 00:16:59.892 "aliases": [ 
00:16:59.892 "23840393-4060-11ef-b2a4-e9dca065e82e" 00:16:59.892 ], 00:16:59.892 "product_name": "Malloc disk", 00:16:59.892 "block_size": 512, 00:16:59.892 "num_blocks": 65536, 00:16:59.892 "uuid": "23840393-4060-11ef-b2a4-e9dca065e82e", 00:16:59.892 "assigned_rate_limits": { 00:16:59.892 "rw_ios_per_sec": 0, 00:16:59.892 "rw_mbytes_per_sec": 0, 00:16:59.892 "r_mbytes_per_sec": 0, 00:16:59.892 "w_mbytes_per_sec": 0 00:16:59.892 }, 00:16:59.892 "claimed": true, 00:16:59.892 "claim_type": "exclusive_write", 00:16:59.892 "zoned": false, 00:16:59.892 "supported_io_types": { 00:16:59.892 "read": true, 00:16:59.892 "write": true, 00:16:59.892 "unmap": true, 00:16:59.892 "flush": true, 00:16:59.892 "reset": true, 00:16:59.892 "nvme_admin": false, 00:16:59.892 "nvme_io": false, 00:16:59.892 "nvme_io_md": false, 00:16:59.892 "write_zeroes": true, 00:16:59.892 "zcopy": true, 00:16:59.892 "get_zone_info": false, 00:16:59.892 "zone_management": false, 00:16:59.892 "zone_append": false, 00:16:59.892 "compare": false, 00:16:59.892 "compare_and_write": false, 00:16:59.892 "abort": true, 00:16:59.892 "seek_hole": false, 00:16:59.892 "seek_data": false, 00:16:59.892 "copy": true, 00:16:59.892 "nvme_iov_md": false 00:16:59.892 }, 00:16:59.892 "memory_domains": [ 00:16:59.892 { 00:16:59.892 "dma_device_id": "system", 00:16:59.892 "dma_device_type": 1 00:16:59.892 }, 00:16:59.892 { 00:16:59.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.892 "dma_device_type": 2 00:16:59.892 } 00:16:59.892 ], 00:16:59.892 "driver_specific": {} 00:16:59.892 }' 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.892 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.149 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.150 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:00.150 15:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.408 "name": "BaseBdev3", 00:17:00.408 "aliases": [ 00:17:00.408 "240b8568-4060-11ef-b2a4-e9dca065e82e" 00:17:00.408 ], 00:17:00.408 "product_name": "Malloc disk", 00:17:00.408 "block_size": 512, 
00:17:00.408 "num_blocks": 65536, 00:17:00.408 "uuid": "240b8568-4060-11ef-b2a4-e9dca065e82e", 00:17:00.408 "assigned_rate_limits": { 00:17:00.408 "rw_ios_per_sec": 0, 00:17:00.408 "rw_mbytes_per_sec": 0, 00:17:00.408 "r_mbytes_per_sec": 0, 00:17:00.408 "w_mbytes_per_sec": 0 00:17:00.408 }, 00:17:00.408 "claimed": true, 00:17:00.408 "claim_type": "exclusive_write", 00:17:00.408 "zoned": false, 00:17:00.408 "supported_io_types": { 00:17:00.408 "read": true, 00:17:00.408 "write": true, 00:17:00.408 "unmap": true, 00:17:00.408 "flush": true, 00:17:00.408 "reset": true, 00:17:00.408 "nvme_admin": false, 00:17:00.408 "nvme_io": false, 00:17:00.408 "nvme_io_md": false, 00:17:00.408 "write_zeroes": true, 00:17:00.408 "zcopy": true, 00:17:00.408 "get_zone_info": false, 00:17:00.408 "zone_management": false, 00:17:00.408 "zone_append": false, 00:17:00.408 "compare": false, 00:17:00.408 "compare_and_write": false, 00:17:00.408 "abort": true, 00:17:00.408 "seek_hole": false, 00:17:00.408 "seek_data": false, 00:17:00.408 "copy": true, 00:17:00.408 "nvme_iov_md": false 00:17:00.408 }, 00:17:00.408 "memory_domains": [ 00:17:00.408 { 00:17:00.408 "dma_device_id": "system", 00:17:00.408 "dma_device_type": 1 00:17:00.408 }, 00:17:00.408 { 00:17:00.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.408 "dma_device_type": 2 00:17:00.408 } 00:17:00.408 ], 00:17:00.408 "driver_specific": {} 00:17:00.408 }' 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:00.408 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.666 "name": "BaseBdev4", 00:17:00.666 "aliases": [ 00:17:00.666 "2484fcce-4060-11ef-b2a4-e9dca065e82e" 00:17:00.666 ], 00:17:00.666 "product_name": "Malloc disk", 00:17:00.666 "block_size": 512, 00:17:00.666 "num_blocks": 65536, 00:17:00.666 "uuid": "2484fcce-4060-11ef-b2a4-e9dca065e82e", 00:17:00.666 "assigned_rate_limits": { 00:17:00.666 
"rw_ios_per_sec": 0, 00:17:00.666 "rw_mbytes_per_sec": 0, 00:17:00.666 "r_mbytes_per_sec": 0, 00:17:00.666 "w_mbytes_per_sec": 0 00:17:00.666 }, 00:17:00.666 "claimed": true, 00:17:00.666 "claim_type": "exclusive_write", 00:17:00.666 "zoned": false, 00:17:00.666 "supported_io_types": { 00:17:00.666 "read": true, 00:17:00.666 "write": true, 00:17:00.666 "unmap": true, 00:17:00.666 "flush": true, 00:17:00.666 "reset": true, 00:17:00.666 "nvme_admin": false, 00:17:00.666 "nvme_io": false, 00:17:00.666 "nvme_io_md": false, 00:17:00.666 "write_zeroes": true, 00:17:00.666 "zcopy": true, 00:17:00.666 "get_zone_info": false, 00:17:00.666 "zone_management": false, 00:17:00.666 "zone_append": false, 00:17:00.666 "compare": false, 00:17:00.666 "compare_and_write": false, 00:17:00.666 "abort": true, 00:17:00.666 "seek_hole": false, 00:17:00.666 "seek_data": false, 00:17:00.666 "copy": true, 00:17:00.666 "nvme_iov_md": false 00:17:00.666 }, 00:17:00.666 "memory_domains": [ 00:17:00.666 { 00:17:00.666 "dma_device_id": "system", 00:17:00.666 "dma_device_type": 1 00:17:00.666 }, 00:17:00.666 { 00:17:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.666 "dma_device_type": 2 00:17:00.666 } 00:17:00.666 ], 00:17:00.666 "driver_specific": {} 00:17:00.666 }' 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.666 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:00.925 [2024-07-12 15:05:26.745430] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.925 [2024-07-12 15:05:26.745459] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.925 [2024-07-12 15:05:26.745491] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.925 [2024-07-12 15:05:26.745564] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.925 [2024-07-12 15:05:26.745569] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27c126234f00 name Existed_Raid, state offline 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63833 00:17:01.184 15:05:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63833 ']' 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63833 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63833 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:01.184 killing process with pid 63833 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63833' 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63833 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63833 00:17:01.184 [2024-07-12 15:05:26.772779] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.184 [2024-07-12 15:05:26.796330] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:01.184 00:17:01.184 real 0m28.204s 00:17:01.184 user 0m51.865s 00:17:01.184 sys 0m3.685s 00:17:01.184 ************************************ 00:17:01.184 END TEST raid_state_function_test_sb 00:17:01.184 ************************************ 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.184 15:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.443 15:05:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:01.443 15:05:27 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:01.443 15:05:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:01.443 15:05:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.443 15:05:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.443 ************************************ 00:17:01.443 START TEST raid_superblock_test 00:17:01.443 ************************************ 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:01.443 15:05:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64655 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64655 /var/tmp/spdk-raid.sock 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64655 ']' 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.443 15:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.443 [2024-07-12 15:05:27.043119] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:17:01.443 [2024-07-12 15:05:27.043306] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:02.008 EAL: TSC is not safe to use in SMP mode 00:17:02.008 EAL: TSC is not invariant 00:17:02.008 [2024-07-12 15:05:27.593513] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.009 [2024-07-12 15:05:27.714544] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:02.009 [2024-07-12 15:05:27.717155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.009 [2024-07-12 15:05:27.718098] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.009 [2024-07-12 15:05:27.718116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:02.573 malloc1 00:17:02.573 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.831 [2024-07-12 15:05:28.547266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.831 [2024-07-12 15:05:28.547336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.831 [2024-07-12 15:05:28.547348] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834780 00:17:02.831 [2024-07-12 15:05:28.547357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.831 [2024-07-12 15:05:28.548268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.831 [2024-07-12 15:05:28.548295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.831 pt1 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.831 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.831 15:05:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:03.088 malloc2 00:17:03.088 15:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.346 [2024-07-12 15:05:29.139297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.346 [2024-07-12 15:05:29.139357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.346 [2024-07-12 15:05:29.139370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834c80 00:17:03.346 [2024-07-12 15:05:29.139378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.346 [2024-07-12 15:05:29.140058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.346 [2024-07-12 15:05:29.140085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.346 pt2 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.346 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:03.604 malloc3 00:17:03.604 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.862 [2024-07-12 15:05:29.623315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.862 [2024-07-12 15:05:29.623374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.862 [2024-07-12 15:05:29.623387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835180 00:17:03.862 [2024-07-12 15:05:29.623395] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.862 [2024-07-12 15:05:29.624063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.862 [2024-07-12 15:05:29.624094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.862 pt3 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.862 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:04.121 malloc4 00:17:04.121 15:05:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:04.380 [2024-07-12 15:05:30.147351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:04.380 [2024-07-12 15:05:30.147407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.380 [2024-07-12 15:05:30.147426] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835680 00:17:04.380 [2024-07-12 15:05:30.147434] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.380 [2024-07-12 15:05:30.148092] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.380 [2024-07-12 15:05:30.148117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:04.380 pt4 00:17:04.380 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:04.380 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:04.380 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:04.639 [2024-07-12 15:05:30.387376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:04.639 [2024-07-12 15:05:30.387972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.639 [2024-07-12 15:05:30.387993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:04.639 [2024-07-12 15:05:30.388005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:04.639 [2024-07-12 15:05:30.388061] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x35134b835900 00:17:04.639 [2024-07-12 15:05:30.388068] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.639 [2024-07-12 15:05:30.388112] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x35134b897e20 00:17:04.639 [2024-07-12 15:05:30.388198] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x35134b835900 00:17:04.639 [2024-07-12 15:05:30.388203] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x35134b835900 00:17:04.639 [2024-07-12 15:05:30.388231] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.639 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.899 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.899 "name": "raid_bdev1", 00:17:04.899 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:04.899 "strip_size_kb": 0, 00:17:04.899 "state": "online", 00:17:04.899 "raid_level": "raid1", 00:17:04.899 "superblock": true, 00:17:04.899 "num_base_bdevs": 4, 00:17:04.899 "num_base_bdevs_discovered": 4, 00:17:04.899 "num_base_bdevs_operational": 4, 00:17:04.899 "base_bdevs_list": [ 00:17:04.899 { 00:17:04.899 "name": "pt1", 00:17:04.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.899 "is_configured": true, 00:17:04.899 "data_offset": 2048, 00:17:04.899 "data_size": 63488 00:17:04.899 }, 00:17:04.899 { 00:17:04.899 "name": "pt2", 00:17:04.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.899 "is_configured": true, 00:17:04.899 "data_offset": 2048, 00:17:04.899 "data_size": 63488 00:17:04.899 }, 00:17:04.899 { 00:17:04.899 "name": "pt3", 00:17:04.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.899 "is_configured": true, 00:17:04.899 "data_offset": 2048, 00:17:04.899 "data_size": 63488 00:17:04.899 }, 00:17:04.899 { 00:17:04.899 "name": "pt4", 00:17:04.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.899 "is_configured": true, 00:17:04.899 "data_offset": 2048, 00:17:04.899 "data_size": 63488 00:17:04.899 } 00:17:04.899 ] 00:17:04.899 }' 00:17:04.899 15:05:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.899 15:05:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local name 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.467 [2024-07-12 15:05:31.271452] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.467 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:05.467 "name": "raid_bdev1", 00:17:05.467 "aliases": [ 00:17:05.467 "2df57484-4060-11ef-b2a4-e9dca065e82e" 00:17:05.467 ], 00:17:05.467 "product_name": "Raid Volume", 00:17:05.467 "block_size": 512, 00:17:05.467 "num_blocks": 63488, 00:17:05.467 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:05.467 "assigned_rate_limits": { 00:17:05.467 "rw_ios_per_sec": 0, 00:17:05.467 "rw_mbytes_per_sec": 0, 00:17:05.467 "r_mbytes_per_sec": 0, 00:17:05.467 "w_mbytes_per_sec": 0 00:17:05.467 }, 00:17:05.467 "claimed": false, 00:17:05.467 "zoned": false, 00:17:05.467 "supported_io_types": { 00:17:05.467 "read": true, 00:17:05.467 "write": true, 00:17:05.467 "unmap": false, 00:17:05.467 "flush": false, 00:17:05.467 "reset": true, 00:17:05.467 "nvme_admin": false, 00:17:05.467 "nvme_io": false, 00:17:05.467 "nvme_io_md": false, 00:17:05.467 "write_zeroes": true, 00:17:05.467 "zcopy": false, 00:17:05.467 "get_zone_info": false, 00:17:05.467 "zone_management": false, 00:17:05.467 "zone_append": false, 00:17:05.467 "compare": false, 00:17:05.468 "compare_and_write": false, 00:17:05.468 "abort": false, 00:17:05.468 "seek_hole": false, 00:17:05.468 "seek_data": false, 00:17:05.468 "copy": false, 00:17:05.468 "nvme_iov_md": false 00:17:05.468 }, 00:17:05.468 "memory_domains": [ 00:17:05.468 { 00:17:05.468 "dma_device_id": "system", 00:17:05.468 "dma_device_type": 1 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.468 "dma_device_type": 2 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "system", 00:17:05.468 "dma_device_type": 1 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.468 "dma_device_type": 2 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "system", 00:17:05.468 "dma_device_type": 1 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.468 "dma_device_type": 2 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "system", 00:17:05.468 "dma_device_type": 1 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.468 "dma_device_type": 2 00:17:05.468 } 00:17:05.468 ], 00:17:05.468 "driver_specific": { 00:17:05.468 "raid": { 00:17:05.468 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:05.468 "strip_size_kb": 0, 00:17:05.468 "state": "online", 00:17:05.468 "raid_level": "raid1", 00:17:05.468 "superblock": true, 00:17:05.468 "num_base_bdevs": 4, 00:17:05.468 "num_base_bdevs_discovered": 4, 00:17:05.468 "num_base_bdevs_operational": 4, 00:17:05.468 "base_bdevs_list": [ 00:17:05.468 { 00:17:05.468 "name": "pt1", 00:17:05.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.468 "is_configured": true, 00:17:05.468 "data_offset": 2048, 00:17:05.468 "data_size": 63488 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "name": "pt2", 00:17:05.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.468 "is_configured": true, 00:17:05.468 "data_offset": 2048, 00:17:05.468 "data_size": 63488 
00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "name": "pt3", 00:17:05.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.468 "is_configured": true, 00:17:05.468 "data_offset": 2048, 00:17:05.468 "data_size": 63488 00:17:05.468 }, 00:17:05.468 { 00:17:05.468 "name": "pt4", 00:17:05.468 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.468 "is_configured": true, 00:17:05.468 "data_offset": 2048, 00:17:05.468 "data_size": 63488 00:17:05.468 } 00:17:05.468 ] 00:17:05.468 } 00:17:05.468 } 00:17:05.468 }' 00:17:05.468 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.725 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:05.725 pt2 00:17:05.725 pt3 00:17:05.725 pt4' 00:17:05.725 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.725 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:05.725 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.982 "name": "pt1", 00:17:05.982 "aliases": [ 00:17:05.982 "00000000-0000-0000-0000-000000000001" 00:17:05.982 ], 00:17:05.982 "product_name": "passthru", 00:17:05.982 "block_size": 512, 00:17:05.982 "num_blocks": 65536, 00:17:05.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.982 "assigned_rate_limits": { 00:17:05.982 "rw_ios_per_sec": 0, 00:17:05.982 "rw_mbytes_per_sec": 0, 00:17:05.982 "r_mbytes_per_sec": 0, 00:17:05.982 "w_mbytes_per_sec": 0 00:17:05.982 }, 00:17:05.982 "claimed": true, 00:17:05.982 "claim_type": "exclusive_write", 00:17:05.982 "zoned": false, 00:17:05.982 "supported_io_types": { 00:17:05.982 "read": true, 00:17:05.982 "write": true, 00:17:05.982 "unmap": true, 00:17:05.982 "flush": true, 00:17:05.982 "reset": true, 00:17:05.982 "nvme_admin": false, 00:17:05.982 "nvme_io": false, 00:17:05.982 "nvme_io_md": false, 00:17:05.982 "write_zeroes": true, 00:17:05.982 "zcopy": true, 00:17:05.982 "get_zone_info": false, 00:17:05.982 "zone_management": false, 00:17:05.982 "zone_append": false, 00:17:05.982 "compare": false, 00:17:05.982 "compare_and_write": false, 00:17:05.982 "abort": true, 00:17:05.982 "seek_hole": false, 00:17:05.982 "seek_data": false, 00:17:05.982 "copy": true, 00:17:05.982 "nvme_iov_md": false 00:17:05.982 }, 00:17:05.982 "memory_domains": [ 00:17:05.982 { 00:17:05.982 "dma_device_id": "system", 00:17:05.982 "dma_device_type": 1 00:17:05.982 }, 00:17:05.982 { 00:17:05.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.982 "dma_device_type": 2 00:17:05.982 } 00:17:05.982 ], 00:17:05.982 "driver_specific": { 00:17:05.982 "passthru": { 00:17:05.982 "name": "pt1", 00:17:05.982 "base_bdev_name": "malloc1" 00:17:05.982 } 00:17:05.982 } 00:17:05.982 }' 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:05.982 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.240 "name": "pt2", 00:17:06.240 "aliases": [ 00:17:06.240 "00000000-0000-0000-0000-000000000002" 00:17:06.240 ], 00:17:06.240 "product_name": "passthru", 00:17:06.240 "block_size": 512, 00:17:06.240 "num_blocks": 65536, 00:17:06.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.240 "assigned_rate_limits": { 00:17:06.240 "rw_ios_per_sec": 0, 00:17:06.240 "rw_mbytes_per_sec": 0, 00:17:06.240 "r_mbytes_per_sec": 0, 00:17:06.240 "w_mbytes_per_sec": 0 00:17:06.240 }, 00:17:06.240 "claimed": true, 00:17:06.240 "claim_type": "exclusive_write", 00:17:06.240 "zoned": false, 00:17:06.240 "supported_io_types": { 00:17:06.240 "read": true, 00:17:06.240 "write": true, 00:17:06.240 "unmap": true, 00:17:06.240 "flush": true, 00:17:06.240 "reset": true, 00:17:06.240 "nvme_admin": false, 00:17:06.240 "nvme_io": false, 00:17:06.240 "nvme_io_md": false, 00:17:06.240 "write_zeroes": true, 00:17:06.240 "zcopy": true, 00:17:06.240 "get_zone_info": false, 00:17:06.240 "zone_management": false, 00:17:06.240 "zone_append": false, 00:17:06.240 "compare": false, 00:17:06.240 "compare_and_write": false, 00:17:06.240 "abort": true, 00:17:06.240 "seek_hole": false, 00:17:06.240 "seek_data": false, 00:17:06.240 "copy": true, 00:17:06.240 "nvme_iov_md": false 00:17:06.240 }, 00:17:06.240 "memory_domains": [ 00:17:06.240 { 00:17:06.240 "dma_device_id": "system", 00:17:06.240 "dma_device_type": 1 00:17:06.240 }, 00:17:06.240 { 00:17:06.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.240 "dma_device_type": 2 00:17:06.240 } 00:17:06.240 ], 00:17:06.240 "driver_specific": { 00:17:06.240 "passthru": { 00:17:06.240 "name": "pt2", 00:17:06.240 "base_bdev_name": "malloc2" 00:17:06.240 } 00:17:06.240 } 00:17:06.240 }' 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.240 15:05:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.240 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:06.240 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.498 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.498 "name": "pt3", 00:17:06.498 "aliases": [ 00:17:06.498 "00000000-0000-0000-0000-000000000003" 00:17:06.498 ], 00:17:06.498 "product_name": "passthru", 00:17:06.498 "block_size": 512, 00:17:06.498 "num_blocks": 65536, 00:17:06.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.498 "assigned_rate_limits": { 00:17:06.498 "rw_ios_per_sec": 0, 00:17:06.498 "rw_mbytes_per_sec": 0, 00:17:06.498 "r_mbytes_per_sec": 0, 00:17:06.499 "w_mbytes_per_sec": 0 00:17:06.499 }, 00:17:06.499 "claimed": true, 00:17:06.499 "claim_type": "exclusive_write", 00:17:06.499 "zoned": false, 00:17:06.499 "supported_io_types": { 00:17:06.499 "read": true, 00:17:06.499 "write": true, 00:17:06.499 "unmap": true, 00:17:06.499 "flush": true, 00:17:06.499 "reset": true, 00:17:06.499 "nvme_admin": false, 00:17:06.499 "nvme_io": false, 00:17:06.499 "nvme_io_md": false, 00:17:06.499 "write_zeroes": true, 00:17:06.499 "zcopy": true, 00:17:06.499 "get_zone_info": false, 00:17:06.499 "zone_management": false, 00:17:06.499 "zone_append": false, 00:17:06.499 "compare": false, 00:17:06.499 "compare_and_write": false, 00:17:06.499 "abort": true, 00:17:06.499 "seek_hole": false, 00:17:06.499 "seek_data": false, 00:17:06.499 "copy": true, 00:17:06.499 "nvme_iov_md": false 00:17:06.499 }, 00:17:06.499 "memory_domains": [ 00:17:06.499 { 00:17:06.499 "dma_device_id": "system", 00:17:06.499 "dma_device_type": 1 00:17:06.499 }, 00:17:06.499 { 00:17:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.499 "dma_device_type": 2 00:17:06.499 } 00:17:06.499 ], 00:17:06.499 "driver_specific": { 00:17:06.499 "passthru": { 00:17:06.499 "name": "pt3", 00:17:06.499 "base_bdev_name": "malloc3" 00:17:06.499 } 00:17:06.499 } 00:17:06.499 }' 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:06.499 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.064 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.064 "name": "pt4", 00:17:07.065 "aliases": [ 00:17:07.065 "00000000-0000-0000-0000-000000000004" 00:17:07.065 ], 00:17:07.065 "product_name": "passthru", 00:17:07.065 "block_size": 512, 00:17:07.065 "num_blocks": 65536, 00:17:07.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.065 "assigned_rate_limits": { 00:17:07.065 "rw_ios_per_sec": 0, 00:17:07.065 "rw_mbytes_per_sec": 0, 00:17:07.065 "r_mbytes_per_sec": 0, 00:17:07.065 "w_mbytes_per_sec": 0 00:17:07.065 }, 00:17:07.065 "claimed": true, 00:17:07.065 "claim_type": "exclusive_write", 00:17:07.065 "zoned": false, 00:17:07.065 "supported_io_types": { 00:17:07.065 "read": true, 00:17:07.065 "write": true, 00:17:07.065 "unmap": true, 00:17:07.065 "flush": true, 00:17:07.065 "reset": true, 00:17:07.065 "nvme_admin": false, 00:17:07.065 "nvme_io": false, 00:17:07.065 "nvme_io_md": false, 00:17:07.065 "write_zeroes": true, 00:17:07.065 "zcopy": true, 00:17:07.065 "get_zone_info": false, 00:17:07.065 "zone_management": false, 00:17:07.065 "zone_append": false, 00:17:07.065 "compare": false, 00:17:07.065 "compare_and_write": false, 00:17:07.065 "abort": true, 00:17:07.065 "seek_hole": false, 00:17:07.065 "seek_data": false, 00:17:07.065 "copy": true, 00:17:07.065 "nvme_iov_md": false 00:17:07.065 }, 00:17:07.065 "memory_domains": [ 00:17:07.065 { 00:17:07.065 "dma_device_id": "system", 00:17:07.065 "dma_device_type": 1 00:17:07.065 }, 00:17:07.065 { 00:17:07.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.065 "dma_device_type": 2 00:17:07.065 } 00:17:07.065 ], 00:17:07.065 "driver_specific": { 00:17:07.065 "passthru": { 00:17:07.065 "name": "pt4", 00:17:07.065 "base_bdev_name": "malloc4" 00:17:07.065 } 00:17:07.065 } 00:17:07.065 }' 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:07.065 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:07.322 [2024-07-12 15:05:32.919568] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.322 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2df57484-4060-11ef-b2a4-e9dca065e82e 00:17:07.322 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2df57484-4060-11ef-b2a4-e9dca065e82e ']' 00:17:07.322 15:05:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:07.580 [2024-07-12 15:05:33.243531] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.580 [2024-07-12 15:05:33.243556] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.580 [2024-07-12 15:05:33.243579] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.580 [2024-07-12 15:05:33.243598] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.580 [2024-07-12 15:05:33.243603] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x35134b835900 name raid_bdev1, state offline 00:17:07.580 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.580 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:07.838 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:07.838 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:07.838 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.838 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:08.096 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.096 15:05:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:08.354 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.354 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:08.613 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.613 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:08.870 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:08.870 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:09.145 15:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:09.404 [2024-07-12 15:05:35.091655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:09.404 [2024-07-12 15:05:35.092246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:09.404 [2024-07-12 15:05:35.092266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:09.404 [2024-07-12 15:05:35.092275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:09.404 [2024-07-12 15:05:35.092289] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:09.404 [2024-07-12 15:05:35.092332] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:09.404 [2024-07-12 15:05:35.092344] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:09.404 [2024-07-12 15:05:35.092354] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:09.404 [2024-07-12 15:05:35.092362] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.404 [2024-07-12 15:05:35.092367] bdev_raid.c: 367:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x35134b835680 name raid_bdev1, state configuring 00:17:09.404 request: 00:17:09.404 { 00:17:09.404 "name": "raid_bdev1", 00:17:09.404 "raid_level": "raid1", 00:17:09.404 "base_bdevs": [ 00:17:09.404 "malloc1", 00:17:09.404 "malloc2", 00:17:09.404 "malloc3", 00:17:09.404 "malloc4" 00:17:09.404 ], 00:17:09.404 "superblock": false, 00:17:09.404 "method": "bdev_raid_create", 00:17:09.404 "req_id": 1 00:17:09.404 } 00:17:09.404 Got JSON-RPC error response 00:17:09.404 response: 00:17:09.404 { 00:17:09.404 "code": -17, 00:17:09.404 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:09.404 } 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.404 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:09.663 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:09.663 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:09.663 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.921 [2024-07-12 15:05:35.563669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.921 [2024-07-12 15:05:35.563760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.921 [2024-07-12 15:05:35.563789] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835180 00:17:09.921 [2024-07-12 15:05:35.563798] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.921 [2024-07-12 15:05:35.564450] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.921 [2024-07-12 15:05:35.564475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.921 [2024-07-12 15:05:35.564501] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:09.921 [2024-07-12 15:05:35.564513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.921 pt1 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.921 
15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.921 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.179 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.179 "name": "raid_bdev1", 00:17:10.179 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:10.179 "strip_size_kb": 0, 00:17:10.179 "state": "configuring", 00:17:10.179 "raid_level": "raid1", 00:17:10.179 "superblock": true, 00:17:10.179 "num_base_bdevs": 4, 00:17:10.179 "num_base_bdevs_discovered": 1, 00:17:10.179 "num_base_bdevs_operational": 4, 00:17:10.179 "base_bdevs_list": [ 00:17:10.179 { 00:17:10.179 "name": "pt1", 00:17:10.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:10.179 "is_configured": true, 00:17:10.179 "data_offset": 2048, 00:17:10.179 "data_size": 63488 00:17:10.179 }, 00:17:10.179 { 00:17:10.179 "name": null, 00:17:10.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.179 "is_configured": false, 00:17:10.179 "data_offset": 2048, 00:17:10.179 "data_size": 63488 00:17:10.179 }, 00:17:10.179 { 00:17:10.179 "name": null, 00:17:10.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.179 "is_configured": false, 00:17:10.179 "data_offset": 2048, 00:17:10.179 "data_size": 63488 00:17:10.179 }, 00:17:10.179 { 00:17:10.179 "name": null, 00:17:10.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.179 "is_configured": false, 00:17:10.179 "data_offset": 2048, 00:17:10.179 "data_size": 63488 00:17:10.179 } 00:17:10.179 ] 00:17:10.179 }' 00:17:10.179 15:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.179 15:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.438 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:17:10.438 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.696 [2024-07-12 15:05:36.403705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.696 [2024-07-12 15:05:36.403777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.696 [2024-07-12 15:05:36.403805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834780 00:17:10.696 [2024-07-12 15:05:36.403813] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.696 [2024-07-12 15:05:36.403947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.696 [2024-07-12 15:05:36.403958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.696 [2024-07-12 15:05:36.403983] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:10.696 [2024-07-12 15:05:36.403992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.696 pt2 00:17:10.696 15:05:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:10.955 [2024-07-12 15:05:36.635730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.955 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.214 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.214 "name": "raid_bdev1", 00:17:11.214 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:11.214 "strip_size_kb": 0, 00:17:11.214 "state": "configuring", 00:17:11.214 "raid_level": "raid1", 00:17:11.214 "superblock": true, 00:17:11.214 "num_base_bdevs": 4, 00:17:11.214 "num_base_bdevs_discovered": 1, 00:17:11.214 "num_base_bdevs_operational": 4, 00:17:11.214 "base_bdevs_list": [ 00:17:11.214 { 00:17:11.214 "name": "pt1", 00:17:11.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.214 "is_configured": true, 00:17:11.214 "data_offset": 2048, 00:17:11.214 "data_size": 63488 00:17:11.214 }, 00:17:11.214 { 00:17:11.214 "name": null, 00:17:11.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.214 "is_configured": false, 00:17:11.214 "data_offset": 2048, 00:17:11.214 "data_size": 63488 00:17:11.214 }, 00:17:11.214 { 00:17:11.214 "name": null, 00:17:11.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.214 "is_configured": false, 00:17:11.214 "data_offset": 2048, 00:17:11.214 "data_size": 63488 00:17:11.214 }, 00:17:11.214 { 00:17:11.214 "name": null, 00:17:11.214 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.214 "is_configured": false, 00:17:11.214 "data_offset": 2048, 00:17:11.214 "data_size": 63488 00:17:11.214 } 00:17:11.214 ] 00:17:11.214 }' 00:17:11.214 15:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.214 15:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.473 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:11.473 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:11.473 15:05:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.732 [2024-07-12 15:05:37.435768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.732 [2024-07-12 15:05:37.435835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.732 [2024-07-12 15:05:37.435879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834780 00:17:11.732 [2024-07-12 15:05:37.435888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.732 [2024-07-12 15:05:37.436007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.732 [2024-07-12 15:05:37.436018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.732 [2024-07-12 15:05:37.436043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:11.732 [2024-07-12 15:05:37.436052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.732 pt2 00:17:11.732 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:11.732 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:11.732 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.990 [2024-07-12 15:05:37.715821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.990 [2024-07-12 15:05:37.715889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.990 [2024-07-12 15:05:37.715917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835b80 00:17:11.990 [2024-07-12 15:05:37.715925] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.990 [2024-07-12 15:05:37.716041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.990 [2024-07-12 15:05:37.716069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.990 [2024-07-12 15:05:37.716092] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:11.990 [2024-07-12 15:05:37.716101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.990 pt3 00:17:11.990 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:11.990 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:11.990 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:12.248 [2024-07-12 15:05:37.947846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:12.248 [2024-07-12 15:05:37.947907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.248 [2024-07-12 15:05:37.947919] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835900 00:17:12.248 [2024-07-12 15:05:37.947927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.248 [2024-07-12 15:05:37.948061] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.248 [2024-07-12 15:05:37.948073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:12.248 [2024-07-12 15:05:37.948097] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:12.248 [2024-07-12 15:05:37.948106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:12.248 [2024-07-12 15:05:37.948138] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x35134b834c80 00:17:12.248 [2024-07-12 15:05:37.948143] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:12.248 [2024-07-12 15:05:37.948164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x35134b897e20 00:17:12.248 [2024-07-12 15:05:37.948227] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x35134b834c80 00:17:12.248 [2024-07-12 15:05:37.948232] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x35134b834c80 00:17:12.248 [2024-07-12 15:05:37.948254] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.248 pt4 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.248 15:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.507 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.507 "name": "raid_bdev1", 00:17:12.507 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:12.507 "strip_size_kb": 0, 00:17:12.507 "state": "online", 00:17:12.507 "raid_level": "raid1", 00:17:12.507 "superblock": true, 00:17:12.507 "num_base_bdevs": 4, 00:17:12.507 "num_base_bdevs_discovered": 4, 00:17:12.507 "num_base_bdevs_operational": 4, 00:17:12.507 "base_bdevs_list": [ 00:17:12.507 { 00:17:12.507 "name": "pt1", 00:17:12.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.507 "is_configured": true, 00:17:12.507 "data_offset": 2048, 00:17:12.507 "data_size": 63488 00:17:12.507 
}, 00:17:12.507 { 00:17:12.507 "name": "pt2", 00:17:12.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.507 "is_configured": true, 00:17:12.507 "data_offset": 2048, 00:17:12.507 "data_size": 63488 00:17:12.507 }, 00:17:12.507 { 00:17:12.507 "name": "pt3", 00:17:12.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.507 "is_configured": true, 00:17:12.507 "data_offset": 2048, 00:17:12.507 "data_size": 63488 00:17:12.507 }, 00:17:12.507 { 00:17:12.507 "name": "pt4", 00:17:12.507 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:12.507 "is_configured": true, 00:17:12.507 "data_offset": 2048, 00:17:12.507 "data_size": 63488 00:17:12.507 } 00:17:12.507 ] 00:17:12.507 }' 00:17:12.507 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.507 15:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:12.794 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:12.794 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:12.794 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:12.794 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:12.795 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:12.795 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.795 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:13.053 [2024-07-12 15:05:38.819952] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:13.053 "name": "raid_bdev1", 00:17:13.053 "aliases": [ 00:17:13.053 "2df57484-4060-11ef-b2a4-e9dca065e82e" 00:17:13.053 ], 00:17:13.053 "product_name": "Raid Volume", 00:17:13.053 "block_size": 512, 00:17:13.053 "num_blocks": 63488, 00:17:13.053 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:13.053 "assigned_rate_limits": { 00:17:13.053 "rw_ios_per_sec": 0, 00:17:13.053 "rw_mbytes_per_sec": 0, 00:17:13.053 "r_mbytes_per_sec": 0, 00:17:13.053 "w_mbytes_per_sec": 0 00:17:13.053 }, 00:17:13.053 "claimed": false, 00:17:13.053 "zoned": false, 00:17:13.053 "supported_io_types": { 00:17:13.053 "read": true, 00:17:13.053 "write": true, 00:17:13.053 "unmap": false, 00:17:13.053 "flush": false, 00:17:13.053 "reset": true, 00:17:13.053 "nvme_admin": false, 00:17:13.053 "nvme_io": false, 00:17:13.053 "nvme_io_md": false, 00:17:13.053 "write_zeroes": true, 00:17:13.053 "zcopy": false, 00:17:13.053 "get_zone_info": false, 00:17:13.053 "zone_management": false, 00:17:13.053 "zone_append": false, 00:17:13.053 "compare": false, 00:17:13.053 "compare_and_write": false, 00:17:13.053 "abort": false, 00:17:13.053 "seek_hole": false, 00:17:13.053 "seek_data": false, 00:17:13.053 "copy": false, 00:17:13.053 "nvme_iov_md": false 00:17:13.053 }, 00:17:13.053 "memory_domains": [ 00:17:13.053 { 00:17:13.053 "dma_device_id": "system", 00:17:13.053 "dma_device_type": 1 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.053 "dma_device_type": 2 00:17:13.053 }, 
00:17:13.053 { 00:17:13.053 "dma_device_id": "system", 00:17:13.053 "dma_device_type": 1 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.053 "dma_device_type": 2 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "system", 00:17:13.053 "dma_device_type": 1 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.053 "dma_device_type": 2 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "system", 00:17:13.053 "dma_device_type": 1 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.053 "dma_device_type": 2 00:17:13.053 } 00:17:13.053 ], 00:17:13.053 "driver_specific": { 00:17:13.053 "raid": { 00:17:13.053 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:13.053 "strip_size_kb": 0, 00:17:13.053 "state": "online", 00:17:13.053 "raid_level": "raid1", 00:17:13.053 "superblock": true, 00:17:13.053 "num_base_bdevs": 4, 00:17:13.053 "num_base_bdevs_discovered": 4, 00:17:13.053 "num_base_bdevs_operational": 4, 00:17:13.053 "base_bdevs_list": [ 00:17:13.053 { 00:17:13.053 "name": "pt1", 00:17:13.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.053 "is_configured": true, 00:17:13.053 "data_offset": 2048, 00:17:13.053 "data_size": 63488 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "name": "pt2", 00:17:13.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.053 "is_configured": true, 00:17:13.053 "data_offset": 2048, 00:17:13.053 "data_size": 63488 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "name": "pt3", 00:17:13.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.053 "is_configured": true, 00:17:13.053 "data_offset": 2048, 00:17:13.053 "data_size": 63488 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "name": "pt4", 00:17:13.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:13.053 "is_configured": true, 00:17:13.053 "data_offset": 2048, 00:17:13.053 "data_size": 63488 00:17:13.053 } 00:17:13.053 ] 00:17:13.053 } 00:17:13.053 } 00:17:13.053 }' 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:13.053 pt2 00:17:13.053 pt3 00:17:13.053 pt4' 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:13.053 15:05:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.620 "name": "pt1", 00:17:13.620 "aliases": [ 00:17:13.620 "00000000-0000-0000-0000-000000000001" 00:17:13.620 ], 00:17:13.620 "product_name": "passthru", 00:17:13.620 "block_size": 512, 00:17:13.620 "num_blocks": 65536, 00:17:13.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.620 "assigned_rate_limits": { 00:17:13.620 "rw_ios_per_sec": 0, 00:17:13.620 "rw_mbytes_per_sec": 0, 00:17:13.620 "r_mbytes_per_sec": 0, 00:17:13.620 "w_mbytes_per_sec": 0 00:17:13.620 }, 00:17:13.620 "claimed": true, 00:17:13.620 "claim_type": "exclusive_write", 00:17:13.620 "zoned": false, 00:17:13.620 "supported_io_types": { 00:17:13.620 "read": true, 00:17:13.620 "write": true, 00:17:13.620 
"unmap": true, 00:17:13.620 "flush": true, 00:17:13.620 "reset": true, 00:17:13.620 "nvme_admin": false, 00:17:13.620 "nvme_io": false, 00:17:13.620 "nvme_io_md": false, 00:17:13.620 "write_zeroes": true, 00:17:13.620 "zcopy": true, 00:17:13.620 "get_zone_info": false, 00:17:13.620 "zone_management": false, 00:17:13.620 "zone_append": false, 00:17:13.620 "compare": false, 00:17:13.620 "compare_and_write": false, 00:17:13.620 "abort": true, 00:17:13.620 "seek_hole": false, 00:17:13.620 "seek_data": false, 00:17:13.620 "copy": true, 00:17:13.620 "nvme_iov_md": false 00:17:13.620 }, 00:17:13.620 "memory_domains": [ 00:17:13.620 { 00:17:13.620 "dma_device_id": "system", 00:17:13.620 "dma_device_type": 1 00:17:13.620 }, 00:17:13.620 { 00:17:13.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.620 "dma_device_type": 2 00:17:13.620 } 00:17:13.620 ], 00:17:13.620 "driver_specific": { 00:17:13.620 "passthru": { 00:17:13.620 "name": "pt1", 00:17:13.620 "base_bdev_name": "malloc1" 00:17:13.620 } 00:17:13.620 } 00:17:13.620 }' 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:13.620 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.879 "name": "pt2", 00:17:13.879 "aliases": [ 00:17:13.879 "00000000-0000-0000-0000-000000000002" 00:17:13.879 ], 00:17:13.879 "product_name": "passthru", 00:17:13.879 "block_size": 512, 00:17:13.879 "num_blocks": 65536, 00:17:13.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.879 "assigned_rate_limits": { 00:17:13.879 "rw_ios_per_sec": 0, 00:17:13.879 "rw_mbytes_per_sec": 0, 00:17:13.879 "r_mbytes_per_sec": 0, 00:17:13.879 "w_mbytes_per_sec": 0 00:17:13.879 }, 00:17:13.879 "claimed": true, 00:17:13.879 "claim_type": "exclusive_write", 00:17:13.879 "zoned": false, 00:17:13.879 "supported_io_types": { 00:17:13.879 "read": true, 00:17:13.879 "write": true, 00:17:13.879 "unmap": true, 00:17:13.879 "flush": true, 00:17:13.879 "reset": true, 00:17:13.879 "nvme_admin": false, 00:17:13.879 "nvme_io": false, 00:17:13.879 
"nvme_io_md": false, 00:17:13.879 "write_zeroes": true, 00:17:13.879 "zcopy": true, 00:17:13.879 "get_zone_info": false, 00:17:13.879 "zone_management": false, 00:17:13.879 "zone_append": false, 00:17:13.879 "compare": false, 00:17:13.879 "compare_and_write": false, 00:17:13.879 "abort": true, 00:17:13.879 "seek_hole": false, 00:17:13.879 "seek_data": false, 00:17:13.879 "copy": true, 00:17:13.879 "nvme_iov_md": false 00:17:13.879 }, 00:17:13.879 "memory_domains": [ 00:17:13.879 { 00:17:13.879 "dma_device_id": "system", 00:17:13.879 "dma_device_type": 1 00:17:13.879 }, 00:17:13.879 { 00:17:13.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.879 "dma_device_type": 2 00:17:13.879 } 00:17:13.879 ], 00:17:13.879 "driver_specific": { 00:17:13.879 "passthru": { 00:17:13.879 "name": "pt2", 00:17:13.879 "base_bdev_name": "malloc2" 00:17:13.879 } 00:17:13.879 } 00:17:13.879 }' 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:13.879 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.138 "name": "pt3", 00:17:14.138 "aliases": [ 00:17:14.138 "00000000-0000-0000-0000-000000000003" 00:17:14.138 ], 00:17:14.138 "product_name": "passthru", 00:17:14.138 "block_size": 512, 00:17:14.138 "num_blocks": 65536, 00:17:14.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.138 "assigned_rate_limits": { 00:17:14.138 "rw_ios_per_sec": 0, 00:17:14.138 "rw_mbytes_per_sec": 0, 00:17:14.138 "r_mbytes_per_sec": 0, 00:17:14.138 "w_mbytes_per_sec": 0 00:17:14.138 }, 00:17:14.138 "claimed": true, 00:17:14.138 "claim_type": "exclusive_write", 00:17:14.138 "zoned": false, 00:17:14.138 "supported_io_types": { 00:17:14.138 "read": true, 00:17:14.138 "write": true, 00:17:14.138 "unmap": true, 00:17:14.138 "flush": true, 00:17:14.138 "reset": true, 00:17:14.138 "nvme_admin": false, 00:17:14.138 "nvme_io": false, 00:17:14.138 "nvme_io_md": false, 00:17:14.138 "write_zeroes": true, 00:17:14.138 "zcopy": true, 00:17:14.138 "get_zone_info": false, 00:17:14.138 "zone_management": 
false, 00:17:14.138 "zone_append": false, 00:17:14.138 "compare": false, 00:17:14.138 "compare_and_write": false, 00:17:14.138 "abort": true, 00:17:14.138 "seek_hole": false, 00:17:14.138 "seek_data": false, 00:17:14.138 "copy": true, 00:17:14.138 "nvme_iov_md": false 00:17:14.138 }, 00:17:14.138 "memory_domains": [ 00:17:14.138 { 00:17:14.138 "dma_device_id": "system", 00:17:14.138 "dma_device_type": 1 00:17:14.138 }, 00:17:14.138 { 00:17:14.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.138 "dma_device_type": 2 00:17:14.138 } 00:17:14.138 ], 00:17:14.138 "driver_specific": { 00:17:14.138 "passthru": { 00:17:14.138 "name": "pt3", 00:17:14.138 "base_bdev_name": "malloc3" 00:17:14.138 } 00:17:14.138 } 00:17:14.138 }' 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:14.138 15:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.397 "name": "pt4", 00:17:14.397 "aliases": [ 00:17:14.397 "00000000-0000-0000-0000-000000000004" 00:17:14.397 ], 00:17:14.397 "product_name": "passthru", 00:17:14.397 "block_size": 512, 00:17:14.397 "num_blocks": 65536, 00:17:14.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.397 "assigned_rate_limits": { 00:17:14.397 "rw_ios_per_sec": 0, 00:17:14.397 "rw_mbytes_per_sec": 0, 00:17:14.397 "r_mbytes_per_sec": 0, 00:17:14.397 "w_mbytes_per_sec": 0 00:17:14.397 }, 00:17:14.397 "claimed": true, 00:17:14.397 "claim_type": "exclusive_write", 00:17:14.397 "zoned": false, 00:17:14.397 "supported_io_types": { 00:17:14.397 "read": true, 00:17:14.397 "write": true, 00:17:14.397 "unmap": true, 00:17:14.397 "flush": true, 00:17:14.397 "reset": true, 00:17:14.397 "nvme_admin": false, 00:17:14.397 "nvme_io": false, 00:17:14.397 "nvme_io_md": false, 00:17:14.397 "write_zeroes": true, 00:17:14.397 "zcopy": true, 00:17:14.397 "get_zone_info": false, 00:17:14.397 "zone_management": false, 00:17:14.397 "zone_append": false, 00:17:14.397 "compare": false, 00:17:14.397 "compare_and_write": false, 00:17:14.397 "abort": true, 00:17:14.397 
"seek_hole": false, 00:17:14.397 "seek_data": false, 00:17:14.397 "copy": true, 00:17:14.397 "nvme_iov_md": false 00:17:14.397 }, 00:17:14.397 "memory_domains": [ 00:17:14.397 { 00:17:14.397 "dma_device_id": "system", 00:17:14.397 "dma_device_type": 1 00:17:14.397 }, 00:17:14.397 { 00:17:14.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.397 "dma_device_type": 2 00:17:14.397 } 00:17:14.397 ], 00:17:14.397 "driver_specific": { 00:17:14.397 "passthru": { 00:17:14.397 "name": "pt4", 00:17:14.397 "base_bdev_name": "malloc4" 00:17:14.397 } 00:17:14.397 } 00:17:14.397 }' 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.397 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:14.655 [2024-07-12 15:05:40.464036] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2df57484-4060-11ef-b2a4-e9dca065e82e '!=' 2df57484-4060-11ef-b2a4-e9dca065e82e ']' 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:14.655 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:14.914 [2024-07-12 15:05:40.704031] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.914 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.173 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.173 "name": "raid_bdev1", 00:17:15.173 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:15.173 "strip_size_kb": 0, 00:17:15.173 "state": "online", 00:17:15.173 "raid_level": "raid1", 00:17:15.173 "superblock": true, 00:17:15.173 "num_base_bdevs": 4, 00:17:15.173 "num_base_bdevs_discovered": 3, 00:17:15.173 "num_base_bdevs_operational": 3, 00:17:15.173 "base_bdevs_list": [ 00:17:15.173 { 00:17:15.173 "name": null, 00:17:15.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.173 "is_configured": false, 00:17:15.173 "data_offset": 2048, 00:17:15.173 "data_size": 63488 00:17:15.173 }, 00:17:15.173 { 00:17:15.173 "name": "pt2", 00:17:15.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.173 "is_configured": true, 00:17:15.173 "data_offset": 2048, 00:17:15.173 "data_size": 63488 00:17:15.173 }, 00:17:15.173 { 00:17:15.173 "name": "pt3", 00:17:15.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.173 "is_configured": true, 00:17:15.173 "data_offset": 2048, 00:17:15.173 "data_size": 63488 00:17:15.173 }, 00:17:15.173 { 00:17:15.173 "name": "pt4", 00:17:15.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.173 "is_configured": true, 00:17:15.173 "data_offset": 2048, 00:17:15.173 "data_size": 63488 00:17:15.173 } 00:17:15.173 ] 00:17:15.173 }' 00:17:15.173 15:05:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.173 15:05:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.739 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:15.739 [2024-07-12 15:05:41.568078] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.739 [2024-07-12 15:05:41.568104] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.739 [2024-07-12 15:05:41.568128] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.739 [2024-07-12 15:05:41.568144] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.739 [2024-07-12 15:05:41.568149] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x35134b834c80 name raid_bdev1, state offline 00:17:15.998 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:15.998 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.257 15:05:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:16.257 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:16.257 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:16.257 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:16.257 15:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:16.515 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:16.516 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:16.516 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:16.774 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:16.774 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:16.774 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:17.033 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:17.033 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:17.033 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:17.033 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:17.033 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.290 [2024-07-12 15:05:42.884139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.290 [2024-07-12 15:05:42.884218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.290 [2024-07-12 15:05:42.884246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835900 00:17:17.290 [2024-07-12 15:05:42.884255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.290 [2024-07-12 15:05:42.884964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.290 [2024-07-12 15:05:42.884995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.290 [2024-07-12 15:05:42.885033] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.290 [2024-07-12 15:05:42.885045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.290 pt2 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.290 15:05:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.290 15:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.548 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.548 "name": "raid_bdev1", 00:17:17.548 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:17.548 "strip_size_kb": 0, 00:17:17.548 "state": "configuring", 00:17:17.548 "raid_level": "raid1", 00:17:17.548 "superblock": true, 00:17:17.548 "num_base_bdevs": 4, 00:17:17.548 "num_base_bdevs_discovered": 1, 00:17:17.548 "num_base_bdevs_operational": 3, 00:17:17.548 "base_bdevs_list": [ 00:17:17.548 { 00:17:17.548 "name": null, 00:17:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.548 "is_configured": false, 00:17:17.548 "data_offset": 2048, 00:17:17.548 "data_size": 63488 00:17:17.548 }, 00:17:17.548 { 00:17:17.548 "name": "pt2", 00:17:17.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.548 "is_configured": true, 00:17:17.548 "data_offset": 2048, 00:17:17.548 "data_size": 63488 00:17:17.548 }, 00:17:17.548 { 00:17:17.548 "name": null, 00:17:17.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.548 "is_configured": false, 00:17:17.548 "data_offset": 2048, 00:17:17.548 "data_size": 63488 00:17:17.548 }, 00:17:17.548 { 00:17:17.548 "name": null, 00:17:17.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.548 "is_configured": false, 00:17:17.548 "data_offset": 2048, 00:17:17.548 "data_size": 63488 00:17:17.548 } 00:17:17.548 ] 00:17:17.548 }' 00:17:17.548 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.548 15:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.806 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:17:17.806 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:17.806 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.065 [2024-07-12 15:05:43.644166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.065 [2024-07-12 15:05:43.644240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.065 [2024-07-12 15:05:43.644268] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835680 00:17:18.065 [2024-07-12 15:05:43.644276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.065 [2024-07-12 15:05:43.644426] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.065 [2024-07-12 15:05:43.644438] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:17:18.065 [2024-07-12 15:05:43.644463] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:18.065 [2024-07-12 15:05:43.644472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.065 pt3 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.065 "name": "raid_bdev1", 00:17:18.065 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:18.065 "strip_size_kb": 0, 00:17:18.065 "state": "configuring", 00:17:18.065 "raid_level": "raid1", 00:17:18.065 "superblock": true, 00:17:18.065 "num_base_bdevs": 4, 00:17:18.065 "num_base_bdevs_discovered": 2, 00:17:18.065 "num_base_bdevs_operational": 3, 00:17:18.065 "base_bdevs_list": [ 00:17:18.065 { 00:17:18.065 "name": null, 00:17:18.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.065 "is_configured": false, 00:17:18.065 "data_offset": 2048, 00:17:18.065 "data_size": 63488 00:17:18.065 }, 00:17:18.065 { 00:17:18.065 "name": "pt2", 00:17:18.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.065 "is_configured": true, 00:17:18.065 "data_offset": 2048, 00:17:18.065 "data_size": 63488 00:17:18.065 }, 00:17:18.065 { 00:17:18.065 "name": "pt3", 00:17:18.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.065 "is_configured": true, 00:17:18.065 "data_offset": 2048, 00:17:18.065 "data_size": 63488 00:17:18.065 }, 00:17:18.065 { 00:17:18.065 "name": null, 00:17:18.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.065 "is_configured": false, 00:17:18.065 "data_offset": 2048, 00:17:18.065 "data_size": 63488 00:17:18.065 } 00:17:18.065 ] 00:17:18.065 }' 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.065 15:05:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:18.632 15:05:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.632 [2024-07-12 15:05:44.424204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.632 [2024-07-12 15:05:44.424276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.632 [2024-07-12 15:05:44.424304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834c80 00:17:18.632 [2024-07-12 15:05:44.424312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.632 [2024-07-12 15:05:44.424426] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.632 [2024-07-12 15:05:44.424437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:18.632 [2024-07-12 15:05:44.424477] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:18.632 [2024-07-12 15:05:44.424485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:18.632 [2024-07-12 15:05:44.424515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x35134b834780 00:17:18.632 [2024-07-12 15:05:44.424519] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:18.632 [2024-07-12 15:05:44.424540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x35134b897e20 00:17:18.632 [2024-07-12 15:05:44.424589] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x35134b834780 00:17:18.632 [2024-07-12 15:05:44.424594] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x35134b834780 00:17:18.632 [2024-07-12 15:05:44.424615] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.632 pt4 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.632 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.199 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:19.199 "name": "raid_bdev1", 00:17:19.199 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:19.199 "strip_size_kb": 0, 00:17:19.199 "state": "online", 00:17:19.199 "raid_level": "raid1", 00:17:19.199 "superblock": true, 00:17:19.199 "num_base_bdevs": 4, 00:17:19.199 "num_base_bdevs_discovered": 3, 00:17:19.199 "num_base_bdevs_operational": 3, 00:17:19.199 "base_bdevs_list": [ 00:17:19.199 { 00:17:19.199 "name": null, 00:17:19.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.199 "is_configured": false, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 }, 00:17:19.199 { 00:17:19.199 "name": "pt2", 00:17:19.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.199 "is_configured": true, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 }, 00:17:19.199 { 00:17:19.199 "name": "pt3", 00:17:19.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.199 "is_configured": true, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 }, 00:17:19.199 { 00:17:19.199 "name": "pt4", 00:17:19.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.199 "is_configured": true, 00:17:19.199 "data_offset": 2048, 00:17:19.199 "data_size": 63488 00:17:19.199 } 00:17:19.199 ] 00:17:19.199 }' 00:17:19.199 15:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.199 15:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.458 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:19.458 [2024-07-12 15:05:45.260280] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.458 [2024-07-12 15:05:45.260303] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.458 [2024-07-12 15:05:45.260343] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.458 [2024-07-12 15:05:45.260375] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.458 [2024-07-12 15:05:45.260379] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x35134b834780 name raid_bdev1, state offline 00:17:19.458 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.458 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:20.024 15:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.283 [2024-07-12 15:05:46.084350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:17:20.283 [2024-07-12 15:05:46.084437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.283 [2024-07-12 15:05:46.084450] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b834c80 00:17:20.283 [2024-07-12 15:05:46.084458] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.283 [2024-07-12 15:05:46.085122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.283 [2024-07-12 15:05:46.085147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.283 [2024-07-12 15:05:46.085174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.283 [2024-07-12 15:05:46.085186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.283 [2024-07-12 15:05:46.085215] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:20.283 [2024-07-12 15:05:46.085219] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.283 [2024-07-12 15:05:46.085224] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x35134b834780 name raid_bdev1, state configuring 00:17:20.283 [2024-07-12 15:05:46.085232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.283 [2024-07-12 15:05:46.085251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:20.283 pt1 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.283 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.542 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.542 "name": "raid_bdev1", 00:17:20.542 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:20.542 "strip_size_kb": 0, 00:17:20.542 "state": "configuring", 00:17:20.542 "raid_level": "raid1", 00:17:20.542 "superblock": true, 00:17:20.542 "num_base_bdevs": 4, 00:17:20.542 "num_base_bdevs_discovered": 2, 00:17:20.542 "num_base_bdevs_operational": 3, 00:17:20.542 
"base_bdevs_list": [ 00:17:20.542 { 00:17:20.542 "name": null, 00:17:20.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.542 "is_configured": false, 00:17:20.542 "data_offset": 2048, 00:17:20.542 "data_size": 63488 00:17:20.542 }, 00:17:20.542 { 00:17:20.542 "name": "pt2", 00:17:20.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.542 "is_configured": true, 00:17:20.542 "data_offset": 2048, 00:17:20.542 "data_size": 63488 00:17:20.542 }, 00:17:20.542 { 00:17:20.542 "name": "pt3", 00:17:20.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.542 "is_configured": true, 00:17:20.542 "data_offset": 2048, 00:17:20.542 "data_size": 63488 00:17:20.542 }, 00:17:20.542 { 00:17:20.542 "name": null, 00:17:20.542 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.542 "is_configured": false, 00:17:20.542 "data_offset": 2048, 00:17:20.542 "data_size": 63488 00:17:20.542 } 00:17:20.542 ] 00:17:20.542 }' 00:17:20.542 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.542 15:05:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.109 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:21.109 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:21.109 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:17:21.109 15:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:21.367 [2024-07-12 15:05:47.164410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:21.367 [2024-07-12 15:05:47.164487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.367 [2024-07-12 15:05:47.164516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x35134b835180 00:17:21.367 [2024-07-12 15:05:47.164525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.367 [2024-07-12 15:05:47.164669] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.367 [2024-07-12 15:05:47.164681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:21.367 [2024-07-12 15:05:47.164707] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:21.367 [2024-07-12 15:05:47.164716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:21.367 [2024-07-12 15:05:47.164754] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x35134b834780 00:17:21.367 [2024-07-12 15:05:47.164759] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:21.367 [2024-07-12 15:05:47.164781] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x35134b897e20 00:17:21.367 [2024-07-12 15:05:47.164834] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x35134b834780 00:17:21.367 [2024-07-12 15:05:47.164839] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x35134b834780 00:17:21.367 [2024-07-12 15:05:47.164860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.367 pt4 
00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.367 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.625 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.625 "name": "raid_bdev1", 00:17:21.625 "uuid": "2df57484-4060-11ef-b2a4-e9dca065e82e", 00:17:21.625 "strip_size_kb": 0, 00:17:21.625 "state": "online", 00:17:21.625 "raid_level": "raid1", 00:17:21.625 "superblock": true, 00:17:21.625 "num_base_bdevs": 4, 00:17:21.625 "num_base_bdevs_discovered": 3, 00:17:21.625 "num_base_bdevs_operational": 3, 00:17:21.625 "base_bdevs_list": [ 00:17:21.625 { 00:17:21.625 "name": null, 00:17:21.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.625 "is_configured": false, 00:17:21.625 "data_offset": 2048, 00:17:21.625 "data_size": 63488 00:17:21.625 }, 00:17:21.625 { 00:17:21.625 "name": "pt2", 00:17:21.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.625 "is_configured": true, 00:17:21.625 "data_offset": 2048, 00:17:21.625 "data_size": 63488 00:17:21.625 }, 00:17:21.625 { 00:17:21.625 "name": "pt3", 00:17:21.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.625 "is_configured": true, 00:17:21.625 "data_offset": 2048, 00:17:21.625 "data_size": 63488 00:17:21.625 }, 00:17:21.625 { 00:17:21.625 "name": "pt4", 00:17:21.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.625 "is_configured": true, 00:17:21.625 "data_offset": 2048, 00:17:21.625 "data_size": 63488 00:17:21.625 } 00:17:21.625 ] 00:17:21.625 }' 00:17:21.625 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.625 15:05:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.192 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:22.192 15:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:22.449 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:22.449 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.449 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:22.706 [2024-07-12 15:05:48.292538] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.706 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2df57484-4060-11ef-b2a4-e9dca065e82e '!=' 2df57484-4060-11ef-b2a4-e9dca065e82e ']' 00:17:22.706 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64655 00:17:22.706 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64655 ']' 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64655 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64655 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:22.707 killing process with pid 64655 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64655' 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64655 00:17:22.707 [2024-07-12 15:05:48.321921] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.707 [2024-07-12 15:05:48.321944] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.707 [2024-07-12 15:05:48.321961] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.707 [2024-07-12 15:05:48.321965] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x35134b834780 name raid_bdev1, state offline 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64655 00:17:22.707 [2024-07-12 15:05:48.345555] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:22.707 00:17:22.707 real 0m21.492s 00:17:22.707 user 0m39.072s 00:17:22.707 sys 0m3.089s 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.707 15:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.707 ************************************ 00:17:22.707 END TEST raid_superblock_test 00:17:22.707 ************************************ 00:17:22.965 15:05:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:22.966 15:05:48 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:22.966 15:05:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:22.966 15:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.966 15:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.966 ************************************ 00:17:22.966 START TEST raid_read_error_test 00:17:22.966 
************************************ 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BOUvodzgHR 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65291 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65291 /var/tmp/spdk-raid.sock 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65291 ']' 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.966 
15:05:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.966 15:05:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.966 [2024-07-12 15:05:48.589162] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:17:22.966 [2024-07-12 15:05:48.589329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:23.532 EAL: TSC is not safe to use in SMP mode 00:17:23.532 EAL: TSC is not invariant 00:17:23.532 [2024-07-12 15:05:49.142976] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.532 [2024-07-12 15:05:49.231020] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:23.532 [2024-07-12 15:05:49.233292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.532 [2024-07-12 15:05:49.234162] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.532 [2024-07-12 15:05:49.234178] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.099 15:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.099 15:05:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:24.099 15:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:24.099 15:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.099 BaseBdev1_malloc 00:17:24.099 15:05:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:24.357 true 00:17:24.357 15:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:24.616 [2024-07-12 15:05:50.390555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:24.616 [2024-07-12 15:05:50.390643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.616 [2024-07-12 15:05:50.390688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a95e5034780 00:17:24.616 [2024-07-12 15:05:50.390697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.616 [2024-07-12 15:05:50.391404] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.616 [2024-07-12 15:05:50.391435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.616 BaseBdev1 
00:17:24.616 15:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:24.616 15:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.874 BaseBdev2_malloc 00:17:24.875 15:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:25.132 true 00:17:25.133 15:05:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:25.391 [2024-07-12 15:05:51.150591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:25.391 [2024-07-12 15:05:51.150656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.391 [2024-07-12 15:05:51.150700] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a95e5034c80 00:17:25.391 [2024-07-12 15:05:51.150709] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.391 [2024-07-12 15:05:51.151380] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.391 [2024-07-12 15:05:51.151406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.391 BaseBdev2 00:17:25.391 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.391 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:25.649 BaseBdev3_malloc 00:17:25.649 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:25.907 true 00:17:25.907 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:26.164 [2024-07-12 15:05:51.866634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:26.164 [2024-07-12 15:05:51.866712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.164 [2024-07-12 15:05:51.866755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a95e5035180 00:17:26.164 [2024-07-12 15:05:51.866764] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.164 [2024-07-12 15:05:51.867467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.164 [2024-07-12 15:05:51.867506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:26.164 BaseBdev3 00:17:26.165 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.165 15:05:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:26.422 BaseBdev4_malloc 00:17:26.422 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:17:26.681 true 00:17:26.681 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:26.940 [2024-07-12 15:05:52.686685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:26.940 [2024-07-12 15:05:52.686741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.940 [2024-07-12 15:05:52.686770] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a95e5035680 00:17:26.940 [2024-07-12 15:05:52.686794] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.940 [2024-07-12 15:05:52.687473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.940 [2024-07-12 15:05:52.687501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:26.940 BaseBdev4 00:17:26.940 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:27.200 [2024-07-12 15:05:52.914714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.200 [2024-07-12 15:05:52.915365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.200 [2024-07-12 15:05:52.915392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.200 [2024-07-12 15:05:52.915407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:27.200 [2024-07-12 15:05:52.915481] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a95e5035900 00:17:27.200 [2024-07-12 15:05:52.915488] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.200 [2024-07-12 15:05:52.915530] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a95e50a0e20 00:17:27.200 [2024-07-12 15:05:52.915614] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a95e5035900 00:17:27.200 [2024-07-12 15:05:52.915619] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a95e5035900 00:17:27.200 [2024-07-12 15:05:52.915649] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.200 15:05:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.463 15:05:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.463 "name": "raid_bdev1", 00:17:27.463 "uuid": "3b62daa2-4060-11ef-b2a4-e9dca065e82e", 00:17:27.463 "strip_size_kb": 0, 00:17:27.463 "state": "online", 00:17:27.463 "raid_level": "raid1", 00:17:27.463 "superblock": true, 00:17:27.463 "num_base_bdevs": 4, 00:17:27.463 "num_base_bdevs_discovered": 4, 00:17:27.463 "num_base_bdevs_operational": 4, 00:17:27.463 "base_bdevs_list": [ 00:17:27.463 { 00:17:27.463 "name": "BaseBdev1", 00:17:27.463 "uuid": "758b08be-2771-3b56-8fe9-d08d783dabfd", 00:17:27.463 "is_configured": true, 00:17:27.463 "data_offset": 2048, 00:17:27.463 "data_size": 63488 00:17:27.463 }, 00:17:27.463 { 00:17:27.463 "name": "BaseBdev2", 00:17:27.463 "uuid": "9b0115df-2b01-735c-8744-f4e2c75badca", 00:17:27.463 "is_configured": true, 00:17:27.463 "data_offset": 2048, 00:17:27.463 "data_size": 63488 00:17:27.463 }, 00:17:27.463 { 00:17:27.463 "name": "BaseBdev3", 00:17:27.463 "uuid": "3e3dab01-c678-5e5a-9ab4-84762d349224", 00:17:27.463 "is_configured": true, 00:17:27.463 "data_offset": 2048, 00:17:27.463 "data_size": 63488 00:17:27.463 }, 00:17:27.463 { 00:17:27.463 "name": "BaseBdev4", 00:17:27.463 "uuid": "4dd2ac6e-10f7-3353-9ebe-b8a64403f67b", 00:17:27.463 "is_configured": true, 00:17:27.463 "data_offset": 2048, 00:17:27.463 "data_size": 63488 00:17:27.463 } 00:17:27.463 ] 00:17:27.463 }' 00:17:27.463 15:05:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.463 15:05:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.721 15:05:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:27.721 15:05:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:27.979 [2024-07-12 15:05:53.654918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a95e50a0ec0 00:17:28.924 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.182 15:05:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.441 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.441 "name": "raid_bdev1", 00:17:29.441 "uuid": "3b62daa2-4060-11ef-b2a4-e9dca065e82e", 00:17:29.441 "strip_size_kb": 0, 00:17:29.441 "state": "online", 00:17:29.441 "raid_level": "raid1", 00:17:29.441 "superblock": true, 00:17:29.441 "num_base_bdevs": 4, 00:17:29.441 "num_base_bdevs_discovered": 4, 00:17:29.441 "num_base_bdevs_operational": 4, 00:17:29.441 "base_bdevs_list": [ 00:17:29.441 { 00:17:29.441 "name": "BaseBdev1", 00:17:29.441 "uuid": "758b08be-2771-3b56-8fe9-d08d783dabfd", 00:17:29.441 "is_configured": true, 00:17:29.441 "data_offset": 2048, 00:17:29.441 "data_size": 63488 00:17:29.441 }, 00:17:29.441 { 00:17:29.441 "name": "BaseBdev2", 00:17:29.441 "uuid": "9b0115df-2b01-735c-8744-f4e2c75badca", 00:17:29.441 "is_configured": true, 00:17:29.441 "data_offset": 2048, 00:17:29.441 "data_size": 63488 00:17:29.441 }, 00:17:29.441 { 00:17:29.441 "name": "BaseBdev3", 00:17:29.441 "uuid": "3e3dab01-c678-5e5a-9ab4-84762d349224", 00:17:29.441 "is_configured": true, 00:17:29.441 "data_offset": 2048, 00:17:29.441 "data_size": 63488 00:17:29.441 }, 00:17:29.441 { 00:17:29.441 "name": "BaseBdev4", 00:17:29.441 "uuid": "4dd2ac6e-10f7-3353-9ebe-b8a64403f67b", 00:17:29.441 "is_configured": true, 00:17:29.441 "data_offset": 2048, 00:17:29.441 "data_size": 63488 00:17:29.441 } 00:17:29.441 ] 00:17:29.441 }' 00:17:29.441 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.441 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.699 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.957 [2024-07-12 15:05:55.706968] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.957 [2024-07-12 15:05:55.706998] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.957 [2024-07-12 15:05:55.707376] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.957 [2024-07-12 15:05:55.707388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.957 [2024-07-12 15:05:55.707415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.957 [2024-07-12 15:05:55.707419] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a95e5035900 name raid_bdev1, state offline 
00:17:29.957 0 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65291 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65291 ']' 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65291 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65291 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:29.957 killing process with pid 65291 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65291' 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65291 00:17:29.957 [2024-07-12 15:05:55.743284] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.957 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65291 00:17:29.957 [2024-07-12 15:05:55.766907] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BOUvodzgHR 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:30.216 00:17:30.216 real 0m7.380s 00:17:30.216 user 0m11.713s 00:17:30.216 sys 0m1.265s 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.216 ************************************ 00:17:30.216 END TEST raid_read_error_test 00:17:30.216 ************************************ 00:17:30.216 15:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.216 15:05:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:30.216 15:05:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:30.216 15:05:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:30.216 15:05:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.216 15:05:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.216 ************************************ 00:17:30.216 START TEST raid_write_error_test 00:17:30.216 ************************************ 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 
00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ix8Yr5TMmX 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65429 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65429 /var/tmp/spdk-raid.sock 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65429 ']' 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:30.216 15:05:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.216 15:05:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.216 [2024-07-12 15:05:56.018316] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:17:30.216 [2024-07-12 15:05:56.018535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:30.782 EAL: TSC is not safe to use in SMP mode 00:17:30.782 EAL: TSC is not invariant 00:17:30.782 [2024-07-12 15:05:56.550277] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.041 [2024-07-12 15:05:56.645227] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:31.041 [2024-07-12 15:05:56.647757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.041 [2024-07-12 15:05:56.648709] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.041 [2024-07-12 15:05:56.648726] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.299 15:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.299 15:05:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:31.299 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:31.299 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:31.558 BaseBdev1_malloc 00:17:31.558 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:31.816 true 00:17:31.816 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:32.135 [2024-07-12 15:05:57.789712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:32.135 [2024-07-12 15:05:57.789785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.135 [2024-07-12 15:05:57.789816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f27eec34780 00:17:32.135 [2024-07-12 15:05:57.789825] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.135 [2024-07-12 15:05:57.790533] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.135 [2024-07-12 15:05:57.790571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:32.135 BaseBdev1 00:17:32.135 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:17:32.135 15:05:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:32.394 BaseBdev2_malloc 00:17:32.394 15:05:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:32.651 true 00:17:32.651 15:05:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:32.909 [2024-07-12 15:05:58.545715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:32.909 [2024-07-12 15:05:58.545799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.909 [2024-07-12 15:05:58.545826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f27eec34c80 00:17:32.909 [2024-07-12 15:05:58.545835] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.910 [2024-07-12 15:05:58.546497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.910 [2024-07-12 15:05:58.546525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:32.910 BaseBdev2 00:17:32.910 15:05:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:32.910 15:05:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:33.168 BaseBdev3_malloc 00:17:33.168 15:05:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:33.426 true 00:17:33.426 15:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:33.683 [2024-07-12 15:05:59.305749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:33.683 [2024-07-12 15:05:59.305827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.683 [2024-07-12 15:05:59.305858] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f27eec35180 00:17:33.683 [2024-07-12 15:05:59.305867] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.683 [2024-07-12 15:05:59.306563] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.683 [2024-07-12 15:05:59.306588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:33.683 BaseBdev3 00:17:33.683 15:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:33.683 15:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:33.941 BaseBdev4_malloc 00:17:33.941 15:05:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:34.198 true 00:17:34.198 15:05:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:34.457 [2024-07-12 15:06:00.133823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:34.457 [2024-07-12 15:06:00.133891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.457 [2024-07-12 15:06:00.133933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f27eec35680 00:17:34.457 [2024-07-12 15:06:00.133942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.457 [2024-07-12 15:06:00.134620] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.457 [2024-07-12 15:06:00.134645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:34.457 BaseBdev4 00:17:34.457 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:34.714 [2024-07-12 15:06:00.389835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.714 [2024-07-12 15:06:00.390434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.714 [2024-07-12 15:06:00.390458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.714 [2024-07-12 15:06:00.390473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.714 [2024-07-12 15:06:00.390548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f27eec35900 00:17:34.714 [2024-07-12 15:06:00.390554] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:34.714 [2024-07-12 15:06:00.390584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f27eeca0e20 00:17:34.714 [2024-07-12 15:06:00.390671] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f27eec35900 00:17:34.714 [2024-07-12 15:06:00.390675] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f27eec35900 00:17:34.714 [2024-07-12 15:06:00.390702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.715 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.025 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.025 "name": "raid_bdev1", 00:17:35.025 "uuid": "3fd77796-4060-11ef-b2a4-e9dca065e82e", 00:17:35.025 "strip_size_kb": 0, 00:17:35.025 "state": "online", 00:17:35.025 "raid_level": "raid1", 00:17:35.025 "superblock": true, 00:17:35.025 "num_base_bdevs": 4, 00:17:35.025 "num_base_bdevs_discovered": 4, 00:17:35.025 "num_base_bdevs_operational": 4, 00:17:35.025 "base_bdevs_list": [ 00:17:35.025 { 00:17:35.025 "name": "BaseBdev1", 00:17:35.025 "uuid": "d2bd515f-6f3e-2056-bdaa-5dce967131f7", 00:17:35.025 "is_configured": true, 00:17:35.025 "data_offset": 2048, 00:17:35.025 "data_size": 63488 00:17:35.025 }, 00:17:35.025 { 00:17:35.025 "name": "BaseBdev2", 00:17:35.025 "uuid": "b69e461f-dada-525f-b805-64d674beca73", 00:17:35.025 "is_configured": true, 00:17:35.025 "data_offset": 2048, 00:17:35.025 "data_size": 63488 00:17:35.025 }, 00:17:35.025 { 00:17:35.025 "name": "BaseBdev3", 00:17:35.025 "uuid": "1b944930-e5f9-285c-8199-7e91da5838b8", 00:17:35.025 "is_configured": true, 00:17:35.025 "data_offset": 2048, 00:17:35.025 "data_size": 63488 00:17:35.025 }, 00:17:35.025 { 00:17:35.025 "name": "BaseBdev4", 00:17:35.025 "uuid": "0d752a10-5209-975f-b09b-447599bbec61", 00:17:35.025 "is_configured": true, 00:17:35.025 "data_offset": 2048, 00:17:35.025 "data_size": 63488 00:17:35.025 } 00:17:35.025 ] 00:17:35.025 }' 00:17:35.025 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.025 15:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.283 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:35.283 15:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:35.283 [2024-07-12 15:06:01.090090] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f27eeca0ec0 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:36.656 [2024-07-12 15:06:02.282701] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:36.656 [2024-07-12 15:06:02.282755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.656 [2024-07-12 15:06:02.282916] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2f27eeca0ec0 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.656 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.914 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.914 "name": "raid_bdev1", 00:17:36.914 "uuid": "3fd77796-4060-11ef-b2a4-e9dca065e82e", 00:17:36.914 "strip_size_kb": 0, 00:17:36.914 "state": "online", 00:17:36.914 "raid_level": "raid1", 00:17:36.914 "superblock": true, 00:17:36.914 "num_base_bdevs": 4, 00:17:36.914 "num_base_bdevs_discovered": 3, 00:17:36.914 "num_base_bdevs_operational": 3, 00:17:36.914 "base_bdevs_list": [ 00:17:36.914 { 00:17:36.914 "name": null, 00:17:36.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.914 "is_configured": false, 00:17:36.914 "data_offset": 2048, 00:17:36.914 "data_size": 63488 00:17:36.914 }, 00:17:36.914 { 00:17:36.914 "name": "BaseBdev2", 00:17:36.914 "uuid": "b69e461f-dada-525f-b805-64d674beca73", 00:17:36.914 "is_configured": true, 00:17:36.914 "data_offset": 2048, 00:17:36.914 "data_size": 63488 00:17:36.914 }, 00:17:36.914 { 00:17:36.915 "name": "BaseBdev3", 00:17:36.915 "uuid": "1b944930-e5f9-285c-8199-7e91da5838b8", 00:17:36.915 "is_configured": true, 00:17:36.915 "data_offset": 2048, 00:17:36.915 "data_size": 63488 00:17:36.915 }, 00:17:36.915 { 00:17:36.915 "name": "BaseBdev4", 00:17:36.915 "uuid": "0d752a10-5209-975f-b09b-447599bbec61", 00:17:36.915 "is_configured": true, 00:17:36.915 "data_offset": 2048, 00:17:36.915 "data_size": 63488 00:17:36.915 } 00:17:36.915 ] 00:17:36.915 }' 00:17:36.915 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.915 15:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.172 15:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:37.429 [2024-07-12 15:06:03.092000] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.429 [2024-07-12 15:06:03.092028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.429 [2024-07-12 15:06:03.092360] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.429 [2024-07-12 15:06:03.092371] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.429 [2024-07-12 15:06:03.092387] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.429 [2024-07-12 15:06:03.092392] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f27eec35900 name raid_bdev1, state offline 00:17:37.429 0 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65429 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65429 ']' 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65429 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65429 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:37.429 killing process with pid 65429 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65429' 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65429 00:17:37.429 [2024-07-12 15:06:03.119638] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.429 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65429 00:17:37.429 [2024-07-12 15:06:03.142568] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ix8Yr5TMmX 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:37.687 00:17:37.687 real 0m7.322s 00:17:37.687 user 0m11.736s 00:17:37.687 sys 0m1.123s 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.687 15:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.687 ************************************ 00:17:37.687 END TEST raid_write_error_test 00:17:37.687 ************************************ 00:17:37.687 15:06:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:37.687 15:06:03 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:17:37.687 15:06:03 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:17:37.687 15:06:03 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:17:37.687 15:06:03 
bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:37.687 15:06:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:37.687 15:06:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.687 15:06:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.687 ************************************ 00:17:37.687 START TEST raid_state_function_test_sb_4k 00:17:37.687 ************************************ 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65565 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65565' 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:37.687 Process raid pid: 65565 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65565 /var/tmp/spdk-raid.sock 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65565 ']' 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.687 15:06:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.687 [2024-07-12 15:06:03.379054] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:17:37.687 [2024-07-12 15:06:03.379267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:38.278 EAL: TSC is not safe to use in SMP mode 00:17:38.278 EAL: TSC is not invariant 00:17:38.278 [2024-07-12 15:06:03.923552] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.278 [2024-07-12 15:06:04.011428] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:38.278 [2024-07-12 15:06:04.013575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.278 [2024-07-12 15:06:04.014353] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.278 [2024-07-12 15:06:04.014368] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:38.850 [2024-07-12 15:06:04.622594] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.850 [2024-07-12 15:06:04.622653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.850 [2024-07-12 15:06:04.622658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.850 [2024-07-12 15:06:04.622667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.850 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.108 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.108 "name": "Existed_Raid", 00:17:39.108 "uuid": "425d55de-4060-11ef-b2a4-e9dca065e82e", 00:17:39.108 "strip_size_kb": 0, 00:17:39.108 "state": "configuring", 00:17:39.108 "raid_level": "raid1", 00:17:39.108 "superblock": true, 00:17:39.108 "num_base_bdevs": 2, 00:17:39.108 "num_base_bdevs_discovered": 0, 00:17:39.108 "num_base_bdevs_operational": 2, 00:17:39.108 "base_bdevs_list": [ 00:17:39.108 { 00:17:39.108 "name": "BaseBdev1", 00:17:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.108 "is_configured": false, 00:17:39.108 "data_offset": 0, 
00:17:39.108 "data_size": 0 00:17:39.108 }, 00:17:39.108 { 00:17:39.108 "name": "BaseBdev2", 00:17:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.108 "is_configured": false, 00:17:39.108 "data_offset": 0, 00:17:39.108 "data_size": 0 00:17:39.108 } 00:17:39.108 ] 00:17:39.108 }' 00:17:39.108 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.108 15:06:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.366 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.625 [2024-07-12 15:06:05.446623] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.625 [2024-07-12 15:06:05.446664] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ad04834500 name Existed_Raid, state configuring 00:17:39.901 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:39.901 [2024-07-12 15:06:05.682657] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.901 [2024-07-12 15:06:05.682721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.901 [2024-07-12 15:06:05.682727] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.901 [2024-07-12 15:06:05.682752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.901 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:17:40.160 [2024-07-12 15:06:05.915691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.160 BaseBdev1 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.160 15:06:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.420 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.678 [ 00:17:40.678 { 00:17:40.678 "name": "BaseBdev1", 00:17:40.678 "aliases": [ 00:17:40.678 "43227e8e-4060-11ef-b2a4-e9dca065e82e" 00:17:40.678 ], 00:17:40.678 "product_name": "Malloc disk", 00:17:40.678 "block_size": 4096, 00:17:40.678 "num_blocks": 8192, 00:17:40.678 "uuid": "43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:40.678 
"assigned_rate_limits": { 00:17:40.678 "rw_ios_per_sec": 0, 00:17:40.678 "rw_mbytes_per_sec": 0, 00:17:40.678 "r_mbytes_per_sec": 0, 00:17:40.678 "w_mbytes_per_sec": 0 00:17:40.678 }, 00:17:40.678 "claimed": true, 00:17:40.678 "claim_type": "exclusive_write", 00:17:40.678 "zoned": false, 00:17:40.678 "supported_io_types": { 00:17:40.678 "read": true, 00:17:40.678 "write": true, 00:17:40.678 "unmap": true, 00:17:40.678 "flush": true, 00:17:40.678 "reset": true, 00:17:40.678 "nvme_admin": false, 00:17:40.678 "nvme_io": false, 00:17:40.678 "nvme_io_md": false, 00:17:40.678 "write_zeroes": true, 00:17:40.678 "zcopy": true, 00:17:40.678 "get_zone_info": false, 00:17:40.678 "zone_management": false, 00:17:40.678 "zone_append": false, 00:17:40.678 "compare": false, 00:17:40.678 "compare_and_write": false, 00:17:40.678 "abort": true, 00:17:40.678 "seek_hole": false, 00:17:40.678 "seek_data": false, 00:17:40.678 "copy": true, 00:17:40.678 "nvme_iov_md": false 00:17:40.678 }, 00:17:40.678 "memory_domains": [ 00:17:40.678 { 00:17:40.678 "dma_device_id": "system", 00:17:40.678 "dma_device_type": 1 00:17:40.678 }, 00:17:40.678 { 00:17:40.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.678 "dma_device_type": 2 00:17:40.678 } 00:17:40.678 ], 00:17:40.678 "driver_specific": {} 00:17:40.678 } 00:17:40.678 ] 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.678 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.936 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.936 "name": "Existed_Raid", 00:17:40.936 "uuid": "42ff169c-4060-11ef-b2a4-e9dca065e82e", 00:17:40.936 "strip_size_kb": 0, 00:17:40.936 "state": "configuring", 00:17:40.936 "raid_level": "raid1", 00:17:40.936 "superblock": true, 00:17:40.936 "num_base_bdevs": 2, 00:17:40.936 "num_base_bdevs_discovered": 1, 00:17:40.936 "num_base_bdevs_operational": 2, 00:17:40.936 "base_bdevs_list": [ 00:17:40.936 { 00:17:40.936 "name": "BaseBdev1", 00:17:40.936 "uuid": 
"43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:40.936 "is_configured": true, 00:17:40.936 "data_offset": 256, 00:17:40.936 "data_size": 7936 00:17:40.936 }, 00:17:40.936 { 00:17:40.936 "name": "BaseBdev2", 00:17:40.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.936 "is_configured": false, 00:17:40.936 "data_offset": 0, 00:17:40.936 "data_size": 0 00:17:40.936 } 00:17:40.936 ] 00:17:40.936 }' 00:17:40.936 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.936 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.196 15:06:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.454 [2024-07-12 15:06:07.194771] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.454 [2024-07-12 15:06:07.194803] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ad04834500 name Existed_Raid, state configuring 00:17:41.454 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:41.712 [2024-07-12 15:06:07.426806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.712 [2024-07-12 15:06:07.427605] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.712 [2024-07-12 15:06:07.427645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.712 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.972 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.972 "name": "Existed_Raid", 00:17:41.972 "uuid": "44093937-4060-11ef-b2a4-e9dca065e82e", 00:17:41.972 "strip_size_kb": 0, 00:17:41.972 "state": "configuring", 00:17:41.972 "raid_level": "raid1", 00:17:41.972 "superblock": true, 00:17:41.972 "num_base_bdevs": 2, 00:17:41.972 "num_base_bdevs_discovered": 1, 00:17:41.972 "num_base_bdevs_operational": 2, 00:17:41.972 "base_bdevs_list": [ 00:17:41.972 { 00:17:41.972 "name": "BaseBdev1", 00:17:41.972 "uuid": "43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:41.972 "is_configured": true, 00:17:41.972 "data_offset": 256, 00:17:41.972 "data_size": 7936 00:17:41.972 }, 00:17:41.972 { 00:17:41.972 "name": "BaseBdev2", 00:17:41.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.972 "is_configured": false, 00:17:41.972 "data_offset": 0, 00:17:41.972 "data_size": 0 00:17:41.972 } 00:17:41.972 ] 00:17:41.972 }' 00:17:41.972 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.972 15:06:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.230 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:17:42.490 [2024-07-12 15:06:08.266982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.490 [2024-07-12 15:06:08.267052] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15ad04834a00 00:17:42.490 [2024-07-12 15:06:08.267059] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.490 [2024-07-12 15:06:08.267081] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ad04897e20 00:17:42.490 [2024-07-12 15:06:08.267134] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15ad04834a00 00:17:42.490 [2024-07-12 15:06:08.267139] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15ad04834a00 00:17:42.490 [2024-07-12 15:06:08.267160] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.490 BaseBdev2 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.490 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.748 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.007 [ 00:17:43.007 { 00:17:43.007 "name": "BaseBdev2", 00:17:43.007 "aliases": [ 00:17:43.007 "448967a3-4060-11ef-b2a4-e9dca065e82e" 00:17:43.007 ], 00:17:43.007 
"product_name": "Malloc disk", 00:17:43.007 "block_size": 4096, 00:17:43.007 "num_blocks": 8192, 00:17:43.007 "uuid": "448967a3-4060-11ef-b2a4-e9dca065e82e", 00:17:43.007 "assigned_rate_limits": { 00:17:43.007 "rw_ios_per_sec": 0, 00:17:43.007 "rw_mbytes_per_sec": 0, 00:17:43.007 "r_mbytes_per_sec": 0, 00:17:43.007 "w_mbytes_per_sec": 0 00:17:43.007 }, 00:17:43.007 "claimed": true, 00:17:43.007 "claim_type": "exclusive_write", 00:17:43.007 "zoned": false, 00:17:43.007 "supported_io_types": { 00:17:43.007 "read": true, 00:17:43.007 "write": true, 00:17:43.007 "unmap": true, 00:17:43.007 "flush": true, 00:17:43.007 "reset": true, 00:17:43.007 "nvme_admin": false, 00:17:43.007 "nvme_io": false, 00:17:43.007 "nvme_io_md": false, 00:17:43.007 "write_zeroes": true, 00:17:43.007 "zcopy": true, 00:17:43.007 "get_zone_info": false, 00:17:43.007 "zone_management": false, 00:17:43.007 "zone_append": false, 00:17:43.007 "compare": false, 00:17:43.007 "compare_and_write": false, 00:17:43.007 "abort": true, 00:17:43.007 "seek_hole": false, 00:17:43.007 "seek_data": false, 00:17:43.007 "copy": true, 00:17:43.007 "nvme_iov_md": false 00:17:43.007 }, 00:17:43.007 "memory_domains": [ 00:17:43.007 { 00:17:43.007 "dma_device_id": "system", 00:17:43.007 "dma_device_type": 1 00:17:43.007 }, 00:17:43.007 { 00:17:43.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.007 "dma_device_type": 2 00:17:43.007 } 00:17:43.007 ], 00:17:43.007 "driver_specific": {} 00:17:43.007 } 00:17:43.007 ] 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.266 15:06:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.527 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.527 "name": "Existed_Raid", 00:17:43.527 "uuid": 
"44093937-4060-11ef-b2a4-e9dca065e82e", 00:17:43.527 "strip_size_kb": 0, 00:17:43.527 "state": "online", 00:17:43.527 "raid_level": "raid1", 00:17:43.527 "superblock": true, 00:17:43.527 "num_base_bdevs": 2, 00:17:43.527 "num_base_bdevs_discovered": 2, 00:17:43.527 "num_base_bdevs_operational": 2, 00:17:43.527 "base_bdevs_list": [ 00:17:43.527 { 00:17:43.527 "name": "BaseBdev1", 00:17:43.527 "uuid": "43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:43.527 "is_configured": true, 00:17:43.527 "data_offset": 256, 00:17:43.527 "data_size": 7936 00:17:43.527 }, 00:17:43.527 { 00:17:43.527 "name": "BaseBdev2", 00:17:43.527 "uuid": "448967a3-4060-11ef-b2a4-e9dca065e82e", 00:17:43.527 "is_configured": true, 00:17:43.527 "data_offset": 256, 00:17:43.527 "data_size": 7936 00:17:43.527 } 00:17:43.527 ] 00:17:43.527 }' 00:17:43.527 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.527 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.787 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:44.045 [2024-07-12 15:06:09.655070] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.045 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:44.045 "name": "Existed_Raid", 00:17:44.045 "aliases": [ 00:17:44.045 "44093937-4060-11ef-b2a4-e9dca065e82e" 00:17:44.045 ], 00:17:44.045 "product_name": "Raid Volume", 00:17:44.045 "block_size": 4096, 00:17:44.045 "num_blocks": 7936, 00:17:44.045 "uuid": "44093937-4060-11ef-b2a4-e9dca065e82e", 00:17:44.045 "assigned_rate_limits": { 00:17:44.045 "rw_ios_per_sec": 0, 00:17:44.045 "rw_mbytes_per_sec": 0, 00:17:44.045 "r_mbytes_per_sec": 0, 00:17:44.045 "w_mbytes_per_sec": 0 00:17:44.045 }, 00:17:44.045 "claimed": false, 00:17:44.045 "zoned": false, 00:17:44.045 "supported_io_types": { 00:17:44.045 "read": true, 00:17:44.045 "write": true, 00:17:44.045 "unmap": false, 00:17:44.045 "flush": false, 00:17:44.045 "reset": true, 00:17:44.045 "nvme_admin": false, 00:17:44.045 "nvme_io": false, 00:17:44.045 "nvme_io_md": false, 00:17:44.045 "write_zeroes": true, 00:17:44.045 "zcopy": false, 00:17:44.045 "get_zone_info": false, 00:17:44.045 "zone_management": false, 00:17:44.045 "zone_append": false, 00:17:44.045 "compare": false, 00:17:44.046 "compare_and_write": false, 00:17:44.046 "abort": false, 00:17:44.046 "seek_hole": false, 00:17:44.046 "seek_data": false, 00:17:44.046 "copy": false, 00:17:44.046 "nvme_iov_md": false 00:17:44.046 }, 00:17:44.046 "memory_domains": 
[ 00:17:44.046 { 00:17:44.046 "dma_device_id": "system", 00:17:44.046 "dma_device_type": 1 00:17:44.046 }, 00:17:44.046 { 00:17:44.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.046 "dma_device_type": 2 00:17:44.046 }, 00:17:44.046 { 00:17:44.046 "dma_device_id": "system", 00:17:44.046 "dma_device_type": 1 00:17:44.046 }, 00:17:44.046 { 00:17:44.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.046 "dma_device_type": 2 00:17:44.046 } 00:17:44.046 ], 00:17:44.046 "driver_specific": { 00:17:44.046 "raid": { 00:17:44.046 "uuid": "44093937-4060-11ef-b2a4-e9dca065e82e", 00:17:44.046 "strip_size_kb": 0, 00:17:44.046 "state": "online", 00:17:44.046 "raid_level": "raid1", 00:17:44.046 "superblock": true, 00:17:44.046 "num_base_bdevs": 2, 00:17:44.046 "num_base_bdevs_discovered": 2, 00:17:44.046 "num_base_bdevs_operational": 2, 00:17:44.046 "base_bdevs_list": [ 00:17:44.046 { 00:17:44.046 "name": "BaseBdev1", 00:17:44.046 "uuid": "43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:44.046 "is_configured": true, 00:17:44.046 "data_offset": 256, 00:17:44.046 "data_size": 7936 00:17:44.046 }, 00:17:44.046 { 00:17:44.046 "name": "BaseBdev2", 00:17:44.046 "uuid": "448967a3-4060-11ef-b2a4-e9dca065e82e", 00:17:44.046 "is_configured": true, 00:17:44.046 "data_offset": 256, 00:17:44.046 "data_size": 7936 00:17:44.046 } 00:17:44.046 ] 00:17:44.046 } 00:17:44.046 } 00:17:44.046 }' 00:17:44.046 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.046 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:44.046 BaseBdev2' 00:17:44.046 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.046 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:44.046 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.304 "name": "BaseBdev1", 00:17:44.304 "aliases": [ 00:17:44.304 "43227e8e-4060-11ef-b2a4-e9dca065e82e" 00:17:44.304 ], 00:17:44.304 "product_name": "Malloc disk", 00:17:44.304 "block_size": 4096, 00:17:44.304 "num_blocks": 8192, 00:17:44.304 "uuid": "43227e8e-4060-11ef-b2a4-e9dca065e82e", 00:17:44.304 "assigned_rate_limits": { 00:17:44.304 "rw_ios_per_sec": 0, 00:17:44.304 "rw_mbytes_per_sec": 0, 00:17:44.304 "r_mbytes_per_sec": 0, 00:17:44.304 "w_mbytes_per_sec": 0 00:17:44.304 }, 00:17:44.304 "claimed": true, 00:17:44.304 "claim_type": "exclusive_write", 00:17:44.304 "zoned": false, 00:17:44.304 "supported_io_types": { 00:17:44.304 "read": true, 00:17:44.304 "write": true, 00:17:44.304 "unmap": true, 00:17:44.304 "flush": true, 00:17:44.304 "reset": true, 00:17:44.304 "nvme_admin": false, 00:17:44.304 "nvme_io": false, 00:17:44.304 "nvme_io_md": false, 00:17:44.304 "write_zeroes": true, 00:17:44.304 "zcopy": true, 00:17:44.304 "get_zone_info": false, 00:17:44.304 "zone_management": false, 00:17:44.304 "zone_append": false, 00:17:44.304 "compare": false, 00:17:44.304 "compare_and_write": false, 00:17:44.304 "abort": true, 00:17:44.304 "seek_hole": false, 00:17:44.304 "seek_data": false, 00:17:44.304 "copy": true, 00:17:44.304 "nvme_iov_md": false 00:17:44.304 }, 00:17:44.304 
"memory_domains": [ 00:17:44.304 { 00:17:44.304 "dma_device_id": "system", 00:17:44.304 "dma_device_type": 1 00:17:44.304 }, 00:17:44.304 { 00:17:44.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.304 "dma_device_type": 2 00:17:44.304 } 00:17:44.304 ], 00:17:44.304 "driver_specific": {} 00:17:44.304 }' 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.304 15:06:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:44.304 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.562 "name": "BaseBdev2", 00:17:44.562 "aliases": [ 00:17:44.562 "448967a3-4060-11ef-b2a4-e9dca065e82e" 00:17:44.562 ], 00:17:44.562 "product_name": "Malloc disk", 00:17:44.562 "block_size": 4096, 00:17:44.562 "num_blocks": 8192, 00:17:44.562 "uuid": "448967a3-4060-11ef-b2a4-e9dca065e82e", 00:17:44.562 "assigned_rate_limits": { 00:17:44.562 "rw_ios_per_sec": 0, 00:17:44.562 "rw_mbytes_per_sec": 0, 00:17:44.562 "r_mbytes_per_sec": 0, 00:17:44.562 "w_mbytes_per_sec": 0 00:17:44.562 }, 00:17:44.562 "claimed": true, 00:17:44.562 "claim_type": "exclusive_write", 00:17:44.562 "zoned": false, 00:17:44.562 "supported_io_types": { 00:17:44.562 "read": true, 00:17:44.562 "write": true, 00:17:44.562 "unmap": true, 00:17:44.562 "flush": true, 00:17:44.562 "reset": true, 00:17:44.562 "nvme_admin": false, 00:17:44.562 "nvme_io": false, 00:17:44.562 "nvme_io_md": false, 00:17:44.562 "write_zeroes": true, 00:17:44.562 "zcopy": true, 00:17:44.562 "get_zone_info": false, 00:17:44.562 "zone_management": false, 00:17:44.562 "zone_append": false, 00:17:44.562 "compare": false, 00:17:44.562 "compare_and_write": false, 00:17:44.562 "abort": true, 00:17:44.562 "seek_hole": false, 00:17:44.562 "seek_data": false, 00:17:44.562 "copy": true, 00:17:44.562 "nvme_iov_md": false 00:17:44.562 }, 00:17:44.562 "memory_domains": [ 00:17:44.562 { 00:17:44.562 "dma_device_id": "system", 00:17:44.562 
"dma_device_type": 1 00:17:44.562 }, 00:17:44.562 { 00:17:44.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.562 "dma_device_type": 2 00:17:44.562 } 00:17:44.562 ], 00:17:44.562 "driver_specific": {} 00:17:44.562 }' 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.562 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:44.820 [2024-07-12 15:06:10.651124] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.079 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:45.079 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:45.079 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.080 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.406 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.406 "name": "Existed_Raid", 00:17:45.406 "uuid": "44093937-4060-11ef-b2a4-e9dca065e82e", 00:17:45.406 "strip_size_kb": 0, 00:17:45.406 "state": "online", 00:17:45.406 "raid_level": "raid1", 00:17:45.406 "superblock": true, 00:17:45.406 "num_base_bdevs": 2, 00:17:45.406 "num_base_bdevs_discovered": 1, 00:17:45.406 "num_base_bdevs_operational": 1, 00:17:45.406 "base_bdevs_list": [ 00:17:45.406 { 00:17:45.406 "name": null, 00:17:45.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.406 "is_configured": false, 00:17:45.406 "data_offset": 256, 00:17:45.406 "data_size": 7936 00:17:45.406 }, 00:17:45.406 { 00:17:45.406 "name": "BaseBdev2", 00:17:45.406 "uuid": "448967a3-4060-11ef-b2a4-e9dca065e82e", 00:17:45.406 "is_configured": true, 00:17:45.406 "data_offset": 256, 00:17:45.406 "data_size": 7936 00:17:45.406 } 00:17:45.406 ] 00:17:45.406 }' 00:17:45.406 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.406 15:06:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.665 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:45.665 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:45.665 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.665 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:45.923 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:45.923 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.923 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:46.181 [2024-07-12 15:06:11.793234] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.181 [2024-07-12 15:06:11.793314] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.181 [2024-07-12 15:06:11.799518] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.181 [2024-07-12 15:06:11.799539] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.181 [2024-07-12 15:06:11.799544] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ad04834a00 name Existed_Raid, state offline 00:17:46.181 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:46.181 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:46.182 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.182 15:06:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65565 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65565 ']' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65565 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65565 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:46.440 killing process with pid 65565 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65565' 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65565 00:17:46.440 [2024-07-12 15:06:12.093763] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.440 [2024-07-12 15:06:12.093796] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.440 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65565 00:17:46.699 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:17:46.699 00:17:46.699 real 0m8.905s 00:17:46.699 user 0m15.591s 00:17:46.699 sys 0m1.467s 00:17:46.699 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.699 ************************************ 00:17:46.699 END TEST raid_state_function_test_sb_4k 00:17:46.699 15:06:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.699 ************************************ 00:17:46.699 15:06:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:46.699 15:06:12 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:46.699 15:06:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:46.699 15:06:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.699 15:06:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.699 ************************************ 00:17:46.699 START TEST raid_superblock_test_4k 00:17:46.699 ************************************ 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # 
local raid_level=raid1 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:46.699 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65839 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65839 /var/tmp/spdk-raid.sock 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65839 ']' 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.700 15:06:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 [2024-07-12 15:06:12.330889] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:17:46.700 [2024-07-12 15:06:12.331065] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:47.266 EAL: TSC is not safe to use in SMP mode 00:17:47.266 EAL: TSC is not invariant 00:17:47.266 [2024-07-12 15:06:12.893188] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.266 [2024-07-12 15:06:12.981093] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:47.266 [2024-07-12 15:06:12.983282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.266 [2024-07-12 15:06:12.984041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.266 [2024-07-12 15:06:12.984055] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.524 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:17:47.783 malloc1 00:17:47.783 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.041 [2024-07-12 15:06:13.827768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.041 [2024-07-12 15:06:13.827860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.041 [2024-07-12 15:06:13.827888] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834780 00:17:48.041 [2024-07-12 15:06:13.827897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.041 [2024-07-12 15:06:13.828785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.041 [2024-07-12 15:06:13.828810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.041 pt1 00:17:48.041 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:48.041 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:48.041 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:48.041 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:48.041 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:48.042 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.042 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.042 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.042 15:06:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:17:48.300 malloc2 00:17:48.300 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.558 [2024-07-12 15:06:14.307833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.558 [2024-07-12 15:06:14.307908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.558 [2024-07-12 15:06:14.307921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834c80 00:17:48.558 [2024-07-12 15:06:14.307929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.558 [2024-07-12 15:06:14.308584] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.558 [2024-07-12 15:06:14.308608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.558 pt2 00:17:48.558 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:48.558 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:48.558 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:48.823 [2024-07-12 15:06:14.563842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.823 [2024-07-12 15:06:14.564446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.823 [2024-07-12 15:06:14.564510] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e5481834f00 00:17:48.823 [2024-07-12 15:06:14.564516] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.823 [2024-07-12 15:06:14.564556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e5481897e20 00:17:48.823 [2024-07-12 15:06:14.564625] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e5481834f00 00:17:48.823 [2024-07-12 15:06:14.564629] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e5481834f00 00:17:48.823 [2024-07-12 15:06:14.564656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.823 15:06:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.824 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.824 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.824 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.824 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.082 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.082 "name": "raid_bdev1", 00:17:49.082 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:49.082 "strip_size_kb": 0, 00:17:49.082 "state": "online", 00:17:49.082 "raid_level": "raid1", 00:17:49.082 "superblock": true, 00:17:49.082 "num_base_bdevs": 2, 00:17:49.082 "num_base_bdevs_discovered": 2, 00:17:49.082 "num_base_bdevs_operational": 2, 00:17:49.082 "base_bdevs_list": [ 00:17:49.082 { 00:17:49.082 "name": "pt1", 00:17:49.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.082 "is_configured": true, 00:17:49.082 "data_offset": 256, 00:17:49.082 "data_size": 7936 00:17:49.082 }, 00:17:49.082 { 00:17:49.082 "name": "pt2", 00:17:49.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.082 "is_configured": true, 00:17:49.082 "data_offset": 256, 00:17:49.082 "data_size": 7936 00:17:49.082 } 00:17:49.082 ] 00:17:49.082 }' 00:17:49.082 15:06:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.082 15:06:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.649 [2024-07-12 15:06:15.423981] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:49.649 "name": "raid_bdev1", 00:17:49.649 "aliases": [ 00:17:49.649 "484a3fb3-4060-11ef-b2a4-e9dca065e82e" 00:17:49.649 ], 00:17:49.649 "product_name": "Raid Volume", 00:17:49.649 "block_size": 4096, 00:17:49.649 "num_blocks": 7936, 00:17:49.649 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:49.649 "assigned_rate_limits": { 00:17:49.649 "rw_ios_per_sec": 0, 00:17:49.649 "rw_mbytes_per_sec": 0, 00:17:49.649 "r_mbytes_per_sec": 0, 00:17:49.649 "w_mbytes_per_sec": 0 00:17:49.649 }, 00:17:49.649 "claimed": false, 00:17:49.649 "zoned": false, 00:17:49.649 "supported_io_types": { 00:17:49.649 "read": true, 00:17:49.649 "write": true, 
00:17:49.649 "unmap": false, 00:17:49.649 "flush": false, 00:17:49.649 "reset": true, 00:17:49.649 "nvme_admin": false, 00:17:49.649 "nvme_io": false, 00:17:49.649 "nvme_io_md": false, 00:17:49.649 "write_zeroes": true, 00:17:49.649 "zcopy": false, 00:17:49.649 "get_zone_info": false, 00:17:49.649 "zone_management": false, 00:17:49.649 "zone_append": false, 00:17:49.649 "compare": false, 00:17:49.649 "compare_and_write": false, 00:17:49.649 "abort": false, 00:17:49.649 "seek_hole": false, 00:17:49.649 "seek_data": false, 00:17:49.649 "copy": false, 00:17:49.649 "nvme_iov_md": false 00:17:49.649 }, 00:17:49.649 "memory_domains": [ 00:17:49.649 { 00:17:49.649 "dma_device_id": "system", 00:17:49.649 "dma_device_type": 1 00:17:49.649 }, 00:17:49.649 { 00:17:49.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.649 "dma_device_type": 2 00:17:49.649 }, 00:17:49.649 { 00:17:49.649 "dma_device_id": "system", 00:17:49.649 "dma_device_type": 1 00:17:49.649 }, 00:17:49.649 { 00:17:49.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.649 "dma_device_type": 2 00:17:49.649 } 00:17:49.649 ], 00:17:49.649 "driver_specific": { 00:17:49.649 "raid": { 00:17:49.649 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:49.649 "strip_size_kb": 0, 00:17:49.649 "state": "online", 00:17:49.649 "raid_level": "raid1", 00:17:49.649 "superblock": true, 00:17:49.649 "num_base_bdevs": 2, 00:17:49.649 "num_base_bdevs_discovered": 2, 00:17:49.649 "num_base_bdevs_operational": 2, 00:17:49.649 "base_bdevs_list": [ 00:17:49.649 { 00:17:49.649 "name": "pt1", 00:17:49.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.649 "is_configured": true, 00:17:49.649 "data_offset": 256, 00:17:49.649 "data_size": 7936 00:17:49.649 }, 00:17:49.649 { 00:17:49.649 "name": "pt2", 00:17:49.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.649 "is_configured": true, 00:17:49.649 "data_offset": 256, 00:17:49.649 "data_size": 7936 00:17:49.649 } 00:17:49.649 ] 00:17:49.649 } 00:17:49.649 } 00:17:49.649 }' 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:49.649 pt2' 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:49.649 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.908 "name": "pt1", 00:17:49.908 "aliases": [ 00:17:49.908 "00000000-0000-0000-0000-000000000001" 00:17:49.908 ], 00:17:49.908 "product_name": "passthru", 00:17:49.908 "block_size": 4096, 00:17:49.908 "num_blocks": 8192, 00:17:49.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.908 "assigned_rate_limits": { 00:17:49.908 "rw_ios_per_sec": 0, 00:17:49.908 "rw_mbytes_per_sec": 0, 00:17:49.908 "r_mbytes_per_sec": 0, 00:17:49.908 "w_mbytes_per_sec": 0 00:17:49.908 }, 00:17:49.908 "claimed": true, 00:17:49.908 "claim_type": "exclusive_write", 00:17:49.908 "zoned": false, 00:17:49.908 "supported_io_types": { 00:17:49.908 "read": true, 00:17:49.908 "write": true, 00:17:49.908 "unmap": true, 00:17:49.908 "flush": true, 
00:17:49.908 "reset": true, 00:17:49.908 "nvme_admin": false, 00:17:49.908 "nvme_io": false, 00:17:49.908 "nvme_io_md": false, 00:17:49.908 "write_zeroes": true, 00:17:49.908 "zcopy": true, 00:17:49.908 "get_zone_info": false, 00:17:49.908 "zone_management": false, 00:17:49.908 "zone_append": false, 00:17:49.908 "compare": false, 00:17:49.908 "compare_and_write": false, 00:17:49.908 "abort": true, 00:17:49.908 "seek_hole": false, 00:17:49.908 "seek_data": false, 00:17:49.908 "copy": true, 00:17:49.908 "nvme_iov_md": false 00:17:49.908 }, 00:17:49.908 "memory_domains": [ 00:17:49.908 { 00:17:49.908 "dma_device_id": "system", 00:17:49.908 "dma_device_type": 1 00:17:49.908 }, 00:17:49.908 { 00:17:49.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.908 "dma_device_type": 2 00:17:49.908 } 00:17:49.908 ], 00:17:49.908 "driver_specific": { 00:17:49.908 "passthru": { 00:17:49.908 "name": "pt1", 00:17:49.908 "base_bdev_name": "malloc1" 00:17:49.908 } 00:17:49.908 } 00:17:49.908 }' 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.908 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.166 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.166 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:50.166 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:50.166 15:06:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.425 "name": "pt2", 00:17:50.425 "aliases": [ 00:17:50.425 "00000000-0000-0000-0000-000000000002" 00:17:50.425 ], 00:17:50.425 "product_name": "passthru", 00:17:50.425 "block_size": 4096, 00:17:50.425 "num_blocks": 8192, 00:17:50.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.425 "assigned_rate_limits": { 00:17:50.425 "rw_ios_per_sec": 0, 00:17:50.425 "rw_mbytes_per_sec": 0, 00:17:50.425 "r_mbytes_per_sec": 0, 00:17:50.425 "w_mbytes_per_sec": 0 00:17:50.425 }, 00:17:50.425 "claimed": true, 00:17:50.425 "claim_type": "exclusive_write", 00:17:50.425 "zoned": false, 00:17:50.425 "supported_io_types": { 00:17:50.425 "read": true, 00:17:50.425 "write": true, 00:17:50.425 "unmap": true, 00:17:50.425 "flush": true, 00:17:50.425 "reset": true, 00:17:50.425 "nvme_admin": false, 00:17:50.425 "nvme_io": false, 
00:17:50.425 "nvme_io_md": false, 00:17:50.425 "write_zeroes": true, 00:17:50.425 "zcopy": true, 00:17:50.425 "get_zone_info": false, 00:17:50.425 "zone_management": false, 00:17:50.425 "zone_append": false, 00:17:50.425 "compare": false, 00:17:50.425 "compare_and_write": false, 00:17:50.425 "abort": true, 00:17:50.425 "seek_hole": false, 00:17:50.425 "seek_data": false, 00:17:50.425 "copy": true, 00:17:50.425 "nvme_iov_md": false 00:17:50.425 }, 00:17:50.425 "memory_domains": [ 00:17:50.425 { 00:17:50.425 "dma_device_id": "system", 00:17:50.425 "dma_device_type": 1 00:17:50.425 }, 00:17:50.425 { 00:17:50.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.425 "dma_device_type": 2 00:17:50.425 } 00:17:50.425 ], 00:17:50.425 "driver_specific": { 00:17:50.425 "passthru": { 00:17:50.425 "name": "pt2", 00:17:50.425 "base_bdev_name": "malloc2" 00:17:50.425 } 00:17:50.425 } 00:17:50.425 }' 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:50.425 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:50.684 [2024-07-12 15:06:16.380021] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.684 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=484a3fb3-4060-11ef-b2a4-e9dca065e82e 00:17:50.684 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 484a3fb3-4060-11ef-b2a4-e9dca065e82e ']' 00:17:50.684 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.942 [2024-07-12 15:06:16.660001] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.942 [2024-07-12 15:06:16.660025] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.942 [2024-07-12 15:06:16.660049] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.942 [2024-07-12 15:06:16.660063] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.942 [2024-07-12 
15:06:16.660068] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481834f00 name raid_bdev1, state offline 00:17:50.942 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.942 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:51.201 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:51.201 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:51.201 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.201 15:06:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:51.459 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.459 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.718 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:51.718 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:51.976 15:06:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.234 [2024-07-12 
15:06:17.992080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.234 [2024-07-12 15:06:17.992646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.234 [2024-07-12 15:06:17.992671] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:52.234 [2024-07-12 15:06:17.992708] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:52.234 [2024-07-12 15:06:17.992719] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.234 [2024-07-12 15:06:17.992723] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481834c80 name raid_bdev1, state configuring 00:17:52.234 request: 00:17:52.234 { 00:17:52.234 "name": "raid_bdev1", 00:17:52.234 "raid_level": "raid1", 00:17:52.234 "base_bdevs": [ 00:17:52.234 "malloc1", 00:17:52.234 "malloc2" 00:17:52.234 ], 00:17:52.234 "superblock": false, 00:17:52.234 "method": "bdev_raid_create", 00:17:52.234 "req_id": 1 00:17:52.234 } 00:17:52.234 Got JSON-RPC error response 00:17:52.234 response: 00:17:52.234 { 00:17:52.234 "code": -17, 00:17:52.234 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.234 } 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.234 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:52.492 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:52.492 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:52.492 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.751 [2024-07-12 15:06:18.508104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.751 [2024-07-12 15:06:18.508176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.751 [2024-07-12 15:06:18.508205] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834780 00:17:52.751 [2024-07-12 15:06:18.508213] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.751 [2024-07-12 15:06:18.508840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.751 [2024-07-12 15:06:18.508862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.751 [2024-07-12 15:06:18.508897] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.751 [2024-07-12 15:06:18.508909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.751 pt1 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.751 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.009 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.009 "name": "raid_bdev1", 00:17:53.009 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:53.009 "strip_size_kb": 0, 00:17:53.009 "state": "configuring", 00:17:53.009 "raid_level": "raid1", 00:17:53.009 "superblock": true, 00:17:53.009 "num_base_bdevs": 2, 00:17:53.009 "num_base_bdevs_discovered": 1, 00:17:53.009 "num_base_bdevs_operational": 2, 00:17:53.009 "base_bdevs_list": [ 00:17:53.009 { 00:17:53.009 "name": "pt1", 00:17:53.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.009 "is_configured": true, 00:17:53.009 "data_offset": 256, 00:17:53.009 "data_size": 7936 00:17:53.009 }, 00:17:53.009 { 00:17:53.009 "name": null, 00:17:53.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.009 "is_configured": false, 00:17:53.009 "data_offset": 256, 00:17:53.009 "data_size": 7936 00:17:53.009 } 00:17:53.009 ] 00:17:53.009 }' 00:17:53.009 15:06:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.009 15:06:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.576 [2024-07-12 15:06:19.368142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.576 [2024-07-12 15:06:19.368215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.576 [2024-07-12 15:06:19.368243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834f00 00:17:53.576 [2024-07-12 15:06:19.368251] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:17:53.576 [2024-07-12 15:06:19.368361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.576 [2024-07-12 15:06:19.368372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.576 [2024-07-12 15:06:19.368395] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.576 [2024-07-12 15:06:19.368403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.576 [2024-07-12 15:06:19.368433] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e5481835180 00:17:53.576 [2024-07-12 15:06:19.368437] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.576 [2024-07-12 15:06:19.368457] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e5481897e20 00:17:53.576 [2024-07-12 15:06:19.368511] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e5481835180 00:17:53.576 [2024-07-12 15:06:19.368516] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e5481835180 00:17:53.576 [2024-07-12 15:06:19.368537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.576 pt2 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.576 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.142 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.142 "name": "raid_bdev1", 00:17:54.142 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:54.142 "strip_size_kb": 0, 00:17:54.142 "state": "online", 00:17:54.142 "raid_level": "raid1", 00:17:54.142 "superblock": true, 00:17:54.142 "num_base_bdevs": 2, 00:17:54.142 "num_base_bdevs_discovered": 2, 00:17:54.142 "num_base_bdevs_operational": 2, 00:17:54.142 "base_bdevs_list": [ 00:17:54.142 { 00:17:54.142 "name": "pt1", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.142 
"is_configured": true, 00:17:54.142 "data_offset": 256, 00:17:54.142 "data_size": 7936 00:17:54.142 }, 00:17:54.142 { 00:17:54.142 "name": "pt2", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.142 "is_configured": true, 00:17:54.142 "data_offset": 256, 00:17:54.142 "data_size": 7936 00:17:54.142 } 00:17:54.142 ] 00:17:54.142 }' 00:17:54.142 15:06:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.142 15:06:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:54.400 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:54.401 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:54.401 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:54.660 [2024-07-12 15:06:20.232227] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:54.660 "name": "raid_bdev1", 00:17:54.660 "aliases": [ 00:17:54.660 "484a3fb3-4060-11ef-b2a4-e9dca065e82e" 00:17:54.660 ], 00:17:54.660 "product_name": "Raid Volume", 00:17:54.660 "block_size": 4096, 00:17:54.660 "num_blocks": 7936, 00:17:54.660 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:54.660 "assigned_rate_limits": { 00:17:54.660 "rw_ios_per_sec": 0, 00:17:54.660 "rw_mbytes_per_sec": 0, 00:17:54.660 "r_mbytes_per_sec": 0, 00:17:54.660 "w_mbytes_per_sec": 0 00:17:54.660 }, 00:17:54.660 "claimed": false, 00:17:54.660 "zoned": false, 00:17:54.660 "supported_io_types": { 00:17:54.660 "read": true, 00:17:54.660 "write": true, 00:17:54.660 "unmap": false, 00:17:54.660 "flush": false, 00:17:54.660 "reset": true, 00:17:54.660 "nvme_admin": false, 00:17:54.660 "nvme_io": false, 00:17:54.660 "nvme_io_md": false, 00:17:54.660 "write_zeroes": true, 00:17:54.660 "zcopy": false, 00:17:54.660 "get_zone_info": false, 00:17:54.660 "zone_management": false, 00:17:54.660 "zone_append": false, 00:17:54.660 "compare": false, 00:17:54.660 "compare_and_write": false, 00:17:54.660 "abort": false, 00:17:54.660 "seek_hole": false, 00:17:54.660 "seek_data": false, 00:17:54.660 "copy": false, 00:17:54.660 "nvme_iov_md": false 00:17:54.660 }, 00:17:54.660 "memory_domains": [ 00:17:54.660 { 00:17:54.660 "dma_device_id": "system", 00:17:54.660 "dma_device_type": 1 00:17:54.660 }, 00:17:54.660 { 00:17:54.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.660 "dma_device_type": 2 00:17:54.660 }, 00:17:54.660 { 00:17:54.660 "dma_device_id": "system", 00:17:54.660 "dma_device_type": 1 00:17:54.660 }, 00:17:54.660 { 00:17:54.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.660 "dma_device_type": 2 00:17:54.660 } 00:17:54.660 ], 00:17:54.660 "driver_specific": { 00:17:54.660 "raid": { 00:17:54.660 "uuid": 
"484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:54.660 "strip_size_kb": 0, 00:17:54.660 "state": "online", 00:17:54.660 "raid_level": "raid1", 00:17:54.660 "superblock": true, 00:17:54.660 "num_base_bdevs": 2, 00:17:54.660 "num_base_bdevs_discovered": 2, 00:17:54.660 "num_base_bdevs_operational": 2, 00:17:54.660 "base_bdevs_list": [ 00:17:54.660 { 00:17:54.660 "name": "pt1", 00:17:54.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.660 "is_configured": true, 00:17:54.660 "data_offset": 256, 00:17:54.660 "data_size": 7936 00:17:54.660 }, 00:17:54.660 { 00:17:54.660 "name": "pt2", 00:17:54.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.660 "is_configured": true, 00:17:54.660 "data_offset": 256, 00:17:54.660 "data_size": 7936 00:17:54.660 } 00:17:54.660 ] 00:17:54.660 } 00:17:54.660 } 00:17:54.660 }' 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:54.660 pt2' 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.660 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:54.918 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.918 "name": "pt1", 00:17:54.919 "aliases": [ 00:17:54.919 "00000000-0000-0000-0000-000000000001" 00:17:54.919 ], 00:17:54.919 "product_name": "passthru", 00:17:54.919 "block_size": 4096, 00:17:54.919 "num_blocks": 8192, 00:17:54.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.919 "assigned_rate_limits": { 00:17:54.919 "rw_ios_per_sec": 0, 00:17:54.919 "rw_mbytes_per_sec": 0, 00:17:54.919 "r_mbytes_per_sec": 0, 00:17:54.919 "w_mbytes_per_sec": 0 00:17:54.919 }, 00:17:54.919 "claimed": true, 00:17:54.919 "claim_type": "exclusive_write", 00:17:54.919 "zoned": false, 00:17:54.919 "supported_io_types": { 00:17:54.919 "read": true, 00:17:54.919 "write": true, 00:17:54.919 "unmap": true, 00:17:54.919 "flush": true, 00:17:54.919 "reset": true, 00:17:54.919 "nvme_admin": false, 00:17:54.919 "nvme_io": false, 00:17:54.919 "nvme_io_md": false, 00:17:54.919 "write_zeroes": true, 00:17:54.919 "zcopy": true, 00:17:54.919 "get_zone_info": false, 00:17:54.919 "zone_management": false, 00:17:54.919 "zone_append": false, 00:17:54.919 "compare": false, 00:17:54.919 "compare_and_write": false, 00:17:54.919 "abort": true, 00:17:54.919 "seek_hole": false, 00:17:54.919 "seek_data": false, 00:17:54.919 "copy": true, 00:17:54.919 "nvme_iov_md": false 00:17:54.919 }, 00:17:54.919 "memory_domains": [ 00:17:54.919 { 00:17:54.919 "dma_device_id": "system", 00:17:54.919 "dma_device_type": 1 00:17:54.919 }, 00:17:54.919 { 00:17:54.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.919 "dma_device_type": 2 00:17:54.919 } 00:17:54.919 ], 00:17:54.919 "driver_specific": { 00:17:54.919 "passthru": { 00:17:54.919 "name": "pt1", 00:17:54.919 "base_bdev_name": "malloc1" 00:17:54.919 } 00:17:54.919 } 00:17:54.919 }' 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:54.919 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.177 "name": "pt2", 00:17:55.177 "aliases": [ 00:17:55.177 "00000000-0000-0000-0000-000000000002" 00:17:55.177 ], 00:17:55.177 "product_name": "passthru", 00:17:55.177 "block_size": 4096, 00:17:55.177 "num_blocks": 8192, 00:17:55.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.177 "assigned_rate_limits": { 00:17:55.177 "rw_ios_per_sec": 0, 00:17:55.177 "rw_mbytes_per_sec": 0, 00:17:55.177 "r_mbytes_per_sec": 0, 00:17:55.177 "w_mbytes_per_sec": 0 00:17:55.177 }, 00:17:55.177 "claimed": true, 00:17:55.177 "claim_type": "exclusive_write", 00:17:55.177 "zoned": false, 00:17:55.177 "supported_io_types": { 00:17:55.177 "read": true, 00:17:55.177 "write": true, 00:17:55.177 "unmap": true, 00:17:55.177 "flush": true, 00:17:55.177 "reset": true, 00:17:55.177 "nvme_admin": false, 00:17:55.177 "nvme_io": false, 00:17:55.177 "nvme_io_md": false, 00:17:55.177 "write_zeroes": true, 00:17:55.177 "zcopy": true, 00:17:55.177 "get_zone_info": false, 00:17:55.177 "zone_management": false, 00:17:55.177 "zone_append": false, 00:17:55.177 "compare": false, 00:17:55.177 "compare_and_write": false, 00:17:55.177 "abort": true, 00:17:55.177 "seek_hole": false, 00:17:55.177 "seek_data": false, 00:17:55.177 "copy": true, 00:17:55.177 "nvme_iov_md": false 00:17:55.177 }, 00:17:55.177 "memory_domains": [ 00:17:55.177 { 00:17:55.177 "dma_device_id": "system", 00:17:55.177 "dma_device_type": 1 00:17:55.177 }, 00:17:55.177 { 00:17:55.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.177 "dma_device_type": 2 00:17:55.177 } 00:17:55.177 ], 00:17:55.177 "driver_specific": { 00:17:55.177 "passthru": { 00:17:55.177 "name": "pt2", 00:17:55.177 "base_bdev_name": "malloc2" 00:17:55.177 } 00:17:55.177 } 00:17:55.177 }' 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 
4096 == 4096 ]] 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.177 15:06:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:55.435 [2024-07-12 15:06:21.112326] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.435 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 484a3fb3-4060-11ef-b2a4-e9dca065e82e '!=' 484a3fb3-4060-11ef-b2a4-e9dca065e82e ']' 00:17:55.435 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:55.435 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:55.435 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:55.435 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:55.694 [2024-07-12 15:06:21.348357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.694 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:55.974 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.974 "name": "raid_bdev1", 00:17:55.974 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:55.974 "strip_size_kb": 0, 00:17:55.974 "state": "online", 00:17:55.974 "raid_level": "raid1", 00:17:55.974 "superblock": true, 00:17:55.974 "num_base_bdevs": 2, 00:17:55.974 "num_base_bdevs_discovered": 1, 00:17:55.974 "num_base_bdevs_operational": 1, 00:17:55.974 "base_bdevs_list": [ 00:17:55.974 { 00:17:55.974 "name": null, 00:17:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.974 "is_configured": false, 00:17:55.974 "data_offset": 256, 00:17:55.974 "data_size": 7936 00:17:55.974 }, 00:17:55.974 { 00:17:55.974 "name": "pt2", 00:17:55.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.974 "is_configured": true, 00:17:55.974 "data_offset": 256, 00:17:55.974 "data_size": 7936 00:17:55.974 } 00:17:55.974 ] 00:17:55.974 }' 00:17:55.974 15:06:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.974 15:06:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.490 [2024-07-12 15:06:22.280397] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.490 [2024-07-12 15:06:22.280430] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.490 [2024-07-12 15:06:22.280469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.490 [2024-07-12 15:06:22.280483] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.490 [2024-07-12 15:06:22.280488] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481835180 name raid_bdev1, state offline 00:17:56.490 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.490 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:56.748 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:56.748 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:56.748 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:56.748 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:56.748 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:17:57.006 15:06:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.262 [2024-07-12 15:06:23.028479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.263 [2024-07-12 15:06:23.028564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.263 [2024-07-12 15:06:23.028583] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834f00 00:17:57.263 [2024-07-12 15:06:23.028592] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.263 [2024-07-12 15:06:23.029472] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.263 [2024-07-12 15:06:23.029501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.263 [2024-07-12 15:06:23.029550] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:57.263 [2024-07-12 15:06:23.029564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.263 [2024-07-12 15:06:23.029595] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e5481835180 00:17:57.263 [2024-07-12 15:06:23.029600] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.263 [2024-07-12 15:06:23.029629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e5481897e20 00:17:57.263 [2024-07-12 15:06:23.029682] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e5481835180 00:17:57.263 [2024-07-12 15:06:23.029686] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e5481835180 00:17:57.263 [2024-07-12 15:06:23.029709] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.263 pt2 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.263 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.520 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.520 "name": "raid_bdev1", 00:17:57.520 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:57.520 "strip_size_kb": 0, 00:17:57.520 
"state": "online", 00:17:57.520 "raid_level": "raid1", 00:17:57.520 "superblock": true, 00:17:57.520 "num_base_bdevs": 2, 00:17:57.520 "num_base_bdevs_discovered": 1, 00:17:57.520 "num_base_bdevs_operational": 1, 00:17:57.520 "base_bdevs_list": [ 00:17:57.520 { 00:17:57.520 "name": null, 00:17:57.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.520 "is_configured": false, 00:17:57.520 "data_offset": 256, 00:17:57.520 "data_size": 7936 00:17:57.520 }, 00:17:57.520 { 00:17:57.520 "name": "pt2", 00:17:57.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.520 "is_configured": true, 00:17:57.520 "data_offset": 256, 00:17:57.520 "data_size": 7936 00:17:57.520 } 00:17:57.520 ] 00:17:57.520 }' 00:17:57.520 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.520 15:06:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.085 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:58.085 [2024-07-12 15:06:23.912514] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.085 [2024-07-12 15:06:23.912542] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.085 [2024-07-12 15:06:23.912572] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.085 [2024-07-12 15:06:23.912586] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.085 [2024-07-12 15:06:23.912591] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481835180 name raid_bdev1, state offline 00:17:58.343 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.343 15:06:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:58.601 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:58.601 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:58.601 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:58.601 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.601 [2024-07-12 15:06:24.428597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.601 [2024-07-12 15:06:24.428690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.601 [2024-07-12 15:06:24.428703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e5481834c80 00:17:58.601 [2024-07-12 15:06:24.428712] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.601 [2024-07-12 15:06:24.429670] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.601 [2024-07-12 15:06:24.429709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.601 [2024-07-12 15:06:24.429739] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:58.601 [2024-07-12 15:06:24.429753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is 
claimed 00:17:58.601 [2024-07-12 15:06:24.429820] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:58.601 [2024-07-12 15:06:24.429825] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.601 [2024-07-12 15:06:24.429831] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481834780 name raid_bdev1, state configuring 00:17:58.601 [2024-07-12 15:06:24.429839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.601 [2024-07-12 15:06:24.429856] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e5481834780 00:17:58.601 [2024-07-12 15:06:24.429860] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.601 [2024-07-12 15:06:24.429881] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e5481897e20 00:17:58.601 [2024-07-12 15:06:24.429934] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e5481834780 00:17:58.601 [2024-07-12 15:06:24.429939] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e5481834780 00:17:58.601 [2024-07-12 15:06:24.429960] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.859 pt1 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.859 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.860 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.860 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.117 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.117 "name": "raid_bdev1", 00:17:59.117 "uuid": "484a3fb3-4060-11ef-b2a4-e9dca065e82e", 00:17:59.117 "strip_size_kb": 0, 00:17:59.117 "state": "online", 00:17:59.117 "raid_level": "raid1", 00:17:59.117 "superblock": true, 00:17:59.117 "num_base_bdevs": 2, 00:17:59.117 "num_base_bdevs_discovered": 1, 00:17:59.117 "num_base_bdevs_operational": 1, 00:17:59.117 "base_bdevs_list": [ 00:17:59.117 { 00:17:59.117 "name": null, 00:17:59.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.117 "is_configured": false, 
00:17:59.117 "data_offset": 256, 00:17:59.117 "data_size": 7936 00:17:59.117 }, 00:17:59.117 { 00:17:59.117 "name": "pt2", 00:17:59.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.117 "is_configured": true, 00:17:59.117 "data_offset": 256, 00:17:59.117 "data_size": 7936 00:17:59.117 } 00:17:59.117 ] 00:17:59.117 }' 00:17:59.117 15:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.117 15:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.374 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:59.374 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:59.631 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:59.632 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:59.632 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:59.889 [2024-07-12 15:06:25.592769] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 484a3fb3-4060-11ef-b2a4-e9dca065e82e '!=' 484a3fb3-4060-11ef-b2a4-e9dca065e82e ']' 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65839 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65839 ']' 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65839 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:17:59.889 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65839 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65839' 00:17:59.890 killing process with pid 65839 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65839 00:17:59.890 [2024-07-12 15:06:25.624892] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.890 [2024-07-12 15:06:25.624916] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.890 [2024-07-12 15:06:25.624930] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.890 [2024-07-12 15:06:25.624934] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e5481834780 name raid_bdev1, state offline 00:17:59.890 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65839 00:17:59.890 [2024-07-12 15:06:25.642644] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:18:00.148 15:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:18:00.148 00:18:00.148 real 0m13.577s 00:18:00.148 user 0m24.282s 00:18:00.148 sys 0m2.031s 00:18:00.148 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.148 ************************************ 00:18:00.148 END TEST raid_superblock_test_4k 00:18:00.148 15:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.148 ************************************ 00:18:00.148 15:06:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:00.148 15:06:25 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:18:00.148 15:06:25 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:18:00.148 15:06:25 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:00.148 15:06:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:00.148 15:06:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.148 15:06:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.148 ************************************ 00:18:00.148 START TEST raid_state_function_test_sb_md_separate 00:18:00.148 ************************************ 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66226 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:00.148 Process raid pid: 66226 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66226' 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66226 /var/tmp/spdk-raid.sock 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66226 ']' 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.148 15:06:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.148 [2024-07-12 15:06:25.964943] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:18:00.148 [2024-07-12 15:06:25.965185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:00.714 EAL: TSC is not safe to use in SMP mode 00:18:00.714 EAL: TSC is not invariant 00:18:00.714 [2024-07-12 15:06:26.508485] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.972 [2024-07-12 15:06:26.596699] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
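The state-function sequence that follows can be reproduced against this bdev_svc instance with roughly these calls (sketch only; -s enables the on-disk raid superblock, matching superblock_create_arg above, and per the bdev dumps later in the log the -m 32 malloc parameter gives each base bdev a 32-byte separate metadata area, i.e. md_size 32 with md_interleave false):

  # register the raid first: with no base bdevs present it stays in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # create the named base bdevs; as each is claimed the raid fills in, and with both claimed it transitions to "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'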
00:18:00.972 [2024-07-12 15:06:26.598894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.972 [2024-07-12 15:06:26.599670] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.972 [2024-07-12 15:06:26.599685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:01.539 [2024-07-12 15:06:27.284009] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.539 [2024-07-12 15:06:27.284080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.539 [2024-07-12 15:06:27.284086] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.539 [2024-07-12 15:06:27.284095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.539 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.797 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.797 "name": "Existed_Raid", 00:18:01.797 "uuid": "4fdf3150-4060-11ef-b2a4-e9dca065e82e", 00:18:01.797 "strip_size_kb": 0, 00:18:01.797 "state": "configuring", 00:18:01.797 "raid_level": "raid1", 00:18:01.797 "superblock": true, 00:18:01.797 "num_base_bdevs": 2, 00:18:01.797 "num_base_bdevs_discovered": 0, 00:18:01.797 "num_base_bdevs_operational": 2, 00:18:01.797 "base_bdevs_list": [ 00:18:01.797 { 00:18:01.797 "name": 
"BaseBdev1", 00:18:01.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.797 "is_configured": false, 00:18:01.797 "data_offset": 0, 00:18:01.797 "data_size": 0 00:18:01.797 }, 00:18:01.797 { 00:18:01.797 "name": "BaseBdev2", 00:18:01.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.797 "is_configured": false, 00:18:01.797 "data_offset": 0, 00:18:01.797 "data_size": 0 00:18:01.797 } 00:18:01.797 ] 00:18:01.797 }' 00:18:01.797 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.797 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.056 15:06:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.621 [2024-07-12 15:06:28.148017] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.621 [2024-07-12 15:06:28.148048] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ae621634500 name Existed_Raid, state configuring 00:18:02.621 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:02.621 [2024-07-12 15:06:28.428042] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.621 [2024-07-12 15:06:28.428096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.621 [2024-07-12 15:06:28.428101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.621 [2024-07-12 15:06:28.428110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.621 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:02.879 [2024-07-12 15:06:28.664992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.879 BaseBdev1 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:02.879 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.446 15:06:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.446 [ 00:18:03.446 { 00:18:03.446 "name": "BaseBdev1", 00:18:03.446 "aliases": [ 
00:18:03.446 "50b1c578-4060-11ef-b2a4-e9dca065e82e" 00:18:03.446 ], 00:18:03.446 "product_name": "Malloc disk", 00:18:03.446 "block_size": 4096, 00:18:03.446 "num_blocks": 8192, 00:18:03.446 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:03.446 "md_size": 32, 00:18:03.446 "md_interleave": false, 00:18:03.446 "dif_type": 0, 00:18:03.446 "assigned_rate_limits": { 00:18:03.446 "rw_ios_per_sec": 0, 00:18:03.446 "rw_mbytes_per_sec": 0, 00:18:03.446 "r_mbytes_per_sec": 0, 00:18:03.446 "w_mbytes_per_sec": 0 00:18:03.446 }, 00:18:03.446 "claimed": true, 00:18:03.446 "claim_type": "exclusive_write", 00:18:03.446 "zoned": false, 00:18:03.446 "supported_io_types": { 00:18:03.446 "read": true, 00:18:03.446 "write": true, 00:18:03.446 "unmap": true, 00:18:03.446 "flush": true, 00:18:03.446 "reset": true, 00:18:03.446 "nvme_admin": false, 00:18:03.446 "nvme_io": false, 00:18:03.446 "nvme_io_md": false, 00:18:03.446 "write_zeroes": true, 00:18:03.446 "zcopy": true, 00:18:03.446 "get_zone_info": false, 00:18:03.446 "zone_management": false, 00:18:03.446 "zone_append": false, 00:18:03.446 "compare": false, 00:18:03.446 "compare_and_write": false, 00:18:03.446 "abort": true, 00:18:03.446 "seek_hole": false, 00:18:03.446 "seek_data": false, 00:18:03.446 "copy": true, 00:18:03.446 "nvme_iov_md": false 00:18:03.446 }, 00:18:03.446 "memory_domains": [ 00:18:03.446 { 00:18:03.446 "dma_device_id": "system", 00:18:03.446 "dma_device_type": 1 00:18:03.446 }, 00:18:03.446 { 00:18:03.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.446 "dma_device_type": 2 00:18:03.446 } 00:18:03.446 ], 00:18:03.446 "driver_specific": {} 00:18:03.446 } 00:18:03.446 ] 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.446 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.705 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:18:03.705 "name": "Existed_Raid", 00:18:03.705 "uuid": "508dc21f-4060-11ef-b2a4-e9dca065e82e", 00:18:03.705 "strip_size_kb": 0, 00:18:03.705 "state": "configuring", 00:18:03.705 "raid_level": "raid1", 00:18:03.705 "superblock": true, 00:18:03.705 "num_base_bdevs": 2, 00:18:03.705 "num_base_bdevs_discovered": 1, 00:18:03.705 "num_base_bdevs_operational": 2, 00:18:03.705 "base_bdevs_list": [ 00:18:03.705 { 00:18:03.705 "name": "BaseBdev1", 00:18:03.705 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:03.705 "is_configured": true, 00:18:03.705 "data_offset": 256, 00:18:03.705 "data_size": 7936 00:18:03.705 }, 00:18:03.705 { 00:18:03.705 "name": "BaseBdev2", 00:18:03.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.705 "is_configured": false, 00:18:03.705 "data_offset": 0, 00:18:03.705 "data_size": 0 00:18:03.705 } 00:18:03.705 ] 00:18:03.705 }' 00:18:03.705 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.705 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.965 15:06:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.226 [2024-07-12 15:06:30.044095] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.226 [2024-07-12 15:06:30.044126] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ae621634500 name Existed_Raid, state configuring 00:18:04.484 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:04.484 [2024-07-12 15:06:30.316122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.743 [2024-07-12 15:06:30.316941] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.743 [2024-07-12 15:06:30.316985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.743 15:06:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.743 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.001 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.001 "name": "Existed_Raid", 00:18:05.001 "uuid": "51addb30-4060-11ef-b2a4-e9dca065e82e", 00:18:05.001 "strip_size_kb": 0, 00:18:05.001 "state": "configuring", 00:18:05.001 "raid_level": "raid1", 00:18:05.001 "superblock": true, 00:18:05.001 "num_base_bdevs": 2, 00:18:05.001 "num_base_bdevs_discovered": 1, 00:18:05.001 "num_base_bdevs_operational": 2, 00:18:05.001 "base_bdevs_list": [ 00:18:05.001 { 00:18:05.001 "name": "BaseBdev1", 00:18:05.001 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:05.001 "is_configured": true, 00:18:05.001 "data_offset": 256, 00:18:05.001 "data_size": 7936 00:18:05.001 }, 00:18:05.001 { 00:18:05.001 "name": "BaseBdev2", 00:18:05.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.001 "is_configured": false, 00:18:05.001 "data_offset": 0, 00:18:05.001 "data_size": 0 00:18:05.001 } 00:18:05.001 ] 00:18:05.001 }' 00:18:05.001 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.001 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.259 15:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:05.518 [2024-07-12 15:06:31.244270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.518 [2024-07-12 15:06:31.244331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ae621634a00 00:18:05.518 [2024-07-12 15:06:31.244337] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.518 [2024-07-12 15:06:31.244358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2ae621697e20 00:18:05.518 [2024-07-12 15:06:31.244388] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ae621634a00 00:18:05.518 [2024-07-12 15:06:31.244392] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2ae621634a00 00:18:05.518 [2024-07-12 15:06:31.244407] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.518 BaseBdev2 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 
-- # [[ -z '' ]] 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:05.518 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.776 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.033 [ 00:18:06.033 { 00:18:06.034 "name": "BaseBdev2", 00:18:06.034 "aliases": [ 00:18:06.034 "523b7779-4060-11ef-b2a4-e9dca065e82e" 00:18:06.034 ], 00:18:06.034 "product_name": "Malloc disk", 00:18:06.034 "block_size": 4096, 00:18:06.034 "num_blocks": 8192, 00:18:06.034 "uuid": "523b7779-4060-11ef-b2a4-e9dca065e82e", 00:18:06.034 "md_size": 32, 00:18:06.034 "md_interleave": false, 00:18:06.034 "dif_type": 0, 00:18:06.034 "assigned_rate_limits": { 00:18:06.034 "rw_ios_per_sec": 0, 00:18:06.034 "rw_mbytes_per_sec": 0, 00:18:06.034 "r_mbytes_per_sec": 0, 00:18:06.034 "w_mbytes_per_sec": 0 00:18:06.034 }, 00:18:06.034 "claimed": true, 00:18:06.034 "claim_type": "exclusive_write", 00:18:06.034 "zoned": false, 00:18:06.034 "supported_io_types": { 00:18:06.034 "read": true, 00:18:06.034 "write": true, 00:18:06.034 "unmap": true, 00:18:06.034 "flush": true, 00:18:06.034 "reset": true, 00:18:06.034 "nvme_admin": false, 00:18:06.034 "nvme_io": false, 00:18:06.034 "nvme_io_md": false, 00:18:06.034 "write_zeroes": true, 00:18:06.034 "zcopy": true, 00:18:06.034 "get_zone_info": false, 00:18:06.034 "zone_management": false, 00:18:06.034 "zone_append": false, 00:18:06.034 "compare": false, 00:18:06.034 "compare_and_write": false, 00:18:06.034 "abort": true, 00:18:06.034 "seek_hole": false, 00:18:06.034 "seek_data": false, 00:18:06.034 "copy": true, 00:18:06.034 "nvme_iov_md": false 00:18:06.034 }, 00:18:06.034 "memory_domains": [ 00:18:06.034 { 00:18:06.034 "dma_device_id": "system", 00:18:06.034 "dma_device_type": 1 00:18:06.034 }, 00:18:06.034 { 00:18:06.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.034 "dma_device_type": 2 00:18:06.034 } 00:18:06.034 ], 00:18:06.034 "driver_specific": {} 00:18:06.034 } 00:18:06.034 ] 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.034 15:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.291 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.291 "name": "Existed_Raid", 00:18:06.291 "uuid": "51addb30-4060-11ef-b2a4-e9dca065e82e", 00:18:06.291 "strip_size_kb": 0, 00:18:06.291 "state": "online", 00:18:06.291 "raid_level": "raid1", 00:18:06.291 "superblock": true, 00:18:06.291 "num_base_bdevs": 2, 00:18:06.291 "num_base_bdevs_discovered": 2, 00:18:06.291 "num_base_bdevs_operational": 2, 00:18:06.291 "base_bdevs_list": [ 00:18:06.291 { 00:18:06.291 "name": "BaseBdev1", 00:18:06.291 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:06.291 "is_configured": true, 00:18:06.291 "data_offset": 256, 00:18:06.291 "data_size": 7936 00:18:06.291 }, 00:18:06.291 { 00:18:06.291 "name": "BaseBdev2", 00:18:06.291 "uuid": "523b7779-4060-11ef-b2a4-e9dca065e82e", 00:18:06.291 "is_configured": true, 00:18:06.291 "data_offset": 256, 00:18:06.291 "data_size": 7936 00:18:06.291 } 00:18:06.291 ] 00:18:06.291 }' 00:18:06.291 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.291 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.858 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:06.859 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:06.859 [2024-07-12 15:06:32.672267] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.117 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:07.117 "name": "Existed_Raid", 00:18:07.117 "aliases": [ 00:18:07.117 "51addb30-4060-11ef-b2a4-e9dca065e82e" 00:18:07.117 ], 00:18:07.117 "product_name": "Raid Volume", 00:18:07.117 "block_size": 4096, 00:18:07.117 "num_blocks": 7936, 
00:18:07.117 "uuid": "51addb30-4060-11ef-b2a4-e9dca065e82e", 00:18:07.117 "md_size": 32, 00:18:07.117 "md_interleave": false, 00:18:07.117 "dif_type": 0, 00:18:07.117 "assigned_rate_limits": { 00:18:07.117 "rw_ios_per_sec": 0, 00:18:07.117 "rw_mbytes_per_sec": 0, 00:18:07.117 "r_mbytes_per_sec": 0, 00:18:07.117 "w_mbytes_per_sec": 0 00:18:07.117 }, 00:18:07.117 "claimed": false, 00:18:07.117 "zoned": false, 00:18:07.117 "supported_io_types": { 00:18:07.117 "read": true, 00:18:07.117 "write": true, 00:18:07.117 "unmap": false, 00:18:07.117 "flush": false, 00:18:07.117 "reset": true, 00:18:07.117 "nvme_admin": false, 00:18:07.117 "nvme_io": false, 00:18:07.117 "nvme_io_md": false, 00:18:07.117 "write_zeroes": true, 00:18:07.117 "zcopy": false, 00:18:07.117 "get_zone_info": false, 00:18:07.117 "zone_management": false, 00:18:07.117 "zone_append": false, 00:18:07.117 "compare": false, 00:18:07.117 "compare_and_write": false, 00:18:07.117 "abort": false, 00:18:07.117 "seek_hole": false, 00:18:07.117 "seek_data": false, 00:18:07.117 "copy": false, 00:18:07.117 "nvme_iov_md": false 00:18:07.117 }, 00:18:07.117 "memory_domains": [ 00:18:07.117 { 00:18:07.117 "dma_device_id": "system", 00:18:07.117 "dma_device_type": 1 00:18:07.117 }, 00:18:07.117 { 00:18:07.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.117 "dma_device_type": 2 00:18:07.117 }, 00:18:07.117 { 00:18:07.117 "dma_device_id": "system", 00:18:07.117 "dma_device_type": 1 00:18:07.117 }, 00:18:07.117 { 00:18:07.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.117 "dma_device_type": 2 00:18:07.117 } 00:18:07.117 ], 00:18:07.117 "driver_specific": { 00:18:07.117 "raid": { 00:18:07.117 "uuid": "51addb30-4060-11ef-b2a4-e9dca065e82e", 00:18:07.117 "strip_size_kb": 0, 00:18:07.117 "state": "online", 00:18:07.117 "raid_level": "raid1", 00:18:07.117 "superblock": true, 00:18:07.117 "num_base_bdevs": 2, 00:18:07.117 "num_base_bdevs_discovered": 2, 00:18:07.117 "num_base_bdevs_operational": 2, 00:18:07.117 "base_bdevs_list": [ 00:18:07.117 { 00:18:07.117 "name": "BaseBdev1", 00:18:07.117 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:07.117 "is_configured": true, 00:18:07.117 "data_offset": 256, 00:18:07.117 "data_size": 7936 00:18:07.117 }, 00:18:07.117 { 00:18:07.117 "name": "BaseBdev2", 00:18:07.117 "uuid": "523b7779-4060-11ef-b2a4-e9dca065e82e", 00:18:07.117 "is_configured": true, 00:18:07.117 "data_offset": 256, 00:18:07.117 "data_size": 7936 00:18:07.117 } 00:18:07.117 ] 00:18:07.117 } 00:18:07.117 } 00:18:07.117 }' 00:18:07.118 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.118 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:07.118 BaseBdev2' 00:18:07.118 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.118 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:07.118 15:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.376 "name": "BaseBdev1", 00:18:07.376 "aliases": [ 00:18:07.376 "50b1c578-4060-11ef-b2a4-e9dca065e82e" 
00:18:07.376 ], 00:18:07.376 "product_name": "Malloc disk", 00:18:07.376 "block_size": 4096, 00:18:07.376 "num_blocks": 8192, 00:18:07.376 "uuid": "50b1c578-4060-11ef-b2a4-e9dca065e82e", 00:18:07.376 "md_size": 32, 00:18:07.376 "md_interleave": false, 00:18:07.376 "dif_type": 0, 00:18:07.376 "assigned_rate_limits": { 00:18:07.376 "rw_ios_per_sec": 0, 00:18:07.376 "rw_mbytes_per_sec": 0, 00:18:07.376 "r_mbytes_per_sec": 0, 00:18:07.376 "w_mbytes_per_sec": 0 00:18:07.376 }, 00:18:07.376 "claimed": true, 00:18:07.376 "claim_type": "exclusive_write", 00:18:07.376 "zoned": false, 00:18:07.376 "supported_io_types": { 00:18:07.376 "read": true, 00:18:07.376 "write": true, 00:18:07.376 "unmap": true, 00:18:07.376 "flush": true, 00:18:07.376 "reset": true, 00:18:07.376 "nvme_admin": false, 00:18:07.376 "nvme_io": false, 00:18:07.376 "nvme_io_md": false, 00:18:07.376 "write_zeroes": true, 00:18:07.376 "zcopy": true, 00:18:07.376 "get_zone_info": false, 00:18:07.376 "zone_management": false, 00:18:07.376 "zone_append": false, 00:18:07.376 "compare": false, 00:18:07.376 "compare_and_write": false, 00:18:07.376 "abort": true, 00:18:07.376 "seek_hole": false, 00:18:07.376 "seek_data": false, 00:18:07.376 "copy": true, 00:18:07.376 "nvme_iov_md": false 00:18:07.376 }, 00:18:07.376 "memory_domains": [ 00:18:07.376 { 00:18:07.376 "dma_device_id": "system", 00:18:07.376 "dma_device_type": 1 00:18:07.376 }, 00:18:07.376 { 00:18:07.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.376 "dma_device_type": 2 00:18:07.376 } 00:18:07.376 ], 00:18:07.376 "driver_specific": {} 00:18:07.376 }' 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:07.376 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:18:07.634 "name": "BaseBdev2", 00:18:07.634 "aliases": [ 00:18:07.634 "523b7779-4060-11ef-b2a4-e9dca065e82e" 00:18:07.634 ], 00:18:07.634 "product_name": "Malloc disk", 00:18:07.634 "block_size": 4096, 00:18:07.634 "num_blocks": 8192, 00:18:07.634 "uuid": "523b7779-4060-11ef-b2a4-e9dca065e82e", 00:18:07.634 "md_size": 32, 00:18:07.634 "md_interleave": false, 00:18:07.634 "dif_type": 0, 00:18:07.634 "assigned_rate_limits": { 00:18:07.634 "rw_ios_per_sec": 0, 00:18:07.634 "rw_mbytes_per_sec": 0, 00:18:07.634 "r_mbytes_per_sec": 0, 00:18:07.634 "w_mbytes_per_sec": 0 00:18:07.634 }, 00:18:07.634 "claimed": true, 00:18:07.634 "claim_type": "exclusive_write", 00:18:07.634 "zoned": false, 00:18:07.634 "supported_io_types": { 00:18:07.634 "read": true, 00:18:07.634 "write": true, 00:18:07.634 "unmap": true, 00:18:07.634 "flush": true, 00:18:07.634 "reset": true, 00:18:07.634 "nvme_admin": false, 00:18:07.634 "nvme_io": false, 00:18:07.634 "nvme_io_md": false, 00:18:07.634 "write_zeroes": true, 00:18:07.634 "zcopy": true, 00:18:07.634 "get_zone_info": false, 00:18:07.634 "zone_management": false, 00:18:07.634 "zone_append": false, 00:18:07.634 "compare": false, 00:18:07.634 "compare_and_write": false, 00:18:07.634 "abort": true, 00:18:07.634 "seek_hole": false, 00:18:07.634 "seek_data": false, 00:18:07.634 "copy": true, 00:18:07.634 "nvme_iov_md": false 00:18:07.634 }, 00:18:07.634 "memory_domains": [ 00:18:07.634 { 00:18:07.634 "dma_device_id": "system", 00:18:07.634 "dma_device_type": 1 00:18:07.634 }, 00:18:07.634 { 00:18:07.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.634 "dma_device_type": 2 00:18:07.634 } 00:18:07.634 ], 00:18:07.634 "driver_specific": {} 00:18:07.634 }' 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:07.634 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:07.916 [2024-07-12 15:06:33.636280] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local 
expected_state 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.916 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.175 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.175 "name": "Existed_Raid", 00:18:08.175 "uuid": "51addb30-4060-11ef-b2a4-e9dca065e82e", 00:18:08.175 "strip_size_kb": 0, 00:18:08.175 "state": "online", 00:18:08.175 "raid_level": "raid1", 00:18:08.175 "superblock": true, 00:18:08.175 "num_base_bdevs": 2, 00:18:08.175 "num_base_bdevs_discovered": 1, 00:18:08.175 "num_base_bdevs_operational": 1, 00:18:08.175 "base_bdevs_list": [ 00:18:08.175 { 00:18:08.175 "name": null, 00:18:08.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.175 "is_configured": false, 00:18:08.175 "data_offset": 256, 00:18:08.175 "data_size": 7936 00:18:08.175 }, 00:18:08.175 { 00:18:08.175 "name": "BaseBdev2", 00:18:08.175 "uuid": "523b7779-4060-11ef-b2a4-e9dca065e82e", 00:18:08.175 "is_configured": true, 00:18:08.175 "data_offset": 256, 00:18:08.175 "data_size": 7936 00:18:08.175 } 00:18:08.175 ] 00:18:08.175 }' 00:18:08.175 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.176 15:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.434 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:08.434 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:08.434 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.434 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:08.692 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:08.692 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:08.692 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:08.951 [2024-07-12 15:06:34.754519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.951 [2024-07-12 15:06:34.754580] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.951 [2024-07-12 15:06:34.760777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.951 [2024-07-12 15:06:34.760800] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.951 [2024-07-12 15:06:34.760805] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ae621634a00 name Existed_Raid, state offline 00:18:08.951 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:08.951 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:08.951 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.951 15:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66226 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66226 ']' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 66226 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66226 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = 
sudo ']' 00:18:09.209 killing process with pid 66226 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66226' 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 66226 00:18:09.209 [2024-07-12 15:06:35.025759] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.209 [2024-07-12 15:06:35.025793] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.209 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 66226 00:18:09.468 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:18:09.468 00:18:09.468 real 0m9.259s 00:18:09.468 user 0m16.340s 00:18:09.468 sys 0m1.421s 00:18:09.468 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.468 15:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.468 ************************************ 00:18:09.468 END TEST raid_state_function_test_sb_md_separate 00:18:09.468 ************************************ 00:18:09.468 15:06:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:09.468 15:06:35 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:09.468 15:06:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:09.468 15:06:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.468 15:06:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.468 ************************************ 00:18:09.468 START TEST raid_superblock_test_md_separate 00:18:09.468 ************************************ 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:09.468 
15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66500 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66500 /var/tmp/spdk-raid.sock 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66500 ']' 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.468 15:06:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.468 [2024-07-12 15:06:35.269915] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:18:09.468 [2024-07-12 15:06:35.270171] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:10.035 EAL: TSC is not safe to use in SMP mode 00:18:10.035 EAL: TSC is not invariant 00:18:10.035 [2024-07-12 15:06:35.816073] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.293 [2024-07-12 15:06:35.903182] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
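For orientation, the raid_superblock_test_md_separate flow recorded in the xtrace output below reduces to the following RPC sequence. This is a minimal sketch assembled only from commands that appear in the log; the script path and socket name are taken from the log itself, and it assumes the bdev_svc app started above (raid_pid 66500) is already listening on the RPC socket.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Base bdev 1: 32 MB malloc bdev, 4096-byte blocks, 32 bytes of metadata per block,
  # wrapped in a passthru bdev with a fixed UUID so it can serve as a raid base bdev.
  $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # Base bdev 2: same shape as base bdev 1.
  $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b malloc2
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid1 volume over pt1/pt2; -s requests an on-disk superblock (hence "superblock": true in the dumps below).
  $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # Verify the result: expect state "online" with 2 of 2 base bdevs discovered.
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The log then dumps the raid bdev and both passthru bdevs via bdev_get_bdevs, deletes the passthru bdevs, and re-creates the array from the superblocks to check that the volume comes back online.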
00:18:10.293 [2024-07-12 15:06:35.905340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.293 [2024-07-12 15:06:35.906116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.293 [2024-07-12 15:06:35.906131] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.552 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:10.811 malloc1 00:18:10.811 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.070 [2024-07-12 15:06:36.882583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.070 [2024-07-12 15:06:36.882659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.070 [2024-07-12 15:06:36.882672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34780 00:18:11.070 [2024-07-12 15:06:36.882680] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.070 [2024-07-12 15:06:36.883550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.070 [2024-07-12 15:06:36.883582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.070 pt1 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:11.342 15:06:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:11.342 15:06:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:11.342 malloc2 00:18:11.342 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.908 [2024-07-12 15:06:37.458604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.908 [2024-07-12 15:06:37.458660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.908 [2024-07-12 15:06:37.458672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34c80 00:18:11.908 [2024-07-12 15:06:37.458680] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.908 [2024-07-12 15:06:37.459302] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.908 [2024-07-12 15:06:37.459326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.908 pt2 00:18:11.908 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:11.908 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:11.908 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:11.908 [2024-07-12 15:06:37.738628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.908 [2024-07-12 15:06:37.739222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.908 [2024-07-12 15:06:37.739292] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378655a34f00 00:18:11.908 [2024-07-12 15:06:37.739299] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:11.908 [2024-07-12 15:06:37.739339] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378655a97e20 00:18:11.908 [2024-07-12 15:06:37.739367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378655a34f00 00:18:11.908 [2024-07-12 15:06:37.739371] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378655a34f00 00:18:11.908 [2024-07-12 15:06:37.739388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.166 
15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.166 15:06:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.424 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.424 "name": "raid_bdev1", 00:18:12.424 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:12.424 "strip_size_kb": 0, 00:18:12.424 "state": "online", 00:18:12.424 "raid_level": "raid1", 00:18:12.424 "superblock": true, 00:18:12.424 "num_base_bdevs": 2, 00:18:12.424 "num_base_bdevs_discovered": 2, 00:18:12.424 "num_base_bdevs_operational": 2, 00:18:12.424 "base_bdevs_list": [ 00:18:12.424 { 00:18:12.424 "name": "pt1", 00:18:12.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.424 "is_configured": true, 00:18:12.424 "data_offset": 256, 00:18:12.424 "data_size": 7936 00:18:12.424 }, 00:18:12.424 { 00:18:12.424 "name": "pt2", 00:18:12.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.424 "is_configured": true, 00:18:12.424 "data_offset": 256, 00:18:12.424 "data_size": 7936 00:18:12.424 } 00:18:12.424 ] 00:18:12.424 }' 00:18:12.424 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.424 15:06:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:12.682 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:18:12.683 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.683 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:12.941 [2024-07-12 15:06:38.574707] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.941 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:12.941 "name": "raid_bdev1", 00:18:12.941 "aliases": [ 00:18:12.941 "561a70ba-4060-11ef-b2a4-e9dca065e82e" 00:18:12.941 ], 00:18:12.941 "product_name": "Raid Volume", 00:18:12.941 "block_size": 
4096, 00:18:12.941 "num_blocks": 7936, 00:18:12.941 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:12.941 "md_size": 32, 00:18:12.941 "md_interleave": false, 00:18:12.941 "dif_type": 0, 00:18:12.941 "assigned_rate_limits": { 00:18:12.941 "rw_ios_per_sec": 0, 00:18:12.941 "rw_mbytes_per_sec": 0, 00:18:12.941 "r_mbytes_per_sec": 0, 00:18:12.941 "w_mbytes_per_sec": 0 00:18:12.941 }, 00:18:12.941 "claimed": false, 00:18:12.941 "zoned": false, 00:18:12.941 "supported_io_types": { 00:18:12.941 "read": true, 00:18:12.941 "write": true, 00:18:12.941 "unmap": false, 00:18:12.941 "flush": false, 00:18:12.941 "reset": true, 00:18:12.941 "nvme_admin": false, 00:18:12.941 "nvme_io": false, 00:18:12.941 "nvme_io_md": false, 00:18:12.941 "write_zeroes": true, 00:18:12.941 "zcopy": false, 00:18:12.941 "get_zone_info": false, 00:18:12.941 "zone_management": false, 00:18:12.941 "zone_append": false, 00:18:12.941 "compare": false, 00:18:12.941 "compare_and_write": false, 00:18:12.941 "abort": false, 00:18:12.941 "seek_hole": false, 00:18:12.941 "seek_data": false, 00:18:12.941 "copy": false, 00:18:12.941 "nvme_iov_md": false 00:18:12.941 }, 00:18:12.941 "memory_domains": [ 00:18:12.941 { 00:18:12.941 "dma_device_id": "system", 00:18:12.941 "dma_device_type": 1 00:18:12.941 }, 00:18:12.941 { 00:18:12.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.941 "dma_device_type": 2 00:18:12.941 }, 00:18:12.941 { 00:18:12.941 "dma_device_id": "system", 00:18:12.941 "dma_device_type": 1 00:18:12.941 }, 00:18:12.941 { 00:18:12.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.941 "dma_device_type": 2 00:18:12.941 } 00:18:12.941 ], 00:18:12.941 "driver_specific": { 00:18:12.941 "raid": { 00:18:12.941 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:12.941 "strip_size_kb": 0, 00:18:12.941 "state": "online", 00:18:12.941 "raid_level": "raid1", 00:18:12.941 "superblock": true, 00:18:12.941 "num_base_bdevs": 2, 00:18:12.941 "num_base_bdevs_discovered": 2, 00:18:12.941 "num_base_bdevs_operational": 2, 00:18:12.941 "base_bdevs_list": [ 00:18:12.941 { 00:18:12.942 "name": "pt1", 00:18:12.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.942 "is_configured": true, 00:18:12.942 "data_offset": 256, 00:18:12.942 "data_size": 7936 00:18:12.942 }, 00:18:12.942 { 00:18:12.942 "name": "pt2", 00:18:12.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.942 "is_configured": true, 00:18:12.942 "data_offset": 256, 00:18:12.942 "data_size": 7936 00:18:12.942 } 00:18:12.942 ] 00:18:12.942 } 00:18:12.942 } 00:18:12.942 }' 00:18:12.942 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.942 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:12.942 pt2' 00:18:12.942 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.942 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:12.942 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.199 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.199 "name": "pt1", 00:18:13.199 "aliases": [ 00:18:13.199 "00000000-0000-0000-0000-000000000001" 00:18:13.199 ], 00:18:13.199 "product_name": 
"passthru", 00:18:13.199 "block_size": 4096, 00:18:13.199 "num_blocks": 8192, 00:18:13.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.199 "md_size": 32, 00:18:13.199 "md_interleave": false, 00:18:13.199 "dif_type": 0, 00:18:13.199 "assigned_rate_limits": { 00:18:13.199 "rw_ios_per_sec": 0, 00:18:13.199 "rw_mbytes_per_sec": 0, 00:18:13.199 "r_mbytes_per_sec": 0, 00:18:13.199 "w_mbytes_per_sec": 0 00:18:13.199 }, 00:18:13.199 "claimed": true, 00:18:13.199 "claim_type": "exclusive_write", 00:18:13.199 "zoned": false, 00:18:13.199 "supported_io_types": { 00:18:13.199 "read": true, 00:18:13.199 "write": true, 00:18:13.199 "unmap": true, 00:18:13.199 "flush": true, 00:18:13.199 "reset": true, 00:18:13.199 "nvme_admin": false, 00:18:13.199 "nvme_io": false, 00:18:13.199 "nvme_io_md": false, 00:18:13.199 "write_zeroes": true, 00:18:13.199 "zcopy": true, 00:18:13.199 "get_zone_info": false, 00:18:13.199 "zone_management": false, 00:18:13.199 "zone_append": false, 00:18:13.200 "compare": false, 00:18:13.200 "compare_and_write": false, 00:18:13.200 "abort": true, 00:18:13.200 "seek_hole": false, 00:18:13.200 "seek_data": false, 00:18:13.200 "copy": true, 00:18:13.200 "nvme_iov_md": false 00:18:13.200 }, 00:18:13.200 "memory_domains": [ 00:18:13.200 { 00:18:13.200 "dma_device_id": "system", 00:18:13.200 "dma_device_type": 1 00:18:13.200 }, 00:18:13.200 { 00:18:13.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.200 "dma_device_type": 2 00:18:13.200 } 00:18:13.200 ], 00:18:13.200 "driver_specific": { 00:18:13.200 "passthru": { 00:18:13.200 "name": "pt1", 00:18:13.200 "base_bdev_name": "malloc1" 00:18:13.200 } 00:18:13.200 } 00:18:13.200 }' 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:13.200 15:06:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.459 "name": 
"pt2", 00:18:13.459 "aliases": [ 00:18:13.459 "00000000-0000-0000-0000-000000000002" 00:18:13.459 ], 00:18:13.459 "product_name": "passthru", 00:18:13.459 "block_size": 4096, 00:18:13.459 "num_blocks": 8192, 00:18:13.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.459 "md_size": 32, 00:18:13.459 "md_interleave": false, 00:18:13.459 "dif_type": 0, 00:18:13.459 "assigned_rate_limits": { 00:18:13.459 "rw_ios_per_sec": 0, 00:18:13.459 "rw_mbytes_per_sec": 0, 00:18:13.459 "r_mbytes_per_sec": 0, 00:18:13.459 "w_mbytes_per_sec": 0 00:18:13.459 }, 00:18:13.459 "claimed": true, 00:18:13.459 "claim_type": "exclusive_write", 00:18:13.459 "zoned": false, 00:18:13.459 "supported_io_types": { 00:18:13.459 "read": true, 00:18:13.459 "write": true, 00:18:13.459 "unmap": true, 00:18:13.459 "flush": true, 00:18:13.459 "reset": true, 00:18:13.459 "nvme_admin": false, 00:18:13.459 "nvme_io": false, 00:18:13.459 "nvme_io_md": false, 00:18:13.459 "write_zeroes": true, 00:18:13.459 "zcopy": true, 00:18:13.459 "get_zone_info": false, 00:18:13.459 "zone_management": false, 00:18:13.459 "zone_append": false, 00:18:13.459 "compare": false, 00:18:13.459 "compare_and_write": false, 00:18:13.459 "abort": true, 00:18:13.459 "seek_hole": false, 00:18:13.459 "seek_data": false, 00:18:13.459 "copy": true, 00:18:13.459 "nvme_iov_md": false 00:18:13.459 }, 00:18:13.459 "memory_domains": [ 00:18:13.459 { 00:18:13.459 "dma_device_id": "system", 00:18:13.459 "dma_device_type": 1 00:18:13.459 }, 00:18:13.459 { 00:18:13.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.459 "dma_device_type": 2 00:18:13.459 } 00:18:13.459 ], 00:18:13.459 "driver_specific": { 00:18:13.459 "passthru": { 00:18:13.459 "name": "pt2", 00:18:13.459 "base_bdev_name": "malloc2" 00:18:13.459 } 00:18:13.459 } 00:18:13.459 }' 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.459 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:13.718 [2024-07-12 15:06:39.450740] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:13.718 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=561a70ba-4060-11ef-b2a4-e9dca065e82e 00:18:13.718 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 561a70ba-4060-11ef-b2a4-e9dca065e82e ']' 00:18:13.718 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:13.975 [2024-07-12 15:06:39.702711] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.975 [2024-07-12 15:06:39.702738] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.975 [2024-07-12 15:06:39.702761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.976 [2024-07-12 15:06:39.702776] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.976 [2024-07-12 15:06:39.702780] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a34f00 name raid_bdev1, state offline 00:18:13.976 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.976 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:14.235 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:14.235 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:14.235 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.235 15:06:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:14.493 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.493 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:14.751 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:14.751 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.010 
15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:15.010 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:15.269 [2024-07-12 15:06:40.934801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:15.269 [2024-07-12 15:06:40.935394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:15.269 [2024-07-12 15:06:40.935420] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:15.269 [2024-07-12 15:06:40.935453] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:15.269 [2024-07-12 15:06:40.935463] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.269 [2024-07-12 15:06:40.935467] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a34c80 name raid_bdev1, state configuring 00:18:15.269 request: 00:18:15.269 { 00:18:15.269 "name": "raid_bdev1", 00:18:15.269 "raid_level": "raid1", 00:18:15.269 "base_bdevs": [ 00:18:15.269 "malloc1", 00:18:15.269 "malloc2" 00:18:15.269 ], 00:18:15.269 "superblock": false, 00:18:15.269 "method": "bdev_raid_create", 00:18:15.269 "req_id": 1 00:18:15.269 } 00:18:15.269 Got JSON-RPC error response 00:18:15.269 response: 00:18:15.269 { 00:18:15.269 "code": -17, 00:18:15.269 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:15.269 } 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.269 15:06:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:15.528 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:15.528 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:18:15.528 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:15.790 [2024-07-12 15:06:41.422816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:15.790 [2024-07-12 15:06:41.422889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.790 [2024-07-12 15:06:41.422900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34780 00:18:15.790 [2024-07-12 15:06:41.422908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.790 [2024-07-12 15:06:41.423526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.790 [2024-07-12 15:06:41.423554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:15.790 [2024-07-12 15:06:41.423586] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:15.790 [2024-07-12 15:06:41.423599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:15.790 pt1 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.790 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.049 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.049 "name": "raid_bdev1", 00:18:16.049 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:16.049 "strip_size_kb": 0, 00:18:16.049 "state": "configuring", 00:18:16.049 "raid_level": "raid1", 00:18:16.049 "superblock": true, 00:18:16.049 "num_base_bdevs": 2, 00:18:16.049 "num_base_bdevs_discovered": 1, 00:18:16.049 "num_base_bdevs_operational": 2, 00:18:16.049 "base_bdevs_list": [ 00:18:16.049 { 00:18:16.049 "name": "pt1", 00:18:16.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:16.049 "is_configured": true, 00:18:16.049 "data_offset": 256, 00:18:16.049 "data_size": 7936 00:18:16.049 }, 00:18:16.049 { 
00:18:16.049 "name": null, 00:18:16.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.049 "is_configured": false, 00:18:16.049 "data_offset": 256, 00:18:16.049 "data_size": 7936 00:18:16.049 } 00:18:16.049 ] 00:18:16.049 }' 00:18:16.049 15:06:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.049 15:06:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.307 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:16.307 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:16.307 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:16.307 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.566 [2024-07-12 15:06:42.286875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.566 [2024-07-12 15:06:42.286928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.566 [2024-07-12 15:06:42.286940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34f00 00:18:16.566 [2024-07-12 15:06:42.286948] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.566 [2024-07-12 15:06:42.287019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.566 [2024-07-12 15:06:42.287029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.566 [2024-07-12 15:06:42.287053] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:16.566 [2024-07-12 15:06:42.287061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.566 [2024-07-12 15:06:42.287078] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378655a35180 00:18:16.566 [2024-07-12 15:06:42.287082] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.566 [2024-07-12 15:06:42.287101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378655a97e20 00:18:16.566 [2024-07-12 15:06:42.287123] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378655a35180 00:18:16.566 [2024-07-12 15:06:42.287127] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378655a35180 00:18:16.566 [2024-07-12 15:06:42.287143] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.566 pt2 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.566 
15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.566 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.825 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.825 "name": "raid_bdev1", 00:18:16.825 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:16.825 "strip_size_kb": 0, 00:18:16.825 "state": "online", 00:18:16.825 "raid_level": "raid1", 00:18:16.825 "superblock": true, 00:18:16.825 "num_base_bdevs": 2, 00:18:16.825 "num_base_bdevs_discovered": 2, 00:18:16.825 "num_base_bdevs_operational": 2, 00:18:16.825 "base_bdevs_list": [ 00:18:16.825 { 00:18:16.825 "name": "pt1", 00:18:16.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:16.825 "is_configured": true, 00:18:16.825 "data_offset": 256, 00:18:16.825 "data_size": 7936 00:18:16.825 }, 00:18:16.825 { 00:18:16.825 "name": "pt2", 00:18:16.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.825 "is_configured": true, 00:18:16.825 "data_offset": 256, 00:18:16.825 "data_size": 7936 00:18:16.825 } 00:18:16.825 ] 00:18:16.825 }' 00:18:16.825 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.825 15:06:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:17.390 15:06:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:17.390 [2024-07-12 15:06:43.182954] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.390 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:17.390 "name": "raid_bdev1", 00:18:17.390 "aliases": [ 00:18:17.390 
"561a70ba-4060-11ef-b2a4-e9dca065e82e" 00:18:17.390 ], 00:18:17.390 "product_name": "Raid Volume", 00:18:17.390 "block_size": 4096, 00:18:17.390 "num_blocks": 7936, 00:18:17.390 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:17.390 "md_size": 32, 00:18:17.390 "md_interleave": false, 00:18:17.390 "dif_type": 0, 00:18:17.390 "assigned_rate_limits": { 00:18:17.390 "rw_ios_per_sec": 0, 00:18:17.390 "rw_mbytes_per_sec": 0, 00:18:17.390 "r_mbytes_per_sec": 0, 00:18:17.390 "w_mbytes_per_sec": 0 00:18:17.390 }, 00:18:17.390 "claimed": false, 00:18:17.390 "zoned": false, 00:18:17.390 "supported_io_types": { 00:18:17.390 "read": true, 00:18:17.390 "write": true, 00:18:17.390 "unmap": false, 00:18:17.390 "flush": false, 00:18:17.391 "reset": true, 00:18:17.391 "nvme_admin": false, 00:18:17.391 "nvme_io": false, 00:18:17.391 "nvme_io_md": false, 00:18:17.391 "write_zeroes": true, 00:18:17.391 "zcopy": false, 00:18:17.391 "get_zone_info": false, 00:18:17.391 "zone_management": false, 00:18:17.391 "zone_append": false, 00:18:17.391 "compare": false, 00:18:17.391 "compare_and_write": false, 00:18:17.391 "abort": false, 00:18:17.391 "seek_hole": false, 00:18:17.391 "seek_data": false, 00:18:17.391 "copy": false, 00:18:17.391 "nvme_iov_md": false 00:18:17.391 }, 00:18:17.391 "memory_domains": [ 00:18:17.391 { 00:18:17.391 "dma_device_id": "system", 00:18:17.391 "dma_device_type": 1 00:18:17.391 }, 00:18:17.391 { 00:18:17.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.391 "dma_device_type": 2 00:18:17.391 }, 00:18:17.391 { 00:18:17.391 "dma_device_id": "system", 00:18:17.391 "dma_device_type": 1 00:18:17.391 }, 00:18:17.391 { 00:18:17.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.391 "dma_device_type": 2 00:18:17.391 } 00:18:17.391 ], 00:18:17.391 "driver_specific": { 00:18:17.391 "raid": { 00:18:17.391 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:17.391 "strip_size_kb": 0, 00:18:17.391 "state": "online", 00:18:17.391 "raid_level": "raid1", 00:18:17.391 "superblock": true, 00:18:17.391 "num_base_bdevs": 2, 00:18:17.391 "num_base_bdevs_discovered": 2, 00:18:17.391 "num_base_bdevs_operational": 2, 00:18:17.391 "base_bdevs_list": [ 00:18:17.391 { 00:18:17.391 "name": "pt1", 00:18:17.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.391 "is_configured": true, 00:18:17.391 "data_offset": 256, 00:18:17.391 "data_size": 7936 00:18:17.391 }, 00:18:17.391 { 00:18:17.391 "name": "pt2", 00:18:17.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.391 "is_configured": true, 00:18:17.391 "data_offset": 256, 00:18:17.391 "data_size": 7936 00:18:17.391 } 00:18:17.391 ] 00:18:17.391 } 00:18:17.391 } 00:18:17.391 }' 00:18:17.391 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.391 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:17.391 pt2' 00:18:17.391 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:17.391 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:17.391 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:17.649 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:17.649 "name": "pt1", 
00:18:17.649 "aliases": [ 00:18:17.649 "00000000-0000-0000-0000-000000000001" 00:18:17.649 ], 00:18:17.649 "product_name": "passthru", 00:18:17.649 "block_size": 4096, 00:18:17.649 "num_blocks": 8192, 00:18:17.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.649 "md_size": 32, 00:18:17.649 "md_interleave": false, 00:18:17.649 "dif_type": 0, 00:18:17.649 "assigned_rate_limits": { 00:18:17.649 "rw_ios_per_sec": 0, 00:18:17.649 "rw_mbytes_per_sec": 0, 00:18:17.649 "r_mbytes_per_sec": 0, 00:18:17.649 "w_mbytes_per_sec": 0 00:18:17.649 }, 00:18:17.649 "claimed": true, 00:18:17.649 "claim_type": "exclusive_write", 00:18:17.649 "zoned": false, 00:18:17.649 "supported_io_types": { 00:18:17.649 "read": true, 00:18:17.649 "write": true, 00:18:17.649 "unmap": true, 00:18:17.649 "flush": true, 00:18:17.649 "reset": true, 00:18:17.649 "nvme_admin": false, 00:18:17.649 "nvme_io": false, 00:18:17.649 "nvme_io_md": false, 00:18:17.649 "write_zeroes": true, 00:18:17.649 "zcopy": true, 00:18:17.649 "get_zone_info": false, 00:18:17.649 "zone_management": false, 00:18:17.649 "zone_append": false, 00:18:17.649 "compare": false, 00:18:17.649 "compare_and_write": false, 00:18:17.649 "abort": true, 00:18:17.649 "seek_hole": false, 00:18:17.649 "seek_data": false, 00:18:17.649 "copy": true, 00:18:17.649 "nvme_iov_md": false 00:18:17.649 }, 00:18:17.649 "memory_domains": [ 00:18:17.649 { 00:18:17.649 "dma_device_id": "system", 00:18:17.649 "dma_device_type": 1 00:18:17.649 }, 00:18:17.649 { 00:18:17.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.649 "dma_device_type": 2 00:18:17.649 } 00:18:17.649 ], 00:18:17.649 "driver_specific": { 00:18:17.649 "passthru": { 00:18:17.649 "name": "pt1", 00:18:17.649 "base_bdev_name": "malloc1" 00:18:17.649 } 00:18:17.649 } 00:18:17.649 }' 00:18:17.649 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:17.649 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:17.649 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:17.649 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:17.908 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.240 
15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.240 "name": "pt2", 00:18:18.240 "aliases": [ 00:18:18.240 "00000000-0000-0000-0000-000000000002" 00:18:18.240 ], 00:18:18.240 "product_name": "passthru", 00:18:18.240 "block_size": 4096, 00:18:18.240 "num_blocks": 8192, 00:18:18.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.240 "md_size": 32, 00:18:18.240 "md_interleave": false, 00:18:18.240 "dif_type": 0, 00:18:18.240 "assigned_rate_limits": { 00:18:18.240 "rw_ios_per_sec": 0, 00:18:18.240 "rw_mbytes_per_sec": 0, 00:18:18.240 "r_mbytes_per_sec": 0, 00:18:18.240 "w_mbytes_per_sec": 0 00:18:18.240 }, 00:18:18.240 "claimed": true, 00:18:18.240 "claim_type": "exclusive_write", 00:18:18.240 "zoned": false, 00:18:18.240 "supported_io_types": { 00:18:18.240 "read": true, 00:18:18.240 "write": true, 00:18:18.240 "unmap": true, 00:18:18.240 "flush": true, 00:18:18.240 "reset": true, 00:18:18.240 "nvme_admin": false, 00:18:18.240 "nvme_io": false, 00:18:18.240 "nvme_io_md": false, 00:18:18.240 "write_zeroes": true, 00:18:18.240 "zcopy": true, 00:18:18.240 "get_zone_info": false, 00:18:18.240 "zone_management": false, 00:18:18.240 "zone_append": false, 00:18:18.240 "compare": false, 00:18:18.240 "compare_and_write": false, 00:18:18.240 "abort": true, 00:18:18.240 "seek_hole": false, 00:18:18.240 "seek_data": false, 00:18:18.240 "copy": true, 00:18:18.240 "nvme_iov_md": false 00:18:18.240 }, 00:18:18.240 "memory_domains": [ 00:18:18.240 { 00:18:18.240 "dma_device_id": "system", 00:18:18.240 "dma_device_type": 1 00:18:18.240 }, 00:18:18.240 { 00:18:18.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.240 "dma_device_type": 2 00:18:18.240 } 00:18:18.240 ], 00:18:18.240 "driver_specific": { 00:18:18.240 "passthru": { 00:18:18.240 "name": "pt2", 00:18:18.240 "base_bdev_name": "malloc2" 00:18:18.240 } 00:18:18.240 } 00:18:18.240 }' 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:18.240 15:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:18:18.499 [2024-07-12 15:06:44.110988] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.499 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 561a70ba-4060-11ef-b2a4-e9dca065e82e '!=' 561a70ba-4060-11ef-b2a4-e9dca065e82e ']' 00:18:18.499 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:18.499 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:18.499 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:18:18.499 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:18.757 [2024-07-12 15:06:44.434982] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.757 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.016 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.016 "name": "raid_bdev1", 00:18:19.016 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:19.016 "strip_size_kb": 0, 00:18:19.016 "state": "online", 00:18:19.016 "raid_level": "raid1", 00:18:19.016 "superblock": true, 00:18:19.016 "num_base_bdevs": 2, 00:18:19.016 "num_base_bdevs_discovered": 1, 00:18:19.016 "num_base_bdevs_operational": 1, 00:18:19.016 "base_bdevs_list": [ 00:18:19.016 { 00:18:19.016 "name": null, 00:18:19.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.016 "is_configured": false, 00:18:19.016 "data_offset": 256, 00:18:19.016 "data_size": 7936 00:18:19.016 }, 00:18:19.016 { 00:18:19.016 "name": "pt2", 00:18:19.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.016 "is_configured": true, 00:18:19.016 "data_offset": 256, 00:18:19.016 "data_size": 7936 00:18:19.016 } 00:18:19.016 ] 00:18:19.016 }' 00:18:19.016 15:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:19.016 15:06:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.274 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:19.533 [2024-07-12 15:06:45.299005] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.533 [2024-07-12 15:06:45.299030] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.533 [2024-07-12 15:06:45.299053] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.533 [2024-07-12 15:06:45.299065] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.533 [2024-07-12 15:06:45.299069] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a35180 name raid_bdev1, state offline 00:18:19.533 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.533 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:19.791 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:19.791 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:19.791 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:19.791 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:19.791 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:18:20.050 15:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.308 [2024-07-12 15:06:46.079044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.308 [2024-07-12 15:06:46.079095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.308 [2024-07-12 15:06:46.079106] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34f00 00:18:20.308 [2024-07-12 15:06:46.079115] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.308 [2024-07-12 15:06:46.079743] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.308 [2024-07-12 15:06:46.079770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.308 [2024-07-12 15:06:46.079796] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:18:20.308 [2024-07-12 15:06:46.079807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.308 [2024-07-12 15:06:46.079824] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378655a35180 00:18:20.308 [2024-07-12 15:06:46.079828] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.308 [2024-07-12 15:06:46.079848] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378655a97e20 00:18:20.308 [2024-07-12 15:06:46.079871] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378655a35180 00:18:20.308 [2024-07-12 15:06:46.079875] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378655a35180 00:18:20.308 [2024-07-12 15:06:46.079889] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.308 pt2 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.308 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.567 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.567 "name": "raid_bdev1", 00:18:20.567 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:20.567 "strip_size_kb": 0, 00:18:20.567 "state": "online", 00:18:20.567 "raid_level": "raid1", 00:18:20.567 "superblock": true, 00:18:20.567 "num_base_bdevs": 2, 00:18:20.567 "num_base_bdevs_discovered": 1, 00:18:20.567 "num_base_bdevs_operational": 1, 00:18:20.567 "base_bdevs_list": [ 00:18:20.567 { 00:18:20.567 "name": null, 00:18:20.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.567 "is_configured": false, 00:18:20.567 "data_offset": 256, 00:18:20.567 "data_size": 7936 00:18:20.567 }, 00:18:20.567 { 00:18:20.567 "name": "pt2", 00:18:20.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.567 "is_configured": true, 00:18:20.567 "data_offset": 256, 00:18:20.567 "data_size": 7936 00:18:20.567 } 00:18:20.567 ] 00:18:20.567 }' 00:18:20.567 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:20.567 15:06:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.135 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:21.135 [2024-07-12 15:06:46.915070] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.135 [2024-07-12 15:06:46.915092] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.135 [2024-07-12 15:06:46.915132] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.135 [2024-07-12 15:06:46.915144] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.135 [2024-07-12 15:06:46.915148] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a35180 name raid_bdev1, state offline 00:18:21.135 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.135 15:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:21.393 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:21.393 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:21.393 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:21.393 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.651 [2024-07-12 15:06:47.467112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.651 [2024-07-12 15:06:47.467168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.651 [2024-07-12 15:06:47.467179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x378655a34c80 00:18:21.651 [2024-07-12 15:06:47.467187] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.651 [2024-07-12 15:06:47.467820] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.651 [2024-07-12 15:06:47.467847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.651 [2024-07-12 15:06:47.467872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:21.651 [2024-07-12 15:06:47.467883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.651 [2024-07-12 15:06:47.467910] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:21.651 [2024-07-12 15:06:47.467915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.651 [2024-07-12 15:06:47.467921] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a34780 name raid_bdev1, state configuring 00:18:21.651 [2024-07-12 15:06:47.467929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.651 [2024-07-12 15:06:47.467945] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x378655a34780 00:18:21.651 [2024-07-12 
15:06:47.467949] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:21.651 [2024-07-12 15:06:47.467970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x378655a97e20 00:18:21.651 [2024-07-12 15:06:47.467992] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x378655a34780 00:18:21.651 [2024-07-12 15:06:47.467996] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x378655a34780 00:18:21.651 [2024-07-12 15:06:47.468010] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.651 pt1 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.910 "name": "raid_bdev1", 00:18:21.910 "uuid": "561a70ba-4060-11ef-b2a4-e9dca065e82e", 00:18:21.910 "strip_size_kb": 0, 00:18:21.910 "state": "online", 00:18:21.910 "raid_level": "raid1", 00:18:21.910 "superblock": true, 00:18:21.910 "num_base_bdevs": 2, 00:18:21.910 "num_base_bdevs_discovered": 1, 00:18:21.910 "num_base_bdevs_operational": 1, 00:18:21.910 "base_bdevs_list": [ 00:18:21.910 { 00:18:21.910 "name": null, 00:18:21.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.910 "is_configured": false, 00:18:21.910 "data_offset": 256, 00:18:21.910 "data_size": 7936 00:18:21.910 }, 00:18:21.910 { 00:18:21.910 "name": "pt2", 00:18:21.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.910 "is_configured": true, 00:18:21.910 "data_offset": 256, 00:18:21.910 "data_size": 7936 00:18:21.910 } 00:18:21.910 ] 00:18:21.910 }' 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.910 15:06:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.477 15:06:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:22.477 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:22.735 [2024-07-12 15:06:48.547194] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 561a70ba-4060-11ef-b2a4-e9dca065e82e '!=' 561a70ba-4060-11ef-b2a4-e9dca065e82e ']' 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66500 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66500 ']' 00:18:22.735 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66500 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66500 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:22.995 killing process with pid 66500 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66500' 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66500 00:18:22.995 [2024-07-12 15:06:48.577656] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66500 00:18:22.995 [2024-07-12 15:06:48.577682] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.995 [2024-07-12 15:06:48.577693] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.995 [2024-07-12 15:06:48.577698] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x378655a34780 name raid_bdev1, state offline 00:18:22.995 [2024-07-12 15:06:48.589605] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:18:22.995 00:18:22.995 real 0m13.507s 00:18:22.995 user 0m24.141s 00:18:22.995 sys 0m2.114s 00:18:22.995 ************************************ 00:18:22.995 END TEST raid_superblock_test_md_separate 00:18:22.995 ************************************ 00:18:22.995 15:06:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.995 15:06:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.995 15:06:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:22.995 15:06:48 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:18:22.995 15:06:48 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:18:22.995 15:06:48 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:22.995 15:06:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:22.995 15:06:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.995 15:06:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.995 ************************************ 00:18:22.995 START TEST raid_state_function_test_sb_md_interleaved 00:18:22.995 ************************************ 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 
-- # '[' raid1 '!=' raid1 ']' 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66891 00:18:22.995 Process raid pid: 66891 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66891' 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66891 /var/tmp/spdk-raid.sock 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66891 ']' 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.995 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.996 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.996 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.996 15:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.254 [2024-07-12 15:06:48.827948] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:18:23.254 [2024-07-12 15:06:48.828118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:23.830 EAL: TSC is not safe to use in SMP mode 00:18:23.830 EAL: TSC is not invariant 00:18:23.830 [2024-07-12 15:06:49.360072] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.830 [2024-07-12 15:06:49.455572] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
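The startup trace above follows the usual autotest pattern: launch the minimal bdev_svc application on a private RPC socket, wait for it to listen, then drive every subsequent step through rpc.py. A minimal standalone sketch of that pattern, using the same binary, flags, and socket paths the trace shows (the readiness loop and the rpc_get_methods probe are illustrative stand-ins for the harness's own waitforlisten helper):

# Start the bare bdev application with the bdev_raid debug log flag, as in the trace.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# Poll until the UNIX-domain RPC socket answers before issuing any commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

# Every later step in the trace is an rpc.py call against this socket, for example:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid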
00:18:23.830 [2024-07-12 15:06:49.457971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.830 [2024-07-12 15:06:49.458857] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.830 [2024-07-12 15:06:49.458878] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.397 15:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.397 15:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:24.397 15:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:24.397 [2024-07-12 15:06:50.171878] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.397 [2024-07-12 15:06:50.171930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.397 [2024-07-12 15:06:50.171936] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.397 [2024-07-12 15:06:50.171945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.397 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.655 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.655 "name": "Existed_Raid", 00:18:24.655 "uuid": "5d839ab5-4060-11ef-b2a4-e9dca065e82e", 00:18:24.655 "strip_size_kb": 0, 00:18:24.655 "state": "configuring", 00:18:24.655 "raid_level": "raid1", 00:18:24.655 "superblock": true, 00:18:24.655 "num_base_bdevs": 2, 00:18:24.655 "num_base_bdevs_discovered": 0, 00:18:24.655 "num_base_bdevs_operational": 2, 00:18:24.655 
"base_bdevs_list": [ 00:18:24.655 { 00:18:24.655 "name": "BaseBdev1", 00:18:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.655 "is_configured": false, 00:18:24.655 "data_offset": 0, 00:18:24.655 "data_size": 0 00:18:24.655 }, 00:18:24.655 { 00:18:24.655 "name": "BaseBdev2", 00:18:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.655 "is_configured": false, 00:18:24.655 "data_offset": 0, 00:18:24.655 "data_size": 0 00:18:24.655 } 00:18:24.655 ] 00:18:24.655 }' 00:18:24.655 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.655 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.219 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.219 [2024-07-12 15:06:50.967888] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.219 [2024-07-12 15:06:50.967915] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b962834500 name Existed_Raid, state configuring 00:18:25.219 15:06:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:25.476 [2024-07-12 15:06:51.231903] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.476 [2024-07-12 15:06:51.231957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.476 [2024-07-12 15:06:51.231962] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.476 [2024-07-12 15:06:51.231971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.476 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:25.733 [2024-07-12 15:06:51.468833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.733 BaseBdev1 00:18:25.733 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:25.733 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:25.734 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:25.734 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:18:25.734 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:25.734 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:25.734 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.089 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:18:26.363 [ 00:18:26.363 { 00:18:26.363 "name": "BaseBdev1", 00:18:26.363 "aliases": [ 00:18:26.363 "5e495d8d-4060-11ef-b2a4-e9dca065e82e" 00:18:26.363 ], 00:18:26.363 "product_name": "Malloc disk", 00:18:26.363 "block_size": 4128, 00:18:26.363 "num_blocks": 8192, 00:18:26.363 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:26.363 "md_size": 32, 00:18:26.363 "md_interleave": true, 00:18:26.363 "dif_type": 0, 00:18:26.363 "assigned_rate_limits": { 00:18:26.363 "rw_ios_per_sec": 0, 00:18:26.363 "rw_mbytes_per_sec": 0, 00:18:26.363 "r_mbytes_per_sec": 0, 00:18:26.363 "w_mbytes_per_sec": 0 00:18:26.363 }, 00:18:26.363 "claimed": true, 00:18:26.363 "claim_type": "exclusive_write", 00:18:26.363 "zoned": false, 00:18:26.363 "supported_io_types": { 00:18:26.363 "read": true, 00:18:26.363 "write": true, 00:18:26.363 "unmap": true, 00:18:26.363 "flush": true, 00:18:26.363 "reset": true, 00:18:26.363 "nvme_admin": false, 00:18:26.363 "nvme_io": false, 00:18:26.363 "nvme_io_md": false, 00:18:26.363 "write_zeroes": true, 00:18:26.363 "zcopy": true, 00:18:26.363 "get_zone_info": false, 00:18:26.363 "zone_management": false, 00:18:26.363 "zone_append": false, 00:18:26.363 "compare": false, 00:18:26.363 "compare_and_write": false, 00:18:26.363 "abort": true, 00:18:26.363 "seek_hole": false, 00:18:26.363 "seek_data": false, 00:18:26.363 "copy": true, 00:18:26.363 "nvme_iov_md": false 00:18:26.363 }, 00:18:26.363 "memory_domains": [ 00:18:26.363 { 00:18:26.363 "dma_device_id": "system", 00:18:26.363 "dma_device_type": 1 00:18:26.363 }, 00:18:26.363 { 00:18:26.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.363 "dma_device_type": 2 00:18:26.363 } 00:18:26.363 ], 00:18:26.363 "driver_specific": {} 00:18:26.363 } 00:18:26.363 ] 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.363 15:06:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.363 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.363 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:26.621 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.621 "name": "Existed_Raid", 00:18:26.621 "uuid": "5e255a0f-4060-11ef-b2a4-e9dca065e82e", 00:18:26.621 "strip_size_kb": 0, 00:18:26.621 "state": "configuring", 00:18:26.621 "raid_level": "raid1", 00:18:26.621 "superblock": true, 00:18:26.621 "num_base_bdevs": 2, 00:18:26.621 "num_base_bdevs_discovered": 1, 00:18:26.621 "num_base_bdevs_operational": 2, 00:18:26.621 "base_bdevs_list": [ 00:18:26.621 { 00:18:26.621 "name": "BaseBdev1", 00:18:26.621 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:26.621 "is_configured": true, 00:18:26.621 "data_offset": 256, 00:18:26.621 "data_size": 7936 00:18:26.621 }, 00:18:26.621 { 00:18:26.621 "name": "BaseBdev2", 00:18:26.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.621 "is_configured": false, 00:18:26.621 "data_offset": 0, 00:18:26.621 "data_size": 0 00:18:26.621 } 00:18:26.621 ] 00:18:26.621 }' 00:18:26.621 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.621 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.878 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:27.137 [2024-07-12 15:06:52.763962] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.137 [2024-07-12 15:06:52.763992] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b962834500 name Existed_Raid, state configuring 00:18:27.137 15:06:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:18:27.395 [2024-07-12 15:06:52.987987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.395 [2024-07-12 15:06:52.988765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.395 [2024-07-12 15:06:52.988803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.395 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.653 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.653 "name": "Existed_Raid", 00:18:27.653 "uuid": "5f314edd-4060-11ef-b2a4-e9dca065e82e", 00:18:27.653 "strip_size_kb": 0, 00:18:27.653 "state": "configuring", 00:18:27.653 "raid_level": "raid1", 00:18:27.653 "superblock": true, 00:18:27.653 "num_base_bdevs": 2, 00:18:27.653 "num_base_bdevs_discovered": 1, 00:18:27.653 "num_base_bdevs_operational": 2, 00:18:27.653 "base_bdevs_list": [ 00:18:27.653 { 00:18:27.653 "name": "BaseBdev1", 00:18:27.653 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:27.653 "is_configured": true, 00:18:27.653 "data_offset": 256, 00:18:27.653 "data_size": 7936 00:18:27.653 }, 00:18:27.653 { 00:18:27.653 "name": "BaseBdev2", 00:18:27.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.653 "is_configured": false, 00:18:27.653 "data_offset": 0, 00:18:27.653 "data_size": 0 00:18:27.653 } 00:18:27.653 ] 00:18:27.653 }' 00:18:27.653 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.653 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.911 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:28.169 [2024-07-12 15:06:53.820107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.169 [2024-07-12 15:06:53.820181] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14b962834a00 00:18:28.169 [2024-07-12 15:06:53.820186] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:28.169 [2024-07-12 15:06:53.820206] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14b962897e20 00:18:28.169 [2024-07-12 15:06:53.820220] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14b962834a00 00:18:28.169 [2024-07-12 15:06:53.820224] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14b962834a00 00:18:28.169 [2024-07-12 15:06:53.820235] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.169 BaseBdev2 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:28.169 
15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:28.169 15:06:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.426 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.684 [ 00:18:28.684 { 00:18:28.684 "name": "BaseBdev2", 00:18:28.684 "aliases": [ 00:18:28.684 "5fb044cc-4060-11ef-b2a4-e9dca065e82e" 00:18:28.684 ], 00:18:28.684 "product_name": "Malloc disk", 00:18:28.684 "block_size": 4128, 00:18:28.684 "num_blocks": 8192, 00:18:28.684 "uuid": "5fb044cc-4060-11ef-b2a4-e9dca065e82e", 00:18:28.684 "md_size": 32, 00:18:28.684 "md_interleave": true, 00:18:28.684 "dif_type": 0, 00:18:28.684 "assigned_rate_limits": { 00:18:28.684 "rw_ios_per_sec": 0, 00:18:28.684 "rw_mbytes_per_sec": 0, 00:18:28.684 "r_mbytes_per_sec": 0, 00:18:28.684 "w_mbytes_per_sec": 0 00:18:28.684 }, 00:18:28.684 "claimed": true, 00:18:28.684 "claim_type": "exclusive_write", 00:18:28.684 "zoned": false, 00:18:28.684 "supported_io_types": { 00:18:28.684 "read": true, 00:18:28.684 "write": true, 00:18:28.684 "unmap": true, 00:18:28.684 "flush": true, 00:18:28.684 "reset": true, 00:18:28.684 "nvme_admin": false, 00:18:28.684 "nvme_io": false, 00:18:28.684 "nvme_io_md": false, 00:18:28.684 "write_zeroes": true, 00:18:28.684 "zcopy": true, 00:18:28.684 "get_zone_info": false, 00:18:28.684 "zone_management": false, 00:18:28.684 "zone_append": false, 00:18:28.684 "compare": false, 00:18:28.684 "compare_and_write": false, 00:18:28.684 "abort": true, 00:18:28.684 "seek_hole": false, 00:18:28.684 "seek_data": false, 00:18:28.684 "copy": true, 00:18:28.684 "nvme_iov_md": false 00:18:28.684 }, 00:18:28.684 "memory_domains": [ 00:18:28.684 { 00:18:28.684 "dma_device_id": "system", 00:18:28.684 "dma_device_type": 1 00:18:28.684 }, 00:18:28.684 { 00:18:28.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.684 "dma_device_type": 2 00:18:28.684 } 00:18:28.684 ], 00:18:28.684 "driver_specific": {} 00:18:28.684 } 00:18:28.684 ] 00:18:28.684 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:18:28.684 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:28.684 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:28.684 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.685 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.942 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.942 "name": "Existed_Raid", 00:18:28.942 "uuid": "5f314edd-4060-11ef-b2a4-e9dca065e82e", 00:18:28.942 "strip_size_kb": 0, 00:18:28.942 "state": "online", 00:18:28.942 "raid_level": "raid1", 00:18:28.942 "superblock": true, 00:18:28.942 "num_base_bdevs": 2, 00:18:28.942 "num_base_bdevs_discovered": 2, 00:18:28.942 "num_base_bdevs_operational": 2, 00:18:28.942 "base_bdevs_list": [ 00:18:28.942 { 00:18:28.942 "name": "BaseBdev1", 00:18:28.942 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:28.942 "is_configured": true, 00:18:28.942 "data_offset": 256, 00:18:28.942 "data_size": 7936 00:18:28.942 }, 00:18:28.942 { 00:18:28.942 "name": "BaseBdev2", 00:18:28.942 "uuid": "5fb044cc-4060-11ef-b2a4-e9dca065e82e", 00:18:28.942 "is_configured": true, 00:18:28.942 "data_offset": 256, 00:18:28.942 "data_size": 7936 00:18:28.942 } 00:18:28.942 ] 00:18:28.942 }' 00:18:28.942 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.942 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:29.200 15:06:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:29.200 [2024-07-12 15:06:55.032146] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.459 15:06:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:29.459 "name": "Existed_Raid", 00:18:29.459 "aliases": [ 00:18:29.459 "5f314edd-4060-11ef-b2a4-e9dca065e82e" 00:18:29.459 ], 00:18:29.459 "product_name": "Raid Volume", 00:18:29.459 "block_size": 4128, 00:18:29.459 "num_blocks": 7936, 00:18:29.459 "uuid": "5f314edd-4060-11ef-b2a4-e9dca065e82e", 00:18:29.459 "md_size": 32, 00:18:29.459 "md_interleave": true, 00:18:29.459 "dif_type": 0, 00:18:29.459 "assigned_rate_limits": { 00:18:29.459 "rw_ios_per_sec": 0, 00:18:29.459 "rw_mbytes_per_sec": 0, 00:18:29.459 "r_mbytes_per_sec": 0, 00:18:29.459 "w_mbytes_per_sec": 0 00:18:29.459 }, 00:18:29.459 "claimed": false, 00:18:29.459 "zoned": false, 00:18:29.459 "supported_io_types": { 00:18:29.459 "read": true, 00:18:29.459 "write": true, 00:18:29.459 "unmap": false, 00:18:29.459 "flush": false, 00:18:29.459 "reset": true, 00:18:29.459 "nvme_admin": false, 00:18:29.459 "nvme_io": false, 00:18:29.459 "nvme_io_md": false, 00:18:29.459 "write_zeroes": true, 00:18:29.459 "zcopy": false, 00:18:29.459 "get_zone_info": false, 00:18:29.459 "zone_management": false, 00:18:29.459 "zone_append": false, 00:18:29.459 "compare": false, 00:18:29.459 "compare_and_write": false, 00:18:29.459 "abort": false, 00:18:29.459 "seek_hole": false, 00:18:29.459 "seek_data": false, 00:18:29.459 "copy": false, 00:18:29.459 "nvme_iov_md": false 00:18:29.459 }, 00:18:29.459 "memory_domains": [ 00:18:29.459 { 00:18:29.459 "dma_device_id": "system", 00:18:29.459 "dma_device_type": 1 00:18:29.459 }, 00:18:29.459 { 00:18:29.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.459 "dma_device_type": 2 00:18:29.459 }, 00:18:29.459 { 00:18:29.459 "dma_device_id": "system", 00:18:29.459 "dma_device_type": 1 00:18:29.459 }, 00:18:29.459 { 00:18:29.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.459 "dma_device_type": 2 00:18:29.459 } 00:18:29.459 ], 00:18:29.459 "driver_specific": { 00:18:29.459 "raid": { 00:18:29.459 "uuid": "5f314edd-4060-11ef-b2a4-e9dca065e82e", 00:18:29.459 "strip_size_kb": 0, 00:18:29.459 "state": "online", 00:18:29.459 "raid_level": "raid1", 00:18:29.459 "superblock": true, 00:18:29.459 "num_base_bdevs": 2, 00:18:29.459 "num_base_bdevs_discovered": 2, 00:18:29.459 "num_base_bdevs_operational": 2, 00:18:29.459 "base_bdevs_list": [ 00:18:29.459 { 00:18:29.459 "name": "BaseBdev1", 00:18:29.459 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:29.459 "is_configured": true, 00:18:29.459 "data_offset": 256, 00:18:29.459 "data_size": 7936 00:18:29.459 }, 00:18:29.459 { 00:18:29.459 "name": "BaseBdev2", 00:18:29.459 "uuid": "5fb044cc-4060-11ef-b2a4-e9dca065e82e", 00:18:29.459 "is_configured": true, 00:18:29.459 "data_offset": 256, 00:18:29.459 "data_size": 7936 00:18:29.459 } 00:18:29.459 ] 00:18:29.459 } 00:18:29.459 } 00:18:29.459 }' 00:18:29.459 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.459 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:29.459 BaseBdev2' 00:18:29.459 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.459 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
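From here the property check narrows from the RAID volume's JSON to each configured base bdev, asserting the same interleaved-metadata layout fields (block_size 4128, md_size 32, md_interleave true, dif_type 0) that were just dumped for Existed_Raid. A condensed sketch of that check, reusing the rpc.py and jq calls visible in the trace (the loop and variable names are illustrative, not the harness's verify_raid_bdev_properties implementation; the bdev names are taken from the output above):

sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for name in Existed_Raid BaseBdev1 BaseBdev2; do
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    # The md-interleaved layout is the point of this test: 4096-byte data blocks
    # carry 32 bytes of metadata inline, giving 4128-byte logical blocks.
    [[ $(jq .block_size    <<<"$info") == 4128 ]]
    [[ $(jq .md_size       <<<"$info") == 32   ]]
    [[ $(jq .md_interleave <<<"$info") == true ]]
    [[ $(jq .dif_type      <<<"$info") == 0    ]]
done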
00:18:29.459 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.737 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.737 "name": "BaseBdev1", 00:18:29.737 "aliases": [ 00:18:29.737 "5e495d8d-4060-11ef-b2a4-e9dca065e82e" 00:18:29.737 ], 00:18:29.737 "product_name": "Malloc disk", 00:18:29.737 "block_size": 4128, 00:18:29.737 "num_blocks": 8192, 00:18:29.737 "uuid": "5e495d8d-4060-11ef-b2a4-e9dca065e82e", 00:18:29.737 "md_size": 32, 00:18:29.737 "md_interleave": true, 00:18:29.737 "dif_type": 0, 00:18:29.737 "assigned_rate_limits": { 00:18:29.737 "rw_ios_per_sec": 0, 00:18:29.737 "rw_mbytes_per_sec": 0, 00:18:29.737 "r_mbytes_per_sec": 0, 00:18:29.737 "w_mbytes_per_sec": 0 00:18:29.737 }, 00:18:29.737 "claimed": true, 00:18:29.737 "claim_type": "exclusive_write", 00:18:29.737 "zoned": false, 00:18:29.737 "supported_io_types": { 00:18:29.737 "read": true, 00:18:29.737 "write": true, 00:18:29.737 "unmap": true, 00:18:29.737 "flush": true, 00:18:29.738 "reset": true, 00:18:29.738 "nvme_admin": false, 00:18:29.738 "nvme_io": false, 00:18:29.738 "nvme_io_md": false, 00:18:29.738 "write_zeroes": true, 00:18:29.738 "zcopy": true, 00:18:29.738 "get_zone_info": false, 00:18:29.738 "zone_management": false, 00:18:29.738 "zone_append": false, 00:18:29.738 "compare": false, 00:18:29.738 "compare_and_write": false, 00:18:29.738 "abort": true, 00:18:29.738 "seek_hole": false, 00:18:29.738 "seek_data": false, 00:18:29.738 "copy": true, 00:18:29.738 "nvme_iov_md": false 00:18:29.738 }, 00:18:29.738 "memory_domains": [ 00:18:29.738 { 00:18:29.738 "dma_device_id": "system", 00:18:29.738 "dma_device_type": 1 00:18:29.738 }, 00:18:29.738 { 00:18:29.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.738 "dma_device_type": 2 00:18:29.738 } 00:18:29.738 ], 00:18:29.738 "driver_specific": {} 00:18:29.738 }' 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.738 15:06:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:29.738 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:29.997 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.997 "name": "BaseBdev2", 00:18:29.997 "aliases": [ 00:18:29.997 "5fb044cc-4060-11ef-b2a4-e9dca065e82e" 00:18:29.997 ], 00:18:29.997 "product_name": "Malloc disk", 00:18:29.997 "block_size": 4128, 00:18:29.997 "num_blocks": 8192, 00:18:29.997 "uuid": "5fb044cc-4060-11ef-b2a4-e9dca065e82e", 00:18:29.997 "md_size": 32, 00:18:29.997 "md_interleave": true, 00:18:29.997 "dif_type": 0, 00:18:29.997 "assigned_rate_limits": { 00:18:29.997 "rw_ios_per_sec": 0, 00:18:29.997 "rw_mbytes_per_sec": 0, 00:18:29.997 "r_mbytes_per_sec": 0, 00:18:29.997 "w_mbytes_per_sec": 0 00:18:29.997 }, 00:18:29.997 "claimed": true, 00:18:29.997 "claim_type": "exclusive_write", 00:18:29.997 "zoned": false, 00:18:29.997 "supported_io_types": { 00:18:29.997 "read": true, 00:18:29.997 "write": true, 00:18:29.997 "unmap": true, 00:18:29.997 "flush": true, 00:18:29.997 "reset": true, 00:18:29.997 "nvme_admin": false, 00:18:29.997 "nvme_io": false, 00:18:29.997 "nvme_io_md": false, 00:18:29.997 "write_zeroes": true, 00:18:29.998 "zcopy": true, 00:18:29.998 "get_zone_info": false, 00:18:29.998 "zone_management": false, 00:18:29.998 "zone_append": false, 00:18:29.998 "compare": false, 00:18:29.998 "compare_and_write": false, 00:18:29.998 "abort": true, 00:18:29.998 "seek_hole": false, 00:18:29.998 "seek_data": false, 00:18:29.998 "copy": true, 00:18:29.998 "nvme_iov_md": false 00:18:29.998 }, 00:18:29.998 "memory_domains": [ 00:18:29.998 { 00:18:29.998 "dma_device_id": "system", 00:18:29.998 "dma_device_type": 1 00:18:29.998 }, 00:18:29.998 { 00:18:29.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.998 "dma_device_type": 2 00:18:29.998 } 00:18:29.998 ], 00:18:29.998 "driver_specific": {} 00:18:29.998 }' 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:18:29.998 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:30.256 [2024-07-12 15:06:55.864159] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.256 15:06:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.515 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.515 "name": "Existed_Raid", 00:18:30.515 "uuid": "5f314edd-4060-11ef-b2a4-e9dca065e82e", 00:18:30.515 "strip_size_kb": 0, 00:18:30.515 "state": "online", 00:18:30.515 "raid_level": "raid1", 00:18:30.515 "superblock": true, 00:18:30.515 "num_base_bdevs": 2, 00:18:30.515 "num_base_bdevs_discovered": 1, 00:18:30.515 "num_base_bdevs_operational": 1, 00:18:30.515 "base_bdevs_list": [ 00:18:30.515 { 00:18:30.515 "name": null, 00:18:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.515 "is_configured": false, 00:18:30.515 "data_offset": 256, 00:18:30.515 "data_size": 7936 00:18:30.515 }, 00:18:30.515 { 00:18:30.515 "name": "BaseBdev2", 00:18:30.515 "uuid": "5fb044cc-4060-11ef-b2a4-e9dca065e82e", 00:18:30.515 "is_configured": true, 00:18:30.515 "data_offset": 256, 00:18:30.515 "data_size": 
7936 00:18:30.515 } 00:18:30.515 ] 00:18:30.515 }' 00:18:30.515 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.515 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.773 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:30.773 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:30.773 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.773 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:31.031 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:31.031 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.031 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:31.289 [2024-07-12 15:06:56.893934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.289 [2024-07-12 15:06:56.893978] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.289 [2024-07-12 15:06:56.899689] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.289 [2024-07-12 15:06:56.899711] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.289 [2024-07-12 15:06:56.899716] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14b962834a00 name Existed_Raid, state offline 00:18:31.289 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:31.289 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:31.289 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.289 15:06:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66891 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66891 ']' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66891 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66891 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:31.547 killing process with pid 66891 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66891' 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66891 00:18:31.547 [2024-07-12 15:06:57.210538] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.547 [2024-07-12 15:06:57.210570] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.547 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66891 00:18:31.805 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:18:31.805 00:18:31.805 real 0m8.568s 00:18:31.805 user 0m15.002s 00:18:31.805 sys 0m1.383s 00:18:31.805 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.805 ************************************ 00:18:31.805 END TEST raid_state_function_test_sb_md_interleaved 00:18:31.805 ************************************ 00:18:31.805 15:06:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.805 15:06:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:31.805 15:06:57 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:31.805 15:06:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:31.805 15:06:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.805 15:06:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.805 ************************************ 00:18:31.805 START TEST raid_superblock_test_md_interleaved 00:18:31.805 ************************************ 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67161 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67161 /var/tmp/spdk-raid.sock 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67161 ']' 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.805 15:06:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.805 [2024-07-12 15:06:57.435930] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:18:31.805 [2024-07-12 15:06:57.436168] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:32.373 EAL: TSC is not safe to use in SMP mode 00:18:32.373 EAL: TSC is not invariant 00:18:32.373 [2024-07-12 15:06:57.951095] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.373 [2024-07-12 15:06:58.034226] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
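Note: the following is an editor's sketch, not part of the captured console output. The trace above launches the bare bdev_svc application on a private RPC socket with raid debug logging and waits for it to come up; a rough manual equivalent, using only the paths and RPCs visible in this log (the polling loop stands in for the harness's waitforlisten helper), is:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll until the target answers a trivial RPC on the socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done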
00:18:32.373 [2024-07-12 15:06:58.036298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.373 [2024-07-12 15:06:58.037042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.373 [2024-07-12 15:06:58.037056] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:32.939 malloc1 00:18:32.939 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.198 [2024-07-12 15:06:58.956810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.198 [2024-07-12 15:06:58.956893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.198 [2024-07-12 15:06:58.956922] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34780 00:18:33.198 [2024-07-12 15:06:58.956930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.198 [2024-07-12 15:06:58.957718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.198 [2024-07-12 15:06:58.957744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.198 pt1 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.198 15:06:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:33.456 malloc2 00:18:33.456 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.714 [2024-07-12 15:06:59.500875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.714 [2024-07-12 15:06:59.500934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.714 [2024-07-12 15:06:59.500947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34c80 00:18:33.714 [2024-07-12 15:06:59.500955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.714 [2024-07-12 15:06:59.501532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.714 [2024-07-12 15:06:59.501562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.714 pt2 00:18:33.714 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:33.714 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:33.714 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:33.972 [2024-07-12 15:06:59.732899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.972 [2024-07-12 15:06:59.733468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.972 [2024-07-12 15:06:59.733527] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3fa557a34f00 00:18:33.972 [2024-07-12 15:06:59.733533] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:33.972 [2024-07-12 15:06:59.733576] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3fa557a97e20 00:18:33.972 [2024-07-12 15:06:59.733590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3fa557a34f00 00:18:33.972 [2024-07-12 15:06:59.733594] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3fa557a34f00 00:18:33.972 [2024-07-12 15:06:59.733607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.972 15:06:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.972 15:06:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.229 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.229 "name": "raid_bdev1", 00:18:34.229 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:34.229 "strip_size_kb": 0, 00:18:34.229 "state": "online", 00:18:34.229 "raid_level": "raid1", 00:18:34.229 "superblock": true, 00:18:34.229 "num_base_bdevs": 2, 00:18:34.229 "num_base_bdevs_discovered": 2, 00:18:34.229 "num_base_bdevs_operational": 2, 00:18:34.229 "base_bdevs_list": [ 00:18:34.229 { 00:18:34.229 "name": "pt1", 00:18:34.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.229 "is_configured": true, 00:18:34.229 "data_offset": 256, 00:18:34.229 "data_size": 7936 00:18:34.229 }, 00:18:34.229 { 00:18:34.229 "name": "pt2", 00:18:34.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.229 "is_configured": true, 00:18:34.229 "data_offset": 256, 00:18:34.229 "data_size": 7936 00:18:34.229 } 00:18:34.229 ] 00:18:34.229 }' 00:18:34.229 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.229 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:34.580 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:34.838 [2024-07-12 15:07:00.580969] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.838 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:34.838 "name": "raid_bdev1", 
00:18:34.838 "aliases": [ 00:18:34.838 "63367fea-4060-11ef-b2a4-e9dca065e82e" 00:18:34.838 ], 00:18:34.838 "product_name": "Raid Volume", 00:18:34.838 "block_size": 4128, 00:18:34.838 "num_blocks": 7936, 00:18:34.838 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:34.838 "md_size": 32, 00:18:34.838 "md_interleave": true, 00:18:34.838 "dif_type": 0, 00:18:34.838 "assigned_rate_limits": { 00:18:34.838 "rw_ios_per_sec": 0, 00:18:34.838 "rw_mbytes_per_sec": 0, 00:18:34.838 "r_mbytes_per_sec": 0, 00:18:34.838 "w_mbytes_per_sec": 0 00:18:34.838 }, 00:18:34.838 "claimed": false, 00:18:34.838 "zoned": false, 00:18:34.838 "supported_io_types": { 00:18:34.838 "read": true, 00:18:34.838 "write": true, 00:18:34.838 "unmap": false, 00:18:34.838 "flush": false, 00:18:34.838 "reset": true, 00:18:34.838 "nvme_admin": false, 00:18:34.838 "nvme_io": false, 00:18:34.838 "nvme_io_md": false, 00:18:34.838 "write_zeroes": true, 00:18:34.838 "zcopy": false, 00:18:34.838 "get_zone_info": false, 00:18:34.838 "zone_management": false, 00:18:34.838 "zone_append": false, 00:18:34.838 "compare": false, 00:18:34.838 "compare_and_write": false, 00:18:34.838 "abort": false, 00:18:34.838 "seek_hole": false, 00:18:34.838 "seek_data": false, 00:18:34.838 "copy": false, 00:18:34.838 "nvme_iov_md": false 00:18:34.838 }, 00:18:34.838 "memory_domains": [ 00:18:34.838 { 00:18:34.838 "dma_device_id": "system", 00:18:34.838 "dma_device_type": 1 00:18:34.838 }, 00:18:34.838 { 00:18:34.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.838 "dma_device_type": 2 00:18:34.838 }, 00:18:34.838 { 00:18:34.838 "dma_device_id": "system", 00:18:34.838 "dma_device_type": 1 00:18:34.838 }, 00:18:34.838 { 00:18:34.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.838 "dma_device_type": 2 00:18:34.838 } 00:18:34.838 ], 00:18:34.838 "driver_specific": { 00:18:34.838 "raid": { 00:18:34.838 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:34.838 "strip_size_kb": 0, 00:18:34.838 "state": "online", 00:18:34.838 "raid_level": "raid1", 00:18:34.838 "superblock": true, 00:18:34.838 "num_base_bdevs": 2, 00:18:34.838 "num_base_bdevs_discovered": 2, 00:18:34.838 "num_base_bdevs_operational": 2, 00:18:34.838 "base_bdevs_list": [ 00:18:34.838 { 00:18:34.838 "name": "pt1", 00:18:34.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.838 "is_configured": true, 00:18:34.838 "data_offset": 256, 00:18:34.838 "data_size": 7936 00:18:34.838 }, 00:18:34.838 { 00:18:34.838 "name": "pt2", 00:18:34.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.838 "is_configured": true, 00:18:34.838 "data_offset": 256, 00:18:34.838 "data_size": 7936 00:18:34.838 } 00:18:34.838 ] 00:18:34.838 } 00:18:34.838 } 00:18:34.838 }' 00:18:34.838 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.838 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:34.838 pt2' 00:18:34.838 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:34.839 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:34.839 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:35.097 "name": "pt1", 00:18:35.097 "aliases": [ 00:18:35.097 "00000000-0000-0000-0000-000000000001" 00:18:35.097 ], 00:18:35.097 "product_name": "passthru", 00:18:35.097 "block_size": 4128, 00:18:35.097 "num_blocks": 8192, 00:18:35.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.097 "md_size": 32, 00:18:35.097 "md_interleave": true, 00:18:35.097 "dif_type": 0, 00:18:35.097 "assigned_rate_limits": { 00:18:35.097 "rw_ios_per_sec": 0, 00:18:35.097 "rw_mbytes_per_sec": 0, 00:18:35.097 "r_mbytes_per_sec": 0, 00:18:35.097 "w_mbytes_per_sec": 0 00:18:35.097 }, 00:18:35.097 "claimed": true, 00:18:35.097 "claim_type": "exclusive_write", 00:18:35.097 "zoned": false, 00:18:35.097 "supported_io_types": { 00:18:35.097 "read": true, 00:18:35.097 "write": true, 00:18:35.097 "unmap": true, 00:18:35.097 "flush": true, 00:18:35.097 "reset": true, 00:18:35.097 "nvme_admin": false, 00:18:35.097 "nvme_io": false, 00:18:35.097 "nvme_io_md": false, 00:18:35.097 "write_zeroes": true, 00:18:35.097 "zcopy": true, 00:18:35.097 "get_zone_info": false, 00:18:35.097 "zone_management": false, 00:18:35.097 "zone_append": false, 00:18:35.097 "compare": false, 00:18:35.097 "compare_and_write": false, 00:18:35.097 "abort": true, 00:18:35.097 "seek_hole": false, 00:18:35.097 "seek_data": false, 00:18:35.097 "copy": true, 00:18:35.097 "nvme_iov_md": false 00:18:35.097 }, 00:18:35.097 "memory_domains": [ 00:18:35.097 { 00:18:35.097 "dma_device_id": "system", 00:18:35.097 "dma_device_type": 1 00:18:35.097 }, 00:18:35.097 { 00:18:35.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.097 "dma_device_type": 2 00:18:35.097 } 00:18:35.097 ], 00:18:35.097 "driver_specific": { 00:18:35.097 "passthru": { 00:18:35.097 "name": "pt1", 00:18:35.097 "base_bdev_name": "malloc1" 00:18:35.097 } 00:18:35.097 } 00:18:35.097 }' 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
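Note: editor's sketch, not part of the captured console output. The per-base-bdev property checks traced above amount to creating a malloc bdev with 32-byte interleaved metadata, wrapping it in a passthru bdev, and asserting the reported geometry; condensed to the RPCs and arguments that appear in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock    # shorthand for the paths used above
  $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1                     # 32 MiB, 4096-byte data blocks, 32-byte interleaved md
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s $sock bdev_get_bdevs -b pt1 | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'
  # expected, as asserted in the trace: block_size 4128 (data plus interleaved md), md_size 32, md_interleave true, dif_type 0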
00:18:35.097 15:07:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:35.356 "name": "pt2", 00:18:35.356 "aliases": [ 00:18:35.356 "00000000-0000-0000-0000-000000000002" 00:18:35.356 ], 00:18:35.356 "product_name": "passthru", 00:18:35.356 "block_size": 4128, 00:18:35.356 "num_blocks": 8192, 00:18:35.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.356 "md_size": 32, 00:18:35.356 "md_interleave": true, 00:18:35.356 "dif_type": 0, 00:18:35.356 "assigned_rate_limits": { 00:18:35.356 "rw_ios_per_sec": 0, 00:18:35.356 "rw_mbytes_per_sec": 0, 00:18:35.356 "r_mbytes_per_sec": 0, 00:18:35.356 "w_mbytes_per_sec": 0 00:18:35.356 }, 00:18:35.356 "claimed": true, 00:18:35.356 "claim_type": "exclusive_write", 00:18:35.356 "zoned": false, 00:18:35.356 "supported_io_types": { 00:18:35.356 "read": true, 00:18:35.356 "write": true, 00:18:35.356 "unmap": true, 00:18:35.356 "flush": true, 00:18:35.356 "reset": true, 00:18:35.356 "nvme_admin": false, 00:18:35.356 "nvme_io": false, 00:18:35.356 "nvme_io_md": false, 00:18:35.356 "write_zeroes": true, 00:18:35.356 "zcopy": true, 00:18:35.356 "get_zone_info": false, 00:18:35.356 "zone_management": false, 00:18:35.356 "zone_append": false, 00:18:35.356 "compare": false, 00:18:35.356 "compare_and_write": false, 00:18:35.356 "abort": true, 00:18:35.356 "seek_hole": false, 00:18:35.356 "seek_data": false, 00:18:35.356 "copy": true, 00:18:35.356 "nvme_iov_md": false 00:18:35.356 }, 00:18:35.356 "memory_domains": [ 00:18:35.356 { 00:18:35.356 "dma_device_id": "system", 00:18:35.356 "dma_device_type": 1 00:18:35.356 }, 00:18:35.356 { 00:18:35.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.356 "dma_device_type": 2 00:18:35.356 } 00:18:35.356 ], 00:18:35.356 "driver_specific": { 00:18:35.356 "passthru": { 00:18:35.356 "name": "pt2", 00:18:35.356 "base_bdev_name": "malloc2" 00:18:35.356 } 00:18:35.356 } 00:18:35.356 }' 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.356 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:35.614 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:35.614 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:35.614 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:35.872 [2024-07-12 15:07:01.461007] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.872 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=63367fea-4060-11ef-b2a4-e9dca065e82e 00:18:35.872 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 63367fea-4060-11ef-b2a4-e9dca065e82e ']' 00:18:35.872 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:36.152 [2024-07-12 15:07:01.744972] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.152 [2024-07-12 15:07:01.744996] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.152 [2024-07-12 15:07:01.745035] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.152 [2024-07-12 15:07:01.745049] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.152 [2024-07-12 15:07:01.745053] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a34f00 name raid_bdev1, state offline 00:18:36.152 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.152 15:07:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:36.409 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:36.409 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:36.409 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.409 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:36.667 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:36.667 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:36.924 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:36.924 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:37.181 15:07:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:37.440 [2024-07-12 15:07:03.145058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:37.440 [2024-07-12 15:07:03.145631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:37.440 [2024-07-12 15:07:03.145656] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:37.440 [2024-07-12 15:07:03.145689] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:37.440 [2024-07-12 15:07:03.145700] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.440 [2024-07-12 15:07:03.145705] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a34c80 name raid_bdev1, state configuring 00:18:37.440 request: 00:18:37.440 { 00:18:37.440 "name": "raid_bdev1", 00:18:37.440 "raid_level": "raid1", 00:18:37.440 "base_bdevs": [ 00:18:37.440 "malloc1", 00:18:37.440 "malloc2" 00:18:37.440 ], 00:18:37.440 "superblock": false, 00:18:37.440 "method": "bdev_raid_create", 00:18:37.440 "req_id": 1 00:18:37.440 } 00:18:37.440 Got JSON-RPC error response 00:18:37.440 response: 00:18:37.440 { 00:18:37.440 "code": -17, 00:18:37.440 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:37.440 } 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.440 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:37.698 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:37.698 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:37.698 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.955 [2024-07-12 15:07:03.661078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.955 [2024-07-12 15:07:03.661131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.955 [2024-07-12 15:07:03.661143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34780 00:18:37.955 [2024-07-12 15:07:03.661151] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.955 [2024-07-12 15:07:03.661723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.955 [2024-07-12 15:07:03.661749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.955 [2024-07-12 15:07:03.661768] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:37.955 [2024-07-12 15:07:03.661781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.955 pt1 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.955 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.212 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.212 "name": "raid_bdev1", 00:18:38.212 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:38.212 "strip_size_kb": 0, 00:18:38.212 "state": "configuring", 00:18:38.212 
"raid_level": "raid1", 00:18:38.212 "superblock": true, 00:18:38.212 "num_base_bdevs": 2, 00:18:38.212 "num_base_bdevs_discovered": 1, 00:18:38.212 "num_base_bdevs_operational": 2, 00:18:38.212 "base_bdevs_list": [ 00:18:38.212 { 00:18:38.212 "name": "pt1", 00:18:38.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.212 "is_configured": true, 00:18:38.212 "data_offset": 256, 00:18:38.212 "data_size": 7936 00:18:38.212 }, 00:18:38.212 { 00:18:38.212 "name": null, 00:18:38.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.212 "is_configured": false, 00:18:38.212 "data_offset": 256, 00:18:38.212 "data_size": 7936 00:18:38.212 } 00:18:38.212 ] 00:18:38.212 }' 00:18:38.212 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.212 15:07:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.470 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:38.470 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:38.470 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:38.470 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.728 [2024-07-12 15:07:04.453108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.728 [2024-07-12 15:07:04.453164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.728 [2024-07-12 15:07:04.453176] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34f00 00:18:38.728 [2024-07-12 15:07:04.453184] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.728 [2024-07-12 15:07:04.453237] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.728 [2024-07-12 15:07:04.453247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.728 [2024-07-12 15:07:04.453264] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.728 [2024-07-12 15:07:04.453272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.728 [2024-07-12 15:07:04.453294] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3fa557a35180 00:18:38.728 [2024-07-12 15:07:04.453298] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:38.728 [2024-07-12 15:07:04.453317] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3fa557a97e20 00:18:38.728 [2024-07-12 15:07:04.453330] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3fa557a35180 00:18:38.728 [2024-07-12 15:07:04.453334] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3fa557a35180 00:18:38.728 [2024-07-12 15:07:04.453346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.728 pt2 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:38.728 15:07:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.728 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.986 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.986 "name": "raid_bdev1", 00:18:38.986 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:38.986 "strip_size_kb": 0, 00:18:38.986 "state": "online", 00:18:38.986 "raid_level": "raid1", 00:18:38.986 "superblock": true, 00:18:38.986 "num_base_bdevs": 2, 00:18:38.986 "num_base_bdevs_discovered": 2, 00:18:38.986 "num_base_bdevs_operational": 2, 00:18:38.986 "base_bdevs_list": [ 00:18:38.986 { 00:18:38.986 "name": "pt1", 00:18:38.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.986 "is_configured": true, 00:18:38.987 "data_offset": 256, 00:18:38.987 "data_size": 7936 00:18:38.987 }, 00:18:38.987 { 00:18:38.987 "name": "pt2", 00:18:38.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.987 "is_configured": true, 00:18:38.987 "data_offset": 256, 00:18:38.987 "data_size": 7936 00:18:38.987 } 00:18:38.987 ] 00:18:38.987 }' 00:18:38.987 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.987 15:07:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:39.554 15:07:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:39.554 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:39.812 [2024-07-12 15:07:05.401195] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:39.812 "name": "raid_bdev1", 00:18:39.812 "aliases": [ 00:18:39.812 "63367fea-4060-11ef-b2a4-e9dca065e82e" 00:18:39.812 ], 00:18:39.812 "product_name": "Raid Volume", 00:18:39.812 "block_size": 4128, 00:18:39.812 "num_blocks": 7936, 00:18:39.812 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:39.812 "md_size": 32, 00:18:39.812 "md_interleave": true, 00:18:39.812 "dif_type": 0, 00:18:39.812 "assigned_rate_limits": { 00:18:39.812 "rw_ios_per_sec": 0, 00:18:39.812 "rw_mbytes_per_sec": 0, 00:18:39.812 "r_mbytes_per_sec": 0, 00:18:39.812 "w_mbytes_per_sec": 0 00:18:39.812 }, 00:18:39.812 "claimed": false, 00:18:39.812 "zoned": false, 00:18:39.812 "supported_io_types": { 00:18:39.812 "read": true, 00:18:39.812 "write": true, 00:18:39.812 "unmap": false, 00:18:39.812 "flush": false, 00:18:39.812 "reset": true, 00:18:39.812 "nvme_admin": false, 00:18:39.812 "nvme_io": false, 00:18:39.812 "nvme_io_md": false, 00:18:39.812 "write_zeroes": true, 00:18:39.812 "zcopy": false, 00:18:39.812 "get_zone_info": false, 00:18:39.812 "zone_management": false, 00:18:39.812 "zone_append": false, 00:18:39.812 "compare": false, 00:18:39.812 "compare_and_write": false, 00:18:39.812 "abort": false, 00:18:39.812 "seek_hole": false, 00:18:39.812 "seek_data": false, 00:18:39.812 "copy": false, 00:18:39.812 "nvme_iov_md": false 00:18:39.812 }, 00:18:39.812 "memory_domains": [ 00:18:39.812 { 00:18:39.812 "dma_device_id": "system", 00:18:39.812 "dma_device_type": 1 00:18:39.812 }, 00:18:39.812 { 00:18:39.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.812 "dma_device_type": 2 00:18:39.812 }, 00:18:39.812 { 00:18:39.812 "dma_device_id": "system", 00:18:39.812 "dma_device_type": 1 00:18:39.812 }, 00:18:39.812 { 00:18:39.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.812 "dma_device_type": 2 00:18:39.812 } 00:18:39.812 ], 00:18:39.812 "driver_specific": { 00:18:39.812 "raid": { 00:18:39.812 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:39.812 "strip_size_kb": 0, 00:18:39.812 "state": "online", 00:18:39.812 "raid_level": "raid1", 00:18:39.812 "superblock": true, 00:18:39.812 "num_base_bdevs": 2, 00:18:39.812 "num_base_bdevs_discovered": 2, 00:18:39.812 "num_base_bdevs_operational": 2, 00:18:39.812 "base_bdevs_list": [ 00:18:39.812 { 00:18:39.812 "name": "pt1", 00:18:39.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.812 "is_configured": true, 00:18:39.812 "data_offset": 256, 00:18:39.812 "data_size": 7936 00:18:39.812 }, 00:18:39.812 { 00:18:39.812 "name": "pt2", 00:18:39.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.812 "is_configured": true, 00:18:39.812 "data_offset": 256, 00:18:39.812 "data_size": 7936 00:18:39.812 } 00:18:39.812 ] 00:18:39.812 } 00:18:39.812 } 00:18:39.812 }' 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:18:39.812 pt2' 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:39.812 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.070 "name": "pt1", 00:18:40.070 "aliases": [ 00:18:40.070 "00000000-0000-0000-0000-000000000001" 00:18:40.070 ], 00:18:40.070 "product_name": "passthru", 00:18:40.070 "block_size": 4128, 00:18:40.070 "num_blocks": 8192, 00:18:40.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.070 "md_size": 32, 00:18:40.070 "md_interleave": true, 00:18:40.070 "dif_type": 0, 00:18:40.070 "assigned_rate_limits": { 00:18:40.070 "rw_ios_per_sec": 0, 00:18:40.070 "rw_mbytes_per_sec": 0, 00:18:40.070 "r_mbytes_per_sec": 0, 00:18:40.070 "w_mbytes_per_sec": 0 00:18:40.070 }, 00:18:40.070 "claimed": true, 00:18:40.070 "claim_type": "exclusive_write", 00:18:40.070 "zoned": false, 00:18:40.070 "supported_io_types": { 00:18:40.070 "read": true, 00:18:40.070 "write": true, 00:18:40.070 "unmap": true, 00:18:40.070 "flush": true, 00:18:40.070 "reset": true, 00:18:40.070 "nvme_admin": false, 00:18:40.070 "nvme_io": false, 00:18:40.070 "nvme_io_md": false, 00:18:40.070 "write_zeroes": true, 00:18:40.070 "zcopy": true, 00:18:40.070 "get_zone_info": false, 00:18:40.070 "zone_management": false, 00:18:40.070 "zone_append": false, 00:18:40.070 "compare": false, 00:18:40.070 "compare_and_write": false, 00:18:40.070 "abort": true, 00:18:40.070 "seek_hole": false, 00:18:40.070 "seek_data": false, 00:18:40.070 "copy": true, 00:18:40.070 "nvme_iov_md": false 00:18:40.070 }, 00:18:40.070 "memory_domains": [ 00:18:40.070 { 00:18:40.070 "dma_device_id": "system", 00:18:40.070 "dma_device_type": 1 00:18:40.070 }, 00:18:40.070 { 00:18:40.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.070 "dma_device_type": 2 00:18:40.070 } 00:18:40.070 ], 00:18:40.070 "driver_specific": { 00:18:40.070 "passthru": { 00:18:40.070 "name": "pt1", 00:18:40.070 "base_bdev_name": "malloc1" 00:18:40.070 } 00:18:40.070 } 00:18:40.070 }' 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:40.070 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
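Note: editor's sketch, not part of the captured console output. The superblock portion traced above (bdev_raid.sh@464 through @482) shows that once raid_bdev1 was created with -s, re-registering the passthru bdevs is enough for examine to find the on-disk superblock and reassemble the array; condensed to the RPCs visible in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock    # shorthand for the paths used above
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> "configuring" (1 of 2 base bdevs found)
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> "online" (both base bdevs claimed)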
00:18:40.071 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.071 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:40.071 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:40.071 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:40.071 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.328 "name": "pt2", 00:18:40.328 "aliases": [ 00:18:40.328 "00000000-0000-0000-0000-000000000002" 00:18:40.328 ], 00:18:40.328 "product_name": "passthru", 00:18:40.328 "block_size": 4128, 00:18:40.328 "num_blocks": 8192, 00:18:40.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.328 "md_size": 32, 00:18:40.328 "md_interleave": true, 00:18:40.328 "dif_type": 0, 00:18:40.328 "assigned_rate_limits": { 00:18:40.328 "rw_ios_per_sec": 0, 00:18:40.328 "rw_mbytes_per_sec": 0, 00:18:40.328 "r_mbytes_per_sec": 0, 00:18:40.328 "w_mbytes_per_sec": 0 00:18:40.328 }, 00:18:40.328 "claimed": true, 00:18:40.328 "claim_type": "exclusive_write", 00:18:40.328 "zoned": false, 00:18:40.328 "supported_io_types": { 00:18:40.328 "read": true, 00:18:40.328 "write": true, 00:18:40.328 "unmap": true, 00:18:40.328 "flush": true, 00:18:40.328 "reset": true, 00:18:40.328 "nvme_admin": false, 00:18:40.328 "nvme_io": false, 00:18:40.328 "nvme_io_md": false, 00:18:40.328 "write_zeroes": true, 00:18:40.328 "zcopy": true, 00:18:40.328 "get_zone_info": false, 00:18:40.328 "zone_management": false, 00:18:40.328 "zone_append": false, 00:18:40.328 "compare": false, 00:18:40.328 "compare_and_write": false, 00:18:40.328 "abort": true, 00:18:40.328 "seek_hole": false, 00:18:40.328 "seek_data": false, 00:18:40.328 "copy": true, 00:18:40.328 "nvme_iov_md": false 00:18:40.328 }, 00:18:40.328 "memory_domains": [ 00:18:40.328 { 00:18:40.328 "dma_device_id": "system", 00:18:40.328 "dma_device_type": 1 00:18:40.328 }, 00:18:40.328 { 00:18:40.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.328 "dma_device_type": 2 00:18:40.328 } 00:18:40.328 ], 00:18:40.328 "driver_specific": { 00:18:40.328 "passthru": { 00:18:40.328 "name": "pt2", 00:18:40.328 "base_bdev_name": "malloc2" 00:18:40.328 } 00:18:40.328 } 00:18:40.328 }' 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.328 15:07:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.328 15:07:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:40.328 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.328 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.328 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:40.328 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:40.328 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:40.587 [2024-07-12 15:07:06.285228] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.587 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 63367fea-4060-11ef-b2a4-e9dca065e82e '!=' 63367fea-4060-11ef-b2a4-e9dca065e82e ']' 00:18:40.587 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:40.587 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:40.587 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:18:40.587 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:40.845 [2024-07-12 15:07:06.537214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.845 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.103 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.103 "name": "raid_bdev1", 00:18:41.103 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:41.103 "strip_size_kb": 0, 00:18:41.103 "state": "online", 
00:18:41.103 "raid_level": "raid1", 00:18:41.103 "superblock": true, 00:18:41.103 "num_base_bdevs": 2, 00:18:41.103 "num_base_bdevs_discovered": 1, 00:18:41.103 "num_base_bdevs_operational": 1, 00:18:41.103 "base_bdevs_list": [ 00:18:41.103 { 00:18:41.103 "name": null, 00:18:41.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.103 "is_configured": false, 00:18:41.103 "data_offset": 256, 00:18:41.103 "data_size": 7936 00:18:41.103 }, 00:18:41.103 { 00:18:41.103 "name": "pt2", 00:18:41.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.103 "is_configured": true, 00:18:41.103 "data_offset": 256, 00:18:41.103 "data_size": 7936 00:18:41.103 } 00:18:41.103 ] 00:18:41.103 }' 00:18:41.103 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.103 15:07:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.361 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:41.619 [2024-07-12 15:07:07.377239] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.619 [2024-07-12 15:07:07.377279] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.619 [2024-07-12 15:07:07.377318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.619 [2024-07-12 15:07:07.377330] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.619 [2024-07-12 15:07:07.377335] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a35180 name raid_bdev1, state offline 00:18:41.619 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:41.619 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.876 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:41.876 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:41.876 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:41.876 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:41.876 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:18:42.133 15:07:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:42.390 [2024-07-12 15:07:08.181310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.390 [2024-07-12 15:07:08.181367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.390 [2024-07-12 15:07:08.181379] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34f00 00:18:42.391 [2024-07-12 15:07:08.181387] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.391 [2024-07-12 15:07:08.181983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.391 [2024-07-12 15:07:08.182005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.391 [2024-07-12 15:07:08.182025] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.391 [2024-07-12 15:07:08.182038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.391 [2024-07-12 15:07:08.182056] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3fa557a35180 00:18:42.391 [2024-07-12 15:07:08.182060] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:42.391 [2024-07-12 15:07:08.182080] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3fa557a97e20 00:18:42.391 [2024-07-12 15:07:08.182093] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3fa557a35180 00:18:42.391 [2024-07-12 15:07:08.182097] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3fa557a35180 00:18:42.391 [2024-07-12 15:07:08.182117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.391 pt2 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.391 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.648 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.648 "name": "raid_bdev1", 00:18:42.648 
"uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:42.648 "strip_size_kb": 0, 00:18:42.648 "state": "online", 00:18:42.648 "raid_level": "raid1", 00:18:42.648 "superblock": true, 00:18:42.648 "num_base_bdevs": 2, 00:18:42.648 "num_base_bdevs_discovered": 1, 00:18:42.648 "num_base_bdevs_operational": 1, 00:18:42.648 "base_bdevs_list": [ 00:18:42.648 { 00:18:42.648 "name": null, 00:18:42.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.648 "is_configured": false, 00:18:42.648 "data_offset": 256, 00:18:42.648 "data_size": 7936 00:18:42.648 }, 00:18:42.648 { 00:18:42.648 "name": "pt2", 00:18:42.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.648 "is_configured": true, 00:18:42.648 "data_offset": 256, 00:18:42.648 "data_size": 7936 00:18:42.648 } 00:18:42.648 ] 00:18:42.648 }' 00:18:42.648 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.648 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.211 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:43.211 [2024-07-12 15:07:08.969334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.211 [2024-07-12 15:07:08.969362] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.211 [2024-07-12 15:07:08.969385] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.211 [2024-07-12 15:07:08.969397] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.211 [2024-07-12 15:07:08.969401] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a35180 name raid_bdev1, state offline 00:18:43.211 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.211 15:07:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:43.469 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:43.469 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:43.469 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:43.469 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.728 [2024-07-12 15:07:09.457364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.728 [2024-07-12 15:07:09.457411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.728 [2024-07-12 15:07:09.457423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3fa557a34c80 00:18:43.728 [2024-07-12 15:07:09.457431] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.728 [2024-07-12 15:07:09.458015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.728 [2024-07-12 15:07:09.458039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.728 [2024-07-12 
15:07:09.458059] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.728 [2024-07-12 15:07:09.458071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.728 [2024-07-12 15:07:09.458099] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:43.728 [2024-07-12 15:07:09.458104] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.728 [2024-07-12 15:07:09.458110] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a34780 name raid_bdev1, state configuring 00:18:43.728 [2024-07-12 15:07:09.458121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.728 [2024-07-12 15:07:09.458152] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3fa557a34780 00:18:43.728 [2024-07-12 15:07:09.458157] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:43.728 [2024-07-12 15:07:09.458177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3fa557a97e20 00:18:43.728 [2024-07-12 15:07:09.458189] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3fa557a34780 00:18:43.728 [2024-07-12 15:07:09.458193] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3fa557a34780 00:18:43.728 [2024-07-12 15:07:09.458203] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.728 pt1 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.728 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.986 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.986 "name": "raid_bdev1", 00:18:43.986 "uuid": "63367fea-4060-11ef-b2a4-e9dca065e82e", 00:18:43.986 "strip_size_kb": 0, 00:18:43.986 "state": 
"online", 00:18:43.986 "raid_level": "raid1", 00:18:43.986 "superblock": true, 00:18:43.986 "num_base_bdevs": 2, 00:18:43.986 "num_base_bdevs_discovered": 1, 00:18:43.987 "num_base_bdevs_operational": 1, 00:18:43.987 "base_bdevs_list": [ 00:18:43.987 { 00:18:43.987 "name": null, 00:18:43.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.987 "is_configured": false, 00:18:43.987 "data_offset": 256, 00:18:43.987 "data_size": 7936 00:18:43.987 }, 00:18:43.987 { 00:18:43.987 "name": "pt2", 00:18:43.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.987 "is_configured": true, 00:18:43.987 "data_offset": 256, 00:18:43.987 "data_size": 7936 00:18:43.987 } 00:18:43.987 ] 00:18:43.987 }' 00:18:43.987 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.987 15:07:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.551 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:44.551 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.551 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:44.551 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.551 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:44.809 [2024-07-12 15:07:10.589450] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 63367fea-4060-11ef-b2a4-e9dca065e82e '!=' 63367fea-4060-11ef-b2a4-e9dca065e82e ']' 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67161 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67161 ']' 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67161 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67161 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:44.809 killing process with pid 67161 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67161' 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 67161 00:18:44.809 [2024-07-12 15:07:10.619578] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.809 
[2024-07-12 15:07:10.619604] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.809 [2024-07-12 15:07:10.619616] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.809 [2024-07-12 15:07:10.619620] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3fa557a34780 name raid_bdev1, state offline 00:18:44.809 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 67161 00:18:44.809 [2024-07-12 15:07:10.631227] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.067 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:18:45.067 00:18:45.067 real 0m13.379s 00:18:45.067 user 0m23.927s 00:18:45.067 sys 0m2.051s 00:18:45.067 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.067 15:07:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.067 ************************************ 00:18:45.067 END TEST raid_superblock_test_md_interleaved 00:18:45.067 ************************************ 00:18:45.067 15:07:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:45.067 15:07:10 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:45.067 15:07:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:45.067 15:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.067 15:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.067 ************************************ 00:18:45.067 START TEST raid_rebuild_test_sb_md_interleaved 00:18:45.067 ************************************ 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:45.067 15:07:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67552 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:45.067 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67552 /var/tmp/spdk-raid.sock 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67552 ']' 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.068 15:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.068 [2024-07-12 15:07:10.862287] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:18:45.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:45.068 Zero copy mechanism will not be used. 00:18:45.068 [2024-07-12 15:07:10.862507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:45.634 EAL: TSC is not safe to use in SMP mode 00:18:45.634 EAL: TSC is not invariant 00:18:45.634 [2024-07-12 15:07:11.398810] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.892 [2024-07-12 15:07:11.484835] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:45.892 [2024-07-12 15:07:11.486912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.892 [2024-07-12 15:07:11.487663] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.892 [2024-07-12 15:07:11.487681] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.149 15:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.149 15:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:46.149 15:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:46.149 15:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:46.406 BaseBdev1_malloc 00:18:46.406 15:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:46.663 [2024-07-12 15:07:12.435533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:46.663 [2024-07-12 15:07:12.435591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.663 [2024-07-12 15:07:12.436191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e34780 00:18:46.663 [2024-07-12 15:07:12.436223] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.663 [2024-07-12 15:07:12.436884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.663 [2024-07-12 15:07:12.436913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:46.663 BaseBdev1 00:18:46.663 15:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:46.663 15:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:46.920 BaseBdev2_malloc 00:18:46.920 15:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.177 [2024-07-12 15:07:12.911593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.177 [2024-07-12 15:07:12.911666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.177 [2024-07-12 15:07:12.911710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e34c80 00:18:47.177 [2024-07-12 15:07:12.911720] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.177 [2024-07-12 15:07:12.912331] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.177 [2024-07-12 15:07:12.912357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.177 BaseBdev2 00:18:47.177 15:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:47.438 spare_malloc 
00:18:47.438 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:47.695 spare_delay 00:18:47.695 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:47.952 [2024-07-12 15:07:13.647631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.952 [2024-07-12 15:07:13.647701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.952 [2024-07-12 15:07:13.647743] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e35400 00:18:47.952 [2024-07-12 15:07:13.647753] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.952 [2024-07-12 15:07:13.648347] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.953 [2024-07-12 15:07:13.648370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.953 spare 00:18:47.953 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:48.210 [2024-07-12 15:07:13.891662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.210 [2024-07-12 15:07:13.892253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.210 [2024-07-12 15:07:13.892325] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x246607e35680 00:18:48.210 [2024-07-12 15:07:13.892332] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:48.210 [2024-07-12 15:07:13.892364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97e20 00:18:48.210 [2024-07-12 15:07:13.892379] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x246607e35680 00:18:48.210 [2024-07-12 15:07:13.892382] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x246607e35680 00:18:48.210 [2024-07-12 15:07:13.892396] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.210 15:07:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.210 15:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.469 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.469 "name": "raid_bdev1", 00:18:48.469 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:48.469 "strip_size_kb": 0, 00:18:48.469 "state": "online", 00:18:48.469 "raid_level": "raid1", 00:18:48.469 "superblock": true, 00:18:48.469 "num_base_bdevs": 2, 00:18:48.469 "num_base_bdevs_discovered": 2, 00:18:48.469 "num_base_bdevs_operational": 2, 00:18:48.469 "base_bdevs_list": [ 00:18:48.469 { 00:18:48.469 "name": "BaseBdev1", 00:18:48.469 "uuid": "d8f6c046-fba9-c857-9bfa-cfb2cf4519db", 00:18:48.469 "is_configured": true, 00:18:48.469 "data_offset": 256, 00:18:48.469 "data_size": 7936 00:18:48.469 }, 00:18:48.469 { 00:18:48.469 "name": "BaseBdev2", 00:18:48.469 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:48.469 "is_configured": true, 00:18:48.469 "data_offset": 256, 00:18:48.469 "data_size": 7936 00:18:48.469 } 00:18:48.469 ] 00:18:48.469 }' 00:18:48.469 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.469 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.748 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:48.748 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:49.007 [2024-07-12 15:07:14.659712] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.007 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:18:49.007 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.007 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:49.266 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:18:49.266 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:49.266 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:18:49.266 15:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:49.524 [2024-07-12 15:07:15.179708] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.524 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.782 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.782 "name": "raid_bdev1", 00:18:49.782 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:49.782 "strip_size_kb": 0, 00:18:49.782 "state": "online", 00:18:49.782 "raid_level": "raid1", 00:18:49.782 "superblock": true, 00:18:49.782 "num_base_bdevs": 2, 00:18:49.782 "num_base_bdevs_discovered": 1, 00:18:49.782 "num_base_bdevs_operational": 1, 00:18:49.782 "base_bdevs_list": [ 00:18:49.782 { 00:18:49.782 "name": null, 00:18:49.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.782 "is_configured": false, 00:18:49.782 "data_offset": 256, 00:18:49.782 "data_size": 7936 00:18:49.782 }, 00:18:49.782 { 00:18:49.782 "name": "BaseBdev2", 00:18:49.782 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:49.782 "is_configured": true, 00:18:49.782 "data_offset": 256, 00:18:49.782 "data_size": 7936 00:18:49.782 } 00:18:49.782 ] 00:18:49.782 }' 00:18:49.782 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.782 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.040 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.298 [2024-07-12 15:07:15.971751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.298 [2024-07-12 15:07:15.972009] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97ec0 00:18:50.298 [2024-07-12 15:07:15.972859] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.298 15:07:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.260 15:07:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.518 "name": "raid_bdev1", 00:18:51.518 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:51.518 "strip_size_kb": 0, 00:18:51.518 "state": "online", 00:18:51.518 "raid_level": "raid1", 00:18:51.518 "superblock": true, 00:18:51.518 "num_base_bdevs": 2, 00:18:51.518 "num_base_bdevs_discovered": 2, 00:18:51.518 "num_base_bdevs_operational": 2, 00:18:51.518 "process": { 00:18:51.518 "type": "rebuild", 00:18:51.518 "target": "spare", 00:18:51.518 "progress": { 00:18:51.518 "blocks": 3072, 00:18:51.518 "percent": 38 00:18:51.518 } 00:18:51.518 }, 00:18:51.518 "base_bdevs_list": [ 00:18:51.518 { 00:18:51.518 "name": "spare", 00:18:51.518 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:51.518 "is_configured": true, 00:18:51.518 "data_offset": 256, 00:18:51.518 "data_size": 7936 00:18:51.518 }, 00:18:51.518 { 00:18:51.518 "name": "BaseBdev2", 00:18:51.518 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:51.518 "is_configured": true, 00:18:51.518 "data_offset": 256, 00:18:51.518 "data_size": 7936 00:18:51.518 } 00:18:51.518 ] 00:18:51.518 }' 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.518 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:51.777 [2024-07-12 15:07:17.508036] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.777 [2024-07-12 15:07:17.580044] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:51.777 [2024-07-12 15:07:17.580127] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.777 [2024-07-12 15:07:17.580134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.777 [2024-07-12 15:07:17.580139] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.777 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.343 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.343 "name": "raid_bdev1", 00:18:52.343 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:52.343 "strip_size_kb": 0, 00:18:52.343 "state": "online", 00:18:52.343 "raid_level": "raid1", 00:18:52.343 "superblock": true, 00:18:52.343 "num_base_bdevs": 2, 00:18:52.343 "num_base_bdevs_discovered": 1, 00:18:52.343 "num_base_bdevs_operational": 1, 00:18:52.343 "base_bdevs_list": [ 00:18:52.343 { 00:18:52.343 "name": null, 00:18:52.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.343 "is_configured": false, 00:18:52.343 "data_offset": 256, 00:18:52.343 "data_size": 7936 00:18:52.343 }, 00:18:52.343 { 00:18:52.343 "name": "BaseBdev2", 00:18:52.343 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:52.343 "is_configured": true, 00:18:52.343 "data_offset": 256, 00:18:52.343 "data_size": 7936 00:18:52.343 } 00:18:52.343 ] 00:18:52.343 }' 00:18:52.343 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.343 15:07:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.601 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.859 "name": "raid_bdev1", 00:18:52.859 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:52.859 "strip_size_kb": 0, 00:18:52.859 "state": "online", 00:18:52.859 "raid_level": "raid1", 00:18:52.859 "superblock": true, 00:18:52.859 "num_base_bdevs": 2, 00:18:52.859 "num_base_bdevs_discovered": 1, 00:18:52.859 "num_base_bdevs_operational": 1, 00:18:52.859 "base_bdevs_list": [ 00:18:52.859 { 00:18:52.859 "name": null, 00:18:52.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.859 "is_configured": false, 00:18:52.859 "data_offset": 256, 00:18:52.859 "data_size": 7936 00:18:52.859 }, 00:18:52.859 { 00:18:52.859 "name": "BaseBdev2", 00:18:52.859 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:52.859 "is_configured": true, 00:18:52.859 "data_offset": 256, 00:18:52.859 "data_size": 7936 00:18:52.859 } 00:18:52.859 ] 00:18:52.859 }' 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:52.859 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.116 [2024-07-12 15:07:18.720218] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.116 [2024-07-12 15:07:18.720479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97e20 00:18:53.116 [2024-07-12 15:07:18.721283] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.116 15:07:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.048 15:07:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.306 "name": "raid_bdev1", 00:18:54.306 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:54.306 "strip_size_kb": 0, 00:18:54.306 "state": "online", 00:18:54.306 "raid_level": "raid1", 00:18:54.306 "superblock": true, 00:18:54.306 "num_base_bdevs": 2, 00:18:54.306 "num_base_bdevs_discovered": 2, 00:18:54.306 
"num_base_bdevs_operational": 2, 00:18:54.306 "process": { 00:18:54.306 "type": "rebuild", 00:18:54.306 "target": "spare", 00:18:54.306 "progress": { 00:18:54.306 "blocks": 3328, 00:18:54.306 "percent": 41 00:18:54.306 } 00:18:54.306 }, 00:18:54.306 "base_bdevs_list": [ 00:18:54.306 { 00:18:54.306 "name": "spare", 00:18:54.306 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:54.306 "is_configured": true, 00:18:54.306 "data_offset": 256, 00:18:54.306 "data_size": 7936 00:18:54.306 }, 00:18:54.306 { 00:18:54.306 "name": "BaseBdev2", 00:18:54.306 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:54.306 "is_configured": true, 00:18:54.306 "data_offset": 256, 00:18:54.306 "data_size": 7936 00:18:54.306 } 00:18:54.306 ] 00:18:54.306 }' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:18:54.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=723 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:54.306 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:54.307 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.307 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.629 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.629 "name": "raid_bdev1", 00:18:54.629 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:54.629 "strip_size_kb": 0, 00:18:54.629 "state": "online", 00:18:54.629 "raid_level": "raid1", 00:18:54.629 "superblock": true, 00:18:54.629 
"num_base_bdevs": 2, 00:18:54.629 "num_base_bdevs_discovered": 2, 00:18:54.629 "num_base_bdevs_operational": 2, 00:18:54.629 "process": { 00:18:54.629 "type": "rebuild", 00:18:54.629 "target": "spare", 00:18:54.629 "progress": { 00:18:54.629 "blocks": 4096, 00:18:54.629 "percent": 51 00:18:54.629 } 00:18:54.629 }, 00:18:54.629 "base_bdevs_list": [ 00:18:54.629 { 00:18:54.629 "name": "spare", 00:18:54.629 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:54.629 "is_configured": true, 00:18:54.629 "data_offset": 256, 00:18:54.629 "data_size": 7936 00:18:54.629 }, 00:18:54.629 { 00:18:54.629 "name": "BaseBdev2", 00:18:54.629 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:54.629 "is_configured": true, 00:18:54.629 "data_offset": 256, 00:18:54.629 "data_size": 7936 00:18:54.629 } 00:18:54.629 ] 00:18:54.629 }' 00:18:54.630 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:54.630 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.630 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:54.630 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.630 15:07:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.002 "name": "raid_bdev1", 00:18:56.002 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:56.002 "strip_size_kb": 0, 00:18:56.002 "state": "online", 00:18:56.002 "raid_level": "raid1", 00:18:56.002 "superblock": true, 00:18:56.002 "num_base_bdevs": 2, 00:18:56.002 "num_base_bdevs_discovered": 2, 00:18:56.002 "num_base_bdevs_operational": 2, 00:18:56.002 "process": { 00:18:56.002 "type": "rebuild", 00:18:56.002 "target": "spare", 00:18:56.002 "progress": { 00:18:56.002 "blocks": 7424, 00:18:56.002 "percent": 93 00:18:56.002 } 00:18:56.002 }, 00:18:56.002 "base_bdevs_list": [ 00:18:56.002 { 00:18:56.002 "name": "spare", 00:18:56.002 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:56.002 "is_configured": true, 00:18:56.002 "data_offset": 256, 00:18:56.002 "data_size": 7936 00:18:56.002 }, 00:18:56.002 { 00:18:56.002 "name": "BaseBdev2", 00:18:56.002 "uuid": 
"c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:56.002 "is_configured": true, 00:18:56.002 "data_offset": 256, 00:18:56.002 "data_size": 7936 00:18:56.002 } 00:18:56.002 ] 00:18:56.002 }' 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.002 15:07:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:56.002 [2024-07-12 15:07:21.834637] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:56.002 [2024-07-12 15:07:21.834692] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:56.002 [2024-07-12 15:07:21.834752] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.934 15:07:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.496 "name": "raid_bdev1", 00:18:57.496 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:57.496 "strip_size_kb": 0, 00:18:57.496 "state": "online", 00:18:57.496 "raid_level": "raid1", 00:18:57.496 "superblock": true, 00:18:57.496 "num_base_bdevs": 2, 00:18:57.496 "num_base_bdevs_discovered": 2, 00:18:57.496 "num_base_bdevs_operational": 2, 00:18:57.496 "base_bdevs_list": [ 00:18:57.496 { 00:18:57.496 "name": "spare", 00:18:57.496 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:57.496 "is_configured": true, 00:18:57.496 "data_offset": 256, 00:18:57.496 "data_size": 7936 00:18:57.496 }, 00:18:57.496 { 00:18:57.496 "name": "BaseBdev2", 00:18:57.496 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:57.496 "is_configured": true, 00:18:57.496 "data_offset": 256, 00:18:57.496 "data_size": 7936 00:18:57.496 } 00:18:57.496 ] 00:18:57.496 }' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:57.496 15:07:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.496 "name": "raid_bdev1", 00:18:57.496 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:57.496 "strip_size_kb": 0, 00:18:57.496 "state": "online", 00:18:57.496 "raid_level": "raid1", 00:18:57.496 "superblock": true, 00:18:57.496 "num_base_bdevs": 2, 00:18:57.496 "num_base_bdevs_discovered": 2, 00:18:57.496 "num_base_bdevs_operational": 2, 00:18:57.496 "base_bdevs_list": [ 00:18:57.496 { 00:18:57.496 "name": "spare", 00:18:57.496 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:57.496 "is_configured": true, 00:18:57.496 "data_offset": 256, 00:18:57.496 "data_size": 7936 00:18:57.496 }, 00:18:57.496 { 00:18:57.496 "name": "BaseBdev2", 00:18:57.496 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:57.496 "is_configured": true, 00:18:57.496 "data_offset": 256, 00:18:57.496 "data_size": 7936 00:18:57.496 } 00:18:57.496 ] 00:18:57.496 }' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.496 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.754 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.754 "name": "raid_bdev1", 00:18:57.754 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:57.754 "strip_size_kb": 0, 00:18:57.754 "state": "online", 00:18:57.754 "raid_level": "raid1", 00:18:57.754 "superblock": true, 00:18:57.754 "num_base_bdevs": 2, 00:18:57.754 "num_base_bdevs_discovered": 2, 00:18:57.754 "num_base_bdevs_operational": 2, 00:18:57.754 "base_bdevs_list": [ 00:18:57.754 { 00:18:57.754 "name": "spare", 00:18:57.754 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:57.754 "is_configured": true, 00:18:57.754 "data_offset": 256, 00:18:57.754 "data_size": 7936 00:18:57.754 }, 00:18:57.754 { 00:18:57.754 "name": "BaseBdev2", 00:18:57.754 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:57.754 "is_configured": true, 00:18:57.754 "data_offset": 256, 00:18:57.754 "data_size": 7936 00:18:57.754 } 00:18:57.754 ] 00:18:57.754 }' 00:18:57.754 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.754 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.319 15:07:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:58.576 [2024-07-12 15:07:24.154878] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.576 [2024-07-12 15:07:24.154906] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.576 [2024-07-12 15:07:24.154946] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.576 [2024-07-12 15:07:24.154961] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.576 [2024-07-12 15:07:24.154966] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x246607e35680 name raid_bdev1, state offline 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:18:58.576 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:58.895 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:59.171 [2024-07-12 15:07:24.882915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.171 [2024-07-12 15:07:24.882975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.171 [2024-07-12 15:07:24.883004] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e35400 00:18:59.171 [2024-07-12 15:07:24.883013] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.171 [2024-07-12 15:07:24.883611] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.171 [2024-07-12 15:07:24.883634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.171 [2024-07-12 15:07:24.883655] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:59.171 [2024-07-12 15:07:24.883668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.171 [2024-07-12 15:07:24.883691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.171 spare 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.171 15:07:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.171 [2024-07-12 15:07:24.983669] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x246607e35680 00:18:59.171 [2024-07-12 15:07:24.983694] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:59.171 [2024-07-12 15:07:24.983736] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97e20 00:18:59.171 [2024-07-12 15:07:24.983767] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x246607e35680 00:18:59.171 [2024-07-12 15:07:24.983770] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x246607e35680 00:18:59.171 [2024-07-12 15:07:24.983787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.428 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.428 "name": "raid_bdev1", 00:18:59.428 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:18:59.429 "strip_size_kb": 0, 00:18:59.429 "state": "online", 00:18:59.429 "raid_level": "raid1", 00:18:59.429 "superblock": true, 00:18:59.429 "num_base_bdevs": 2, 00:18:59.429 "num_base_bdevs_discovered": 2, 00:18:59.429 "num_base_bdevs_operational": 2, 00:18:59.429 "base_bdevs_list": [ 00:18:59.429 { 00:18:59.429 "name": "spare", 00:18:59.429 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:18:59.429 "is_configured": true, 00:18:59.429 "data_offset": 256, 00:18:59.429 "data_size": 7936 00:18:59.429 }, 00:18:59.429 { 00:18:59.429 "name": "BaseBdev2", 00:18:59.429 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:18:59.429 "is_configured": true, 00:18:59.429 "data_offset": 256, 00:18:59.429 "data_size": 7936 00:18:59.429 } 00:18:59.429 ] 00:18:59.429 }' 00:18:59.429 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.429 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.686 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.251 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.251 "name": "raid_bdev1", 00:19:00.251 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:00.251 "strip_size_kb": 0, 00:19:00.251 "state": "online", 00:19:00.251 "raid_level": "raid1", 00:19:00.251 "superblock": true, 00:19:00.251 "num_base_bdevs": 2, 00:19:00.251 "num_base_bdevs_discovered": 2, 00:19:00.251 "num_base_bdevs_operational": 2, 00:19:00.251 "base_bdevs_list": [ 00:19:00.251 { 00:19:00.251 "name": "spare", 00:19:00.251 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:19:00.251 "is_configured": true, 00:19:00.251 "data_offset": 256, 00:19:00.251 "data_size": 7936 00:19:00.251 }, 00:19:00.251 { 00:19:00.251 "name": "BaseBdev2", 00:19:00.251 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:00.251 "is_configured": true, 00:19:00.251 "data_offset": 256, 00:19:00.251 "data_size": 7936 00:19:00.251 } 00:19:00.251 ] 00:19:00.251 }' 00:19:00.251 15:07:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:00.252 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:00.252 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:00.252 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:00.252 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.252 15:07:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:00.252 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.252 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:00.819 [2024-07-12 15:07:26.358987] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.819 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.077 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.077 "name": "raid_bdev1", 00:19:01.077 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:01.077 "strip_size_kb": 0, 00:19:01.077 "state": "online", 00:19:01.077 "raid_level": "raid1", 00:19:01.077 "superblock": true, 00:19:01.077 "num_base_bdevs": 2, 00:19:01.077 "num_base_bdevs_discovered": 1, 00:19:01.077 "num_base_bdevs_operational": 1, 00:19:01.077 "base_bdevs_list": [ 00:19:01.077 { 00:19:01.077 "name": null, 00:19:01.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.077 "is_configured": false, 00:19:01.077 "data_offset": 256, 00:19:01.077 "data_size": 7936 00:19:01.077 }, 
00:19:01.077 { 00:19:01.077 "name": "BaseBdev2", 00:19:01.077 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:01.077 "is_configured": true, 00:19:01.077 "data_offset": 256, 00:19:01.077 "data_size": 7936 00:19:01.077 } 00:19:01.077 ] 00:19:01.077 }' 00:19:01.077 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.077 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.335 15:07:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.594 [2024-07-12 15:07:27.171027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.594 [2024-07-12 15:07:27.171102] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.594 [2024-07-12 15:07:27.171107] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:01.594 [2024-07-12 15:07:27.171142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.594 [2024-07-12 15:07:27.171312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97ec0 00:19:01.594 [2024-07-12 15:07:27.171863] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.594 15:07:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.529 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.787 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.787 "name": "raid_bdev1", 00:19:02.787 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:02.787 "strip_size_kb": 0, 00:19:02.787 "state": "online", 00:19:02.787 "raid_level": "raid1", 00:19:02.787 "superblock": true, 00:19:02.787 "num_base_bdevs": 2, 00:19:02.787 "num_base_bdevs_discovered": 2, 00:19:02.787 "num_base_bdevs_operational": 2, 00:19:02.787 "process": { 00:19:02.787 "type": "rebuild", 00:19:02.787 "target": "spare", 00:19:02.787 "progress": { 00:19:02.787 "blocks": 3584, 00:19:02.787 "percent": 45 00:19:02.787 } 00:19:02.787 }, 00:19:02.787 "base_bdevs_list": [ 00:19:02.787 { 00:19:02.787 "name": "spare", 00:19:02.787 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:19:02.787 "is_configured": true, 00:19:02.787 "data_offset": 256, 00:19:02.787 "data_size": 7936 00:19:02.787 }, 00:19:02.787 { 
00:19:02.787 "name": "BaseBdev2", 00:19:02.787 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:02.787 "is_configured": true, 00:19:02.787 "data_offset": 256, 00:19:02.787 "data_size": 7936 00:19:02.787 } 00:19:02.787 ] 00:19:02.787 }' 00:19:02.787 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:03.045 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.045 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:03.045 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.045 15:07:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:03.304 [2024-07-12 15:07:28.891991] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.304 [2024-07-12 15:07:28.979927] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:19:03.304 [2024-07-12 15:07:28.979981] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.304 [2024-07-12 15:07:28.979987] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.304 [2024-07-12 15:07:28.979992] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.304 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.562 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.562 "name": "raid_bdev1", 00:19:03.562 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:03.562 "strip_size_kb": 0, 00:19:03.562 "state": "online", 00:19:03.562 "raid_level": "raid1", 00:19:03.562 "superblock": true, 
00:19:03.562 "num_base_bdevs": 2, 00:19:03.562 "num_base_bdevs_discovered": 1, 00:19:03.562 "num_base_bdevs_operational": 1, 00:19:03.562 "base_bdevs_list": [ 00:19:03.562 { 00:19:03.562 "name": null, 00:19:03.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.562 "is_configured": false, 00:19:03.562 "data_offset": 256, 00:19:03.562 "data_size": 7936 00:19:03.562 }, 00:19:03.562 { 00:19:03.562 "name": "BaseBdev2", 00:19:03.562 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:03.562 "is_configured": true, 00:19:03.562 "data_offset": 256, 00:19:03.562 "data_size": 7936 00:19:03.562 } 00:19:03.562 ] 00:19:03.562 }' 00:19:03.562 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.562 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.821 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:04.079 [2024-07-12 15:07:29.892122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:04.079 [2024-07-12 15:07:29.892183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.080 [2024-07-12 15:07:29.892211] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e35400 00:19:04.080 [2024-07-12 15:07:29.892221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.080 [2024-07-12 15:07:29.892285] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.080 [2024-07-12 15:07:29.892294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:04.080 [2024-07-12 15:07:29.892314] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:04.080 [2024-07-12 15:07:29.892319] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:04.080 [2024-07-12 15:07:29.892323] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:04.080 [2024-07-12 15:07:29.892335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.080 [2024-07-12 15:07:29.892506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x246607e97e20 00:19:04.080 [2024-07-12 15:07:29.893052] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.080 spare 00:19:04.338 15:07:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.273 15:07:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.531 "name": "raid_bdev1", 00:19:05.531 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:05.531 "strip_size_kb": 0, 00:19:05.531 "state": "online", 00:19:05.531 "raid_level": "raid1", 00:19:05.531 "superblock": true, 00:19:05.531 "num_base_bdevs": 2, 00:19:05.531 "num_base_bdevs_discovered": 2, 00:19:05.531 "num_base_bdevs_operational": 2, 00:19:05.531 "process": { 00:19:05.531 "type": "rebuild", 00:19:05.531 "target": "spare", 00:19:05.531 "progress": { 00:19:05.531 "blocks": 3328, 00:19:05.531 "percent": 41 00:19:05.531 } 00:19:05.531 }, 00:19:05.531 "base_bdevs_list": [ 00:19:05.531 { 00:19:05.531 "name": "spare", 00:19:05.531 "uuid": "850c43ea-4e58-a854-9773-32772700469c", 00:19:05.531 "is_configured": true, 00:19:05.531 "data_offset": 256, 00:19:05.531 "data_size": 7936 00:19:05.531 }, 00:19:05.531 { 00:19:05.531 "name": "BaseBdev2", 00:19:05.531 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:05.531 "is_configured": true, 00:19:05.531 "data_offset": 256, 00:19:05.531 "data_size": 7936 00:19:05.531 } 00:19:05.531 ] 00:19:05.531 }' 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.531 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:05.789 [2024-07-12 15:07:31.496306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.789 [2024-07-12 15:07:31.500225] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:19:05.789 [2024-07-12 15:07:31.500265] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.789 [2024-07-12 15:07:31.500271] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.789 [2024-07-12 15:07:31.500275] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:19:05.789 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.789 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:05.789 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.790 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.047 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.047 "name": "raid_bdev1", 00:19:06.047 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:06.047 "strip_size_kb": 0, 00:19:06.047 "state": "online", 00:19:06.047 "raid_level": "raid1", 00:19:06.047 "superblock": true, 00:19:06.047 "num_base_bdevs": 2, 00:19:06.047 "num_base_bdevs_discovered": 1, 00:19:06.047 "num_base_bdevs_operational": 1, 00:19:06.047 "base_bdevs_list": [ 00:19:06.047 { 00:19:06.047 "name": null, 00:19:06.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.047 "is_configured": false, 00:19:06.047 "data_offset": 256, 00:19:06.047 "data_size": 7936 00:19:06.047 }, 00:19:06.047 { 00:19:06.047 "name": "BaseBdev2", 00:19:06.047 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:06.047 "is_configured": true, 00:19:06.047 "data_offset": 256, 00:19:06.047 "data_size": 7936 00:19:06.047 } 00:19:06.047 ] 00:19:06.047 }' 00:19:06.047 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.047 15:07:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.649 "name": "raid_bdev1", 00:19:06.649 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:06.649 "strip_size_kb": 0, 00:19:06.649 "state": "online", 00:19:06.649 "raid_level": "raid1", 00:19:06.649 "superblock": true, 00:19:06.649 "num_base_bdevs": 2, 00:19:06.649 "num_base_bdevs_discovered": 1, 00:19:06.649 "num_base_bdevs_operational": 1, 00:19:06.649 "base_bdevs_list": [ 00:19:06.649 { 00:19:06.649 "name": null, 00:19:06.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.649 "is_configured": false, 00:19:06.649 "data_offset": 256, 00:19:06.649 "data_size": 7936 00:19:06.649 }, 00:19:06.649 { 00:19:06.649 "name": "BaseBdev2", 00:19:06.649 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:06.649 "is_configured": true, 00:19:06.649 "data_offset": 256, 00:19:06.649 "data_size": 7936 00:19:06.649 } 00:19:06.649 ] 00:19:06.649 }' 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:06.649 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:06.908 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.182 [2024-07-12 15:07:32.932372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:07.182 [2024-07-12 15:07:32.932431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.182 [2024-07-12 15:07:32.932460] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x246607e34780 00:19:07.182 [2024-07-12 15:07:32.932482] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.182 [2024-07-12 15:07:32.932542] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.182 [2024-07-12 15:07:32.932552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.182 [2024-07-12 15:07:32.932571] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:07.182 [2024-07-12 15:07:32.932576] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:07.182 [2024-07-12 15:07:32.932580] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:07.182 BaseBdev1 00:19:07.182 15:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.553 "name": "raid_bdev1", 00:19:08.553 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:08.553 "strip_size_kb": 0, 00:19:08.553 "state": "online", 00:19:08.553 "raid_level": "raid1", 00:19:08.553 "superblock": true, 00:19:08.553 "num_base_bdevs": 2, 00:19:08.553 "num_base_bdevs_discovered": 1, 00:19:08.553 "num_base_bdevs_operational": 1, 00:19:08.553 "base_bdevs_list": [ 00:19:08.553 { 00:19:08.553 "name": null, 00:19:08.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.553 "is_configured": false, 00:19:08.553 "data_offset": 256, 00:19:08.553 "data_size": 7936 00:19:08.553 }, 00:19:08.553 { 00:19:08.553 "name": "BaseBdev2", 00:19:08.553 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:08.553 "is_configured": true, 00:19:08.553 "data_offset": 256, 00:19:08.553 "data_size": 7936 00:19:08.553 } 00:19:08.553 ] 00:19:08.553 }' 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.553 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.809 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.809 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:08.810 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:08.810 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:08.810 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:08.810 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.810 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.373 "name": "raid_bdev1", 00:19:09.373 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:09.373 "strip_size_kb": 0, 00:19:09.373 "state": "online", 00:19:09.373 "raid_level": "raid1", 00:19:09.373 "superblock": true, 00:19:09.373 "num_base_bdevs": 2, 00:19:09.373 "num_base_bdevs_discovered": 1, 00:19:09.373 "num_base_bdevs_operational": 1, 00:19:09.373 "base_bdevs_list": [ 00:19:09.373 { 00:19:09.373 "name": null, 00:19:09.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.373 "is_configured": false, 00:19:09.373 "data_offset": 256, 00:19:09.373 "data_size": 7936 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "name": "BaseBdev2", 00:19:09.373 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:09.373 "is_configured": true, 00:19:09.373 "data_offset": 256, 00:19:09.373 "data_size": 7936 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }' 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:09.373 15:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.373 [2024-07-12 15:07:35.176498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.373 [2024-07-12 15:07:35.176561] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:09.373 [2024-07-12 15:07:35.176566] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:09.373 request: 00:19:09.373 { 00:19:09.373 "base_bdev": "BaseBdev1", 00:19:09.373 "raid_bdev": "raid_bdev1", 00:19:09.373 "method": "bdev_raid_add_base_bdev", 00:19:09.373 "req_id": 1 00:19:09.373 } 00:19:09.373 Got JSON-RPC error response 00:19:09.373 response: 00:19:09.373 { 00:19:09.373 "code": -22, 00:19:09.373 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:09.373 } 00:19:09.373 15:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:19:09.373 15:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:09.373 15:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:09.373 15:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:09.373 15:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.747 "name": "raid_bdev1", 00:19:10.747 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:10.747 "strip_size_kb": 0, 00:19:10.747 "state": "online", 00:19:10.747 "raid_level": "raid1", 00:19:10.747 "superblock": true, 00:19:10.747 "num_base_bdevs": 2, 00:19:10.747 "num_base_bdevs_discovered": 1, 00:19:10.747 "num_base_bdevs_operational": 1, 00:19:10.747 "base_bdevs_list": [ 00:19:10.747 { 00:19:10.747 "name": null, 00:19:10.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.747 "is_configured": false, 00:19:10.747 "data_offset": 256, 00:19:10.747 "data_size": 7936 00:19:10.747 }, 00:19:10.747 { 00:19:10.747 "name": "BaseBdev2", 00:19:10.747 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:10.747 "is_configured": true, 00:19:10.747 "data_offset": 256, 00:19:10.747 "data_size": 7936 00:19:10.747 } 00:19:10.747 ] 00:19:10.747 }' 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.747 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.312 15:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.312 "name": "raid_bdev1", 00:19:11.312 "uuid": "6ba6f49b-4060-11ef-b2a4-e9dca065e82e", 00:19:11.312 "strip_size_kb": 0, 00:19:11.312 "state": "online", 00:19:11.312 "raid_level": "raid1", 00:19:11.312 "superblock": true, 00:19:11.312 "num_base_bdevs": 2, 00:19:11.312 "num_base_bdevs_discovered": 1, 00:19:11.312 "num_base_bdevs_operational": 1, 00:19:11.312 "base_bdevs_list": [ 00:19:11.312 { 00:19:11.312 "name": null, 00:19:11.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.312 "is_configured": false, 00:19:11.312 "data_offset": 256, 00:19:11.312 "data_size": 7936 00:19:11.312 }, 00:19:11.312 { 00:19:11.312 "name": "BaseBdev2", 00:19:11.312 "uuid": "c6319b94-b1f3-3f50-a76a-dd45c1fd2a97", 00:19:11.312 "is_configured": true, 00:19:11.312 "data_offset": 256, 00:19:11.312 "data_size": 7936 00:19:11.312 } 00:19:11.312 ] 00:19:11.312 }' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67552 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67552 ']' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67552 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67552 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67552' 00:19:11.312 killing process with pid 67552 00:19:11.312 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67552 00:19:11.312 Received shutdown signal, test time was about 60.000000 seconds 00:19:11.312 00:19:11.312 Latency(us) 00:19:11.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.312 =================================================================================================================== 00:19:11.313 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.313 [2024-07-12 15:07:37.129446] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.313 [2024-07-12 15:07:37.129478] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.313 [2024-07-12 15:07:37.129490] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.313 [2024-07-12 15:07:37.129495] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x246607e35680 name raid_bdev1, state offline 00:19:11.313 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67552 00:19:11.572 [2024-07-12 15:07:37.147002] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.572 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:19:11.572 00:19:11.572 real 0m26.469s 00:19:11.572 user 0m41.025s 00:19:11.572 sys 0m2.640s 00:19:11.572 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.572 15:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.572 ************************************ 00:19:11.572 END TEST raid_rebuild_test_sb_md_interleaved 00:19:11.572 ************************************ 00:19:11.572 15:07:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:11.572 15:07:37 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:19:11.572 15:07:37 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:19:11.572 15:07:37 bdev_raid -- 
bdev/bdev_raid.sh@58 -- # '[' -n 67552 ']' 00:19:11.572 15:07:37 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67552 00:19:11.572 15:07:37 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:19:11.572 00:19:11.572 real 11m49.382s 00:19:11.572 user 20m42.582s 00:19:11.572 sys 1m44.853s 00:19:11.572 15:07:37 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.572 15:07:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.572 ************************************ 00:19:11.572 END TEST bdev_raid 00:19:11.572 ************************************ 00:19:11.830 15:07:37 -- common/autotest_common.sh@1142 -- # return 0 00:19:11.830 15:07:37 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:19:11.830 15:07:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:11.830 15:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.830 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.830 ************************************ 00:19:11.830 START TEST bdevperf_config 00:19:11.830 ************************************ 00:19:11.830 15:07:37 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:19:11.830 * Looking for test storage... 00:19:11.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:19:11.830 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:11.830 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:11.830 
15:07:37 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:11.830 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:11.830 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:11.830 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:11.830 15:07:37 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:15.127 15:07:40 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-12 15:07:37.579772] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:15.127 [2024-07-12 15:07:37.579992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:15.127 Using job config with 4 jobs 00:19:15.127 EAL: TSC is not safe to use in SMP mode 00:19:15.127 EAL: TSC is not invariant 00:19:15.127 [2024-07-12 15:07:38.123928] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.127 [2024-07-12 15:07:38.216708] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:15.127 [2024-07-12 15:07:38.219292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.127 cpumask for '\''job0'\'' is too big 00:19:15.127 cpumask for '\''job1'\'' is too big 00:19:15.127 cpumask for '\''job2'\'' is too big 00:19:15.127 cpumask for '\''job3'\'' is too big 00:19:15.127 Running I/O for 2 seconds... 
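The trace above shows test_config.sh building an INI-style job file through common.sh's create_job helper and then handing it to bdevperf with -j. A minimal sketch of what that helper appears to do, reconstructed only from the traced lines; the variable name $testconf, the placeholder defaults, and the option keys (rw=, filename=) written to the file are assumptions, not taken from the log.

    # Hypothetical reconstruction of create_job, inferred from the xtrace output above.
    testconf=${testconf:-/tmp/test.conf}          # assumed variable name
    create_job() {
        local job_section=$1                      # traced: common.sh@8
        local rw=$2                               # traced: common.sh@9
        local filename=$3                         # traced: common.sh@10
        if [[ $job_section == "global" ]]; then   # traced: common.sh@12
            # the extra cat at common.sh@13 presumably appends shared defaults
            printf '# shared defaults (contents not visible in this log)\n' >> "$testconf"
        fi
        job="[$job_section]"                      # traced: common.sh@18
        echo "$job"                               # traced: common.sh@19
        {                                         # traced: common.sh@20
            printf '%s\n' "$job"
            if [[ -n $rw ]]; then printf 'rw=%s\n' "$rw"; fi
            if [[ -n $filename ]]; then printf 'filename=%s\n' "$filename"; fi
        } >> "$testconf"
    }
    # Traced usage: create_job global read Malloc0; create_job job0; ... ; then
    # bdevperf -t 2 --json conf.json -j "$testconf"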
00:19:15.127 00:19:15.127 Latency(us) 00:19:15.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.45 309.11 0.00 0.00 808.51 266.24 1906.50 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316543.60 309.12 0.00 0.00 808.22 268.10 1638.40 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.89 309.11 0.00 0.00 808.05 266.24 1422.43 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316596.05 309.18 0.00 0.00 807.65 102.40 1489.46 00:19:15.127 =================================================================================================================== 00:19:15.127 Total : 1266190.99 1236.51 0.00 0.00 808.11 102.40 1906.50' 00:19:15.127 15:07:40 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-12 15:07:37.579772] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:15.127 [2024-07-12 15:07:37.579992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:15.127 Using job config with 4 jobs 00:19:15.127 EAL: TSC is not safe to use in SMP mode 00:19:15.127 EAL: TSC is not invariant 00:19:15.127 [2024-07-12 15:07:38.123928] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.127 [2024-07-12 15:07:38.216708] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:15.127 [2024-07-12 15:07:38.219292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.127 cpumask for '\''job0'\'' is too big 00:19:15.127 cpumask for '\''job1'\'' is too big 00:19:15.127 cpumask for '\''job2'\'' is too big 00:19:15.127 cpumask for '\''job3'\'' is too big 00:19:15.127 Running I/O for 2 seconds... 00:19:15.127 00:19:15.127 Latency(us) 00:19:15.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.45 309.11 0.00 0.00 808.51 266.24 1906.50 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316543.60 309.12 0.00 0.00 808.22 268.10 1638.40 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.89 309.11 0.00 0.00 808.05 266.24 1422.43 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316596.05 309.18 0.00 0.00 807.65 102.40 1489.46 00:19:15.127 =================================================================================================================== 00:19:15.127 Total : 1266190.99 1236.51 0.00 0.00 808.11 102.40 1906.50' 00:19:15.127 15:07:40 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 15:07:37.579772] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:19:15.127 [2024-07-12 15:07:37.579992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:15.127 Using job config with 4 jobs 00:19:15.127 EAL: TSC is not safe to use in SMP mode 00:19:15.127 EAL: TSC is not invariant 00:19:15.127 [2024-07-12 15:07:38.123928] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.127 [2024-07-12 15:07:38.216708] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:15.127 [2024-07-12 15:07:38.219292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.127 cpumask for '\''job0'\'' is too big 00:19:15.127 cpumask for '\''job1'\'' is too big 00:19:15.127 cpumask for '\''job2'\'' is too big 00:19:15.127 cpumask for '\''job3'\'' is too big 00:19:15.127 Running I/O for 2 seconds... 00:19:15.127 00:19:15.127 Latency(us) 00:19:15.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.45 309.11 0.00 0.00 808.51 266.24 1906.50 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316543.60 309.12 0.00 0.00 808.22 268.10 1638.40 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316525.89 309.11 0.00 0.00 808.05 266.24 1422.43 00:19:15.127 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:15.127 Malloc0 : 2.00 316596.05 309.18 0.00 0.00 807.65 102.40 1489.46 00:19:15.127 =================================================================================================================== 00:19:15.128 Total : 1266190.99 1236.51 0.00 0.00 808.11 102.40 1906.50' 00:19:15.128 15:07:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:15.128 15:07:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:15.128 15:07:40 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:19:15.128 15:07:40 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:15.128 [2024-07-12 15:07:40.463235] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:15.128 [2024-07-12 15:07:40.463465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:15.385 EAL: TSC is not safe to use in SMP mode 00:19:15.385 EAL: TSC is not invariant 00:19:15.385 [2024-07-12 15:07:40.974410] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.385 [2024-07-12 15:07:41.058499] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:15.385 [2024-07-12 15:07:41.060687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.385 cpumask for 'job0' is too big 00:19:15.385 cpumask for 'job1' is too big 00:19:15.385 cpumask for 'job2' is too big 00:19:15.385 cpumask for 'job3' is too big 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:19:17.931 Running I/O for 2 seconds... 
00:19:17.931 00:19:17.931 Latency(us) 00:19:17.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.931 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:17.931 Malloc0 : 2.00 319191.47 311.71 0.00 0.00 801.77 208.52 1556.48 00:19:17.931 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:17.931 Malloc0 : 2.00 319208.53 311.73 0.00 0.00 801.55 202.01 1429.88 00:19:17.931 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:17.931 Malloc0 : 2.00 319189.88 311.71 0.00 0.00 801.42 202.94 1496.90 00:19:17.931 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:19:17.931 Malloc0 : 2.00 319172.02 311.69 0.00 0.00 801.28 200.15 1563.93 00:19:17.931 =================================================================================================================== 00:19:17.931 Total : 1276761.90 1246.84 0.00 0.00 801.51 200.15 1563.93' 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:17.931 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:17.931 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:17.931 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:17.931 15:07:43 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
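Before checking the result, test_config.sh captures bdevperf's combined output in $bdevperf_output, and common.sh@32 extracts the advertised job count with the two greps traced above, which the test then compares against the number of sections written to test.conf. A standalone illustration of that extraction; the $BDEVPERF path below is a placeholder, everything else mirrors the traced pipeline.

    get_num_jobs() {   # body taken from the grep pipeline traced at common.sh@32
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    bdevperf_output=$("$BDEVPERF" -t 2 --json conf.json -j test.conf 2>&1)  # $BDEVPERF is a placeholder path
    num_jobs=$(get_num_jobs "$bdevperf_output")
    [[ $num_jobs == 4 ]]   # the first traced run expects 4 jobs; later runs in this log expect 3 or 4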
00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-12 15:07:43.307695] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:20.460 [2024-07-12 15:07:43.307887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:20.460 Using job config with 3 jobs 00:19:20.460 EAL: TSC is not safe to use in SMP mode 00:19:20.460 EAL: TSC is not invariant 00:19:20.460 [2024-07-12 15:07:43.813757] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.460 [2024-07-12 15:07:43.895598] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:20.460 [2024-07-12 15:07:43.897793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.460 cpumask for '\''job0'\'' is too big 00:19:20.460 cpumask for '\''job1'\'' is too big 00:19:20.460 cpumask for '\''job2'\'' is too big 00:19:20.460 Running I/O for 2 seconds... 00:19:20.460 00:19:20.460 Latency(us) 00:19:20.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404833.31 395.35 0.00 0.00 632.11 238.31 1124.54 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404856.44 395.37 0.00 0.00 631.93 206.66 901.12 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404841.27 395.35 0.00 0.00 631.82 137.77 741.00 00:19:20.460 =================================================================================================================== 00:19:20.460 Total : 1214531.01 1186.07 0.00 0.00 631.95 137.77 1124.54' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-12 15:07:43.307695] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:20.460 [2024-07-12 15:07:43.307887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:20.460 Using job config with 3 jobs 00:19:20.460 EAL: TSC is not safe to use in SMP mode 00:19:20.460 EAL: TSC is not invariant 00:19:20.460 [2024-07-12 15:07:43.813757] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.460 [2024-07-12 15:07:43.895598] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:20.460 [2024-07-12 15:07:43.897793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.460 cpumask for '\''job0'\'' is too big 00:19:20.460 cpumask for '\''job1'\'' is too big 00:19:20.460 cpumask for '\''job2'\'' is too big 00:19:20.460 Running I/O for 2 seconds... 
00:19:20.460 00:19:20.460 Latency(us) 00:19:20.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404833.31 395.35 0.00 0.00 632.11 238.31 1124.54 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404856.44 395.37 0.00 0.00 631.93 206.66 901.12 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404841.27 395.35 0.00 0.00 631.82 137.77 741.00 00:19:20.460 =================================================================================================================== 00:19:20.460 Total : 1214531.01 1186.07 0.00 0.00 631.95 137.77 1124.54' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 15:07:43.307695] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:20.460 [2024-07-12 15:07:43.307887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:20.460 Using job config with 3 jobs 00:19:20.460 EAL: TSC is not safe to use in SMP mode 00:19:20.460 EAL: TSC is not invariant 00:19:20.460 [2024-07-12 15:07:43.813757] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.460 [2024-07-12 15:07:43.895598] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:20.460 [2024-07-12 15:07:43.897793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.460 cpumask for '\''job0'\'' is too big 00:19:20.460 cpumask for '\''job1'\'' is too big 00:19:20.460 cpumask for '\''job2'\'' is too big 00:19:20.460 Running I/O for 2 seconds... 
00:19:20.460 00:19:20.460 Latency(us) 00:19:20.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404833.31 395.35 0.00 0.00 632.11 238.31 1124.54 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404856.44 395.37 0.00 0.00 631.93 206.66 901.12 00:19:20.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:19:20.460 Malloc0 : 2.00 404841.27 395.35 0.00 0.00 631.82 137.77 741.00 00:19:20.460 =================================================================================================================== 00:19:20.460 Total : 1214531.01 1186.07 0.00 0.00 631.95 137.77 1124.54' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:20.460 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:19:20.460 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:19:20.460 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:19:20.460 
15:07:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:19:20.460 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:19:20.460 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:19:20.460 15:07:46 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:23.814 15:07:49 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-12 15:07:46.158607] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:23.814 [2024-07-12 15:07:46.158876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:23.814 Using job config with 4 jobs 00:19:23.814 EAL: TSC is not safe to use in SMP mode 00:19:23.814 EAL: TSC is not invariant 00:19:23.814 [2024-07-12 15:07:46.690414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.814 [2024-07-12 15:07:46.771395] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:23.814 [2024-07-12 15:07:46.773551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.814 cpumask for '\''job0'\'' is too big 00:19:23.814 cpumask for '\''job1'\'' is too big 00:19:23.814 cpumask for '\''job2'\'' is too big 00:19:23.814 cpumask for '\''job3'\'' is too big 00:19:23.814 Running I/O for 2 seconds... 
00:19:23.814 00:19:23.814 Latency(us) 00:19:23.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.814 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.814 Malloc0 : 2.00 150853.85 147.32 0.00 0.00 1696.66 472.90 3127.86 00:19:23.814 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.814 Malloc1 : 2.00 150846.31 147.31 0.00 0.00 1696.46 441.25 3127.86 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150837.05 147.30 0.00 0.00 1696.11 450.56 2666.13 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150827.79 147.29 0.00 0.00 1695.97 424.49 2666.13 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150881.02 147.34 0.00 0.00 1694.86 456.15 2174.61 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150871.46 147.34 0.00 0.00 1694.72 431.94 2174.61 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150863.41 147.33 0.00 0.00 1694.31 444.97 2144.82 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150854.22 147.32 0.00 0.00 1694.17 420.77 2129.92 00:19:23.815 =================================================================================================================== 00:19:23.815 Total : 1206835.11 1178.55 0.00 0.00 1695.41 420.77 3127.86' 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-12 15:07:46.158607] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:23.815 [2024-07-12 15:07:46.158876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:23.815 Using job config with 4 jobs 00:19:23.815 EAL: TSC is not safe to use in SMP mode 00:19:23.815 EAL: TSC is not invariant 00:19:23.815 [2024-07-12 15:07:46.690414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.815 [2024-07-12 15:07:46.771395] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:23.815 [2024-07-12 15:07:46.773551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.815 cpumask for '\''job0'\'' is too big 00:19:23.815 cpumask for '\''job1'\'' is too big 00:19:23.815 cpumask for '\''job2'\'' is too big 00:19:23.815 cpumask for '\''job3'\'' is too big 00:19:23.815 Running I/O for 2 seconds... 
00:19:23.815 00:19:23.815 Latency(us) 00:19:23.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150853.85 147.32 0.00 0.00 1696.66 472.90 3127.86 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150846.31 147.31 0.00 0.00 1696.46 441.25 3127.86 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150837.05 147.30 0.00 0.00 1696.11 450.56 2666.13 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150827.79 147.29 0.00 0.00 1695.97 424.49 2666.13 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150881.02 147.34 0.00 0.00 1694.86 456.15 2174.61 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150871.46 147.34 0.00 0.00 1694.72 431.94 2174.61 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150863.41 147.33 0.00 0.00 1694.31 444.97 2144.82 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150854.22 147.32 0.00 0.00 1694.17 420.77 2129.92 00:19:23.815 =================================================================================================================== 00:19:23.815 Total : 1206835.11 1178.55 0.00 0.00 1695.41 420.77 3127.86' 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 15:07:46.158607] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:23.815 [2024-07-12 15:07:46.158876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:23.815 Using job config with 4 jobs 00:19:23.815 EAL: TSC is not safe to use in SMP mode 00:19:23.815 EAL: TSC is not invariant 00:19:23.815 [2024-07-12 15:07:46.690414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.815 [2024-07-12 15:07:46.771395] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:23.815 [2024-07-12 15:07:46.773551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.815 cpumask for '\''job0'\'' is too big 00:19:23.815 cpumask for '\''job1'\'' is too big 00:19:23.815 cpumask for '\''job2'\'' is too big 00:19:23.815 cpumask for '\''job3'\'' is too big 00:19:23.815 Running I/O for 2 seconds... 
00:19:23.815 00:19:23.815 Latency(us) 00:19:23.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150853.85 147.32 0.00 0.00 1696.66 472.90 3127.86 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150846.31 147.31 0.00 0.00 1696.46 441.25 3127.86 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150837.05 147.30 0.00 0.00 1696.11 450.56 2666.13 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150827.79 147.29 0.00 0.00 1695.97 424.49 2666.13 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150881.02 147.34 0.00 0.00 1694.86 456.15 2174.61 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150871.46 147.34 0.00 0.00 1694.72 431.94 2174.61 00:19:23.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc0 : 2.00 150863.41 147.33 0.00 0.00 1694.31 444.97 2144.82 00:19:23.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:19:23.815 Malloc1 : 2.00 150854.22 147.32 0.00 0.00 1694.17 420.77 2129.92 00:19:23.815 =================================================================================================================== 00:19:23.815 Total : 1206835.11 1178.55 0.00 0.00 1695.41 420.77 3127.86' 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:19:23.815 15:07:49 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:23.815 00:19:23.815 real 0m11.603s 00:19:23.815 user 0m9.242s 00:19:23.815 sys 0m2.339s 00:19:23.815 15:07:49 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.815 ************************************ 00:19:23.815 END TEST bdevperf_config 00:19:23.815 ************************************ 00:19:23.815 15:07:49 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:19:23.815 15:07:49 -- common/autotest_common.sh@1142 -- # return 0 00:19:23.815 15:07:49 -- spdk/autotest.sh@192 -- # uname -s 00:19:23.815 15:07:49 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:19:23.815 15:07:49 -- spdk/autotest.sh@198 -- # uname -s 00:19:23.815 15:07:49 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:19:23.815 15:07:49 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:19:23.815 15:07:49 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:23.815 15:07:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:23.815 15:07:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.815 15:07:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.815 
************************************ 00:19:23.815 START TEST blockdev_nvme 00:19:23.815 ************************************ 00:19:23.815 15:07:49 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:23.815 * Looking for test storage... 00:19:23.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:23.815 15:07:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:19:23.815 15:07:49 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68292 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68292 00:19:23.816 15:07:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 68292 ']' 00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
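The blockdev_nvme block above is launched through the run_test wrapper from autotest_common.sh, which is also what produces the START TEST / END TEST banners and the real/user/sys timing lines seen throughout this log. A hedged sketch of that wrapper, inferred only from the banners and traced calls; the actual implementation (xtrace save/restore, the '[' N -le 1 ']' argument-count special case at common.sh@1099, exact banner width) is not visible here and may differ.

    run_test() {                         # hypothetical reconstruction
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                        # the real/user/sys lines suggest the test body is timed
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
    # Traced usage: run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme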
00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.816 15:07:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:23.816 [2024-07-12 15:07:49.221796] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:23.816 [2024-07-12 15:07:49.221932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:24.073 EAL: TSC is not safe to use in SMP mode 00:19:24.073 EAL: TSC is not invariant 00:19:24.073 [2024-07-12 15:07:49.728569] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.073 [2024-07-12 15:07:49.812816] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:24.073 [2024-07-12 15:07:49.814903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 [2024-07-12 15:07:50.366732] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.641 15:07:50 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:24.641 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.641 15:07:50 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "816c9315-4060-11ef-b2a4-e9dca065e82e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "816c9315-4060-11ef-b2a4-e9dca065e82e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:19:24.899 15:07:50 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 68292 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 68292 ']' 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 68292 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 68292 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:24.899 15:07:50 blockdev_nvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 68292' 00:19:24.899 killing process with pid 68292 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 68292 00:19:24.899 15:07:50 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 68292 00:19:25.156 15:07:50 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:25.156 15:07:50 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:25.156 15:07:50 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:25.156 15:07:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.156 15:07:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.156 ************************************ 00:19:25.156 START TEST bdev_hello_world 00:19:25.156 ************************************ 00:19:25.156 15:07:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:25.156 [2024-07-12 15:07:50.791415] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:25.156 [2024-07-12 15:07:50.791554] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:25.724 EAL: TSC is not safe to use in SMP mode 00:19:25.724 EAL: TSC is not invariant 00:19:25.724 [2024-07-12 15:07:51.303138] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.724 [2024-07-12 15:07:51.383784] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:25.724 [2024-07-12 15:07:51.385900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.724 [2024-07-12 15:07:51.443770] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:25.724 [2024-07-12 15:07:51.516834] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:25.724 [2024-07-12 15:07:51.516865] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:25.724 [2024-07-12 15:07:51.516877] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:25.724 [2024-07-12 15:07:51.517539] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:25.724 [2024-07-12 15:07:51.517867] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:25.724 [2024-07-12 15:07:51.517899] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:25.724 [2024-07-12 15:07:51.518061] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
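The spdk_tgt started for this block (pid 68292) is torn down through killprocess, traced just above; the same helper was used earlier for the bdevperf pid 67552. A sketch assembled from those traced autotest_common.sh lines; the Linux branch shown here is an assumption, since only the FreeBSD path (ps -c -o command | tail -1) appears in this log.

    killprocess() {                                           # reconstruction from the traced lines
        local pid=$1
        [[ -n $pid ]] || return 1                             # traced: '[' -z <pid> ']'
        kill -0 "$pid" 2>/dev/null || return 0                # traced: kill -0 <pid>; failure handling assumed
        local process_name
        if [[ $(uname) == Linux ]]; then                      # traced: uname / '[' FreeBSD = Linux ']'
            process_name=$(cat /proc/$pid/comm 2>/dev/null)   # assumed Linux branch, not shown in this log
        else
            process_name=$(ps -c -o command "$pid" | tail -1) # FreeBSD branch exactly as traced
        fi
        if [[ $process_name != sudo ]]; then                  # traced guard: never kill a sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null                           # traced: wait <pid> after the kill
        fi
    }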
00:19:25.724 00:19:25.724 [2024-07-12 15:07:51.518090] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:25.982 00:19:25.982 real 0m0.903s 00:19:25.982 user 0m0.339s 00:19:25.982 sys 0m0.563s 00:19:25.982 15:07:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.982 15:07:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:25.982 ************************************ 00:19:25.982 END TEST bdev_hello_world 00:19:25.982 ************************************ 00:19:25.982 15:07:51 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:25.982 15:07:51 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:19:25.982 15:07:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:25.982 15:07:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.982 15:07:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.982 ************************************ 00:19:25.982 START TEST bdev_bounds 00:19:25.982 ************************************ 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68363 00:19:25.982 Process bdevio pid: 68363 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68363' 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68363 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68363 ']' 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.982 15:07:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:25.982 [2024-07-12 15:07:51.742843] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:25.982 [2024-07-12 15:07:51.743024] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:26.569 EAL: TSC is not safe to use in SMP mode 00:19:26.569 EAL: TSC is not invariant 00:19:26.569 [2024-07-12 15:07:52.276003] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.569 [2024-07-12 15:07:52.368268] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:26.569 [2024-07-12 15:07:52.368330] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:19:26.569 [2024-07-12 15:07:52.368340] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:26.569 [2024-07-12 15:07:52.371917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.569 [2024-07-12 15:07:52.371806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.569 [2024-07-12 15:07:52.371912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.827 [2024-07-12 15:07:52.430177] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:27.085 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.085 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:19:27.085 15:07:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:27.085 I/O targets: 00:19:27.085 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:27.085 00:19:27.085 00:19:27.085 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.085 http://cunit.sourceforge.net/ 00:19:27.085 00:19:27.085 00:19:27.085 Suite: bdevio tests on: Nvme0n1 00:19:27.085 Test: blockdev write read block ...passed 00:19:27.085 Test: blockdev write zeroes read block ...passed 00:19:27.085 Test: blockdev write zeroes read no split ...passed 00:19:27.085 Test: blockdev write zeroes read split ...passed 00:19:27.085 Test: blockdev write zeroes read split partial ...passed 00:19:27.085 Test: blockdev reset ...[2024-07-12 15:07:52.919309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:27.342 [2024-07-12 15:07:52.920688] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:27.342 passed 00:19:27.342 Test: blockdev write read 8 blocks ...passed 00:19:27.342 Test: blockdev write read size > 128k ...passed 00:19:27.342 Test: blockdev write read invalid size ...passed 00:19:27.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:27.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:27.342 Test: blockdev write read max offset ...passed 00:19:27.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:27.342 Test: blockdev writev readv 8 blocks ...passed 00:19:27.342 Test: blockdev writev readv 30 x 1block ...passed 00:19:27.342 Test: blockdev writev readv block ...passed 00:19:27.342 Test: blockdev writev readv size > 128k ...passed 00:19:27.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:27.342 Test: blockdev comparev and writev ...[2024-07-12 15:07:52.924399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292718000 len:0x1000 00:19:27.342 [2024-07-12 15:07:52.924441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:27.342 passed 00:19:27.342 Test: blockdev nvme passthru rw ...passed 00:19:27.342 Test: blockdev nvme passthru vendor specific ...passed 00:19:27.342 Test: blockdev nvme admin passthru ...[2024-07-12 15:07:52.924932] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:27.342 [2024-07-12 15:07:52.924953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:27.342 passed 00:19:27.342 Test: blockdev copy ...passed 00:19:27.342 00:19:27.342 Run Summary: Type Total Ran Passed Failed Inactive 00:19:27.342 suites 1 1 n/a 0 0 00:19:27.342 tests 23 23 23 0 0 00:19:27.342 asserts 152 152 152 0 n/a 00:19:27.342 00:19:27.342 Elapsed time = 0.039 seconds 00:19:27.342 0 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68363 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68363 ']' 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68363 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68363 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:19:27.342 killing process with pid 68363 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68363' 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68363 00:19:27.342 15:07:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68363 00:19:27.342 15:07:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:19:27.342 00:19:27.343 real 0m1.399s 00:19:27.343 user 0m2.660s 00:19:27.343 sys 0m0.670s 00:19:27.343 15:07:53 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.343 15:07:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:27.343 ************************************ 00:19:27.343 END TEST bdev_bounds 00:19:27.343 ************************************ 00:19:27.343 15:07:53 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:27.343 15:07:53 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:27.343 15:07:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:27.343 15:07:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.343 15:07:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.600 ************************************ 00:19:27.600 START TEST bdev_nbd 00:19:27.600 ************************************ 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:19:27.600 00:19:27.600 real 0m0.004s 00:19:27.600 user 0m0.003s 00:19:27.600 sys 0m0.002s 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.600 15:07:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:27.600 ************************************ 00:19:27.600 END TEST bdev_nbd 00:19:27.600 ************************************ 00:19:27.600 15:07:53 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:27.600 15:07:53 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:19:27.600 15:07:53 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:19:27.600 15:07:53 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:19:27.600 skipping fio tests on NVMe due to multi-ns failures. 00:19:27.600 15:07:53 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:27.600 15:07:53 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:27.600 15:07:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:27.600 15:07:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.600 15:07:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.600 ************************************ 00:19:27.600 START TEST bdev_verify 00:19:27.600 ************************************ 00:19:27.600 15:07:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:27.600 [2024-07-12 15:07:53.232226] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:19:27.600 [2024-07-12 15:07:53.232446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:28.166 EAL: TSC is not safe to use in SMP mode 00:19:28.166 EAL: TSC is not invariant 00:19:28.166 [2024-07-12 15:07:53.777612] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.166 [2024-07-12 15:07:53.870860] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:28.166 [2024-07-12 15:07:53.870919] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:28.166 [2024-07-12 15:07:53.874092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.166 [2024-07-12 15:07:53.874080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.166 [2024-07-12 15:07:53.933995] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:28.425 Running I/O for 5 seconds... 00:19:33.683 00:19:33.683 Latency(us) 00:19:33.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.683 Verification LBA range: start 0x0 length 0xa0000 00:19:33.683 Nvme0n1 : 5.00 20895.85 81.62 0.00 0.00 6116.93 258.79 11617.75 00:19:33.683 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.683 Verification LBA range: start 0xa0000 length 0xa0000 00:19:33.683 Nvme0n1 : 5.00 20638.80 80.62 0.00 0.00 6192.66 696.32 9472.94 00:19:33.683 =================================================================================================================== 00:19:33.683 Total : 41534.65 162.24 0.00 0.00 6154.56 258.79 11617.75 00:19:33.942 00:19:33.942 real 0m6.531s 00:19:33.942 user 0m11.604s 00:19:33.942 sys 0m0.622s 00:19:33.942 15:07:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.942 15:07:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:33.942 ************************************ 00:19:33.942 END TEST bdev_verify 00:19:33.942 ************************************ 00:19:34.201 15:07:59 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:34.201 15:07:59 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:34.201 15:07:59 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:34.201 15:07:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.201 15:07:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 ************************************ 00:19:34.201 START TEST bdev_verify_big_io 00:19:34.201 ************************************ 00:19:34.201 15:07:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:34.201 [2024-07-12 15:07:59.816080] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:19:34.201 [2024-07-12 15:07:59.816375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:34.769 EAL: TSC is not safe to use in SMP mode 00:19:34.769 EAL: TSC is not invariant 00:19:34.769 [2024-07-12 15:08:00.547894] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:35.027 [2024-07-12 15:08:00.647309] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:35.027 [2024-07-12 15:08:00.647393] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:35.027 [2024-07-12 15:08:00.650687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.027 [2024-07-12 15:08:00.650674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.027 [2024-07-12 15:08:00.708811] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:35.027 Running I/O for 5 seconds... 00:19:40.298 00:19:40.298 Latency(us) 00:19:40.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.298 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:40.298 Verification LBA range: start 0x0 length 0xa000 00:19:40.298 Nvme0n1 : 5.01 8237.17 514.82 0.00 0.00 15453.97 168.49 29789.11 00:19:40.298 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:40.298 Verification LBA range: start 0xa000 length 0xa000 00:19:40.298 Nvme0n1 : 5.01 8314.37 519.65 0.00 0.00 15302.42 165.70 28835.86 00:19:40.298 =================================================================================================================== 00:19:40.298 Total : 16551.54 1034.47 0.00 0.00 15377.84 165.70 29789.11 00:19:43.581 00:19:43.581 real 0m9.143s 00:19:43.581 user 0m16.499s 00:19:43.581 sys 0m0.777s 00:19:43.581 ************************************ 00:19:43.581 END TEST bdev_verify_big_io 00:19:43.581 ************************************ 00:19:43.581 15:08:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.581 15:08:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.581 15:08:08 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:43.581 15:08:08 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.581 15:08:08 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:43.581 15:08:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.581 15:08:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:43.581 ************************************ 00:19:43.581 START TEST bdev_write_zeroes 00:19:43.581 ************************************ 00:19:43.581 15:08:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.581 [2024-07-12 15:08:09.001570] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:19:43.581 [2024-07-12 15:08:09.001753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:43.840 EAL: TSC is not safe to use in SMP mode 00:19:43.840 EAL: TSC is not invariant 00:19:43.840 [2024-07-12 15:08:09.549151] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.097 [2024-07-12 15:08:09.686908] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:44.097 [2024-07-12 15:08:09.689797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.097 [2024-07-12 15:08:09.752998] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:44.097 Running I/O for 1 seconds... 00:19:45.106 00:19:45.106 Latency(us) 00:19:45.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.106 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:45.106 Nvme0n1 : 1.00 66906.23 261.35 0.00 0.00 1911.05 437.53 10843.24 00:19:45.106 =================================================================================================================== 00:19:45.106 Total : 66906.23 261.35 0.00 0.00 1911.05 437.53 10843.24 00:19:45.364 00:19:45.364 real 0m2.014s 00:19:45.364 user 0m1.420s 00:19:45.364 sys 0m0.592s 00:19:45.364 15:08:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.364 15:08:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:45.364 ************************************ 00:19:45.364 END TEST bdev_write_zeroes 00:19:45.364 ************************************ 00:19:45.364 15:08:11 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:45.364 15:08:11 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:45.364 15:08:11 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:45.364 15:08:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.364 15:08:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:45.365 ************************************ 00:19:45.365 START TEST bdev_json_nonenclosed 00:19:45.365 ************************************ 00:19:45.365 15:08:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:45.365 [2024-07-12 15:08:11.063117] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:45.365 [2024-07-12 15:08:11.063307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:45.932 EAL: TSC is not safe to use in SMP mode 00:19:45.932 EAL: TSC is not invariant 00:19:45.932 [2024-07-12 15:08:11.580414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.932 [2024-07-12 15:08:11.654717] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:19:45.932 [2024-07-12 15:08:11.657014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.932 [2024-07-12 15:08:11.657085] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:45.932 [2024-07-12 15:08:11.657096] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:45.932 [2024-07-12 15:08:11.657104] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:45.932 00:19:45.932 real 0m0.709s 00:19:45.932 user 0m0.148s 00:19:45.932 sys 0m0.560s 00:19:45.932 15:08:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:19:45.932 15:08:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.932 ************************************ 00:19:45.932 END TEST bdev_json_nonenclosed 00:19:45.932 ************************************ 00:19:45.932 15:08:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:46.190 15:08:11 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:46.190 15:08:11 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:19:46.190 15:08:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.190 15:08:11 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:46.190 15:08:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.190 15:08:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.190 ************************************ 00:19:46.190 START TEST bdev_json_nonarray 00:19:46.190 ************************************ 00:19:46.190 15:08:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.190 [2024-07-12 15:08:11.815729] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:19:46.190 [2024-07-12 15:08:11.815934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:46.757 EAL: TSC is not safe to use in SMP mode 00:19:46.757 EAL: TSC is not invariant 00:19:46.757 [2024-07-12 15:08:12.334513] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.757 [2024-07-12 15:08:12.416505] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:46.757 [2024-07-12 15:08:12.418822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.757 [2024-07-12 15:08:12.418915] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:46.757 [2024-07-12 15:08:12.418926] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:46.757 [2024-07-12 15:08:12.418934] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:46.757 00:19:46.757 real 0m0.728s 00:19:46.757 user 0m0.174s 00:19:46.757 sys 0m0.552s 00:19:46.757 15:08:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:19:46.757 15:08:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.757 15:08:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:46.757 ************************************ 00:19:46.757 END TEST bdev_json_nonarray 00:19:46.757 ************************************ 00:19:46.757 15:08:12 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:19:46.757 15:08:12 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:19:46.757 00:19:46.757 real 0m23.513s 00:19:46.757 user 0m34.632s 00:19:46.757 sys 0m5.290s 00:19:46.757 15:08:12 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.757 15:08:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.757 ************************************ 00:19:46.757 END TEST blockdev_nvme 00:19:46.757 ************************************ 00:19:47.014 15:08:12 -- common/autotest_common.sh@1142 -- # return 0 00:19:47.014 15:08:12 -- spdk/autotest.sh@213 -- # uname -s 00:19:47.014 15:08:12 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:19:47.014 15:08:12 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:47.014 15:08:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:47.014 15:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.014 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:19:47.014 ************************************ 00:19:47.014 START TEST nvme 00:19:47.014 ************************************ 00:19:47.014 15:08:12 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:47.014 * Looking for test storage... 
00:19:47.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:47.014 15:08:12 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.272 hw.nic_uio.bdfs="0:16:0" 00:19:47.272 15:08:12 nvme -- nvme/nvme.sh@79 -- # uname 00:19:47.272 15:08:12 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:19:47.272 15:08:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:47.272 15:08:12 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:19:47.272 15:08:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.272 15:08:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:47.272 ************************************ 00:19:47.272 START TEST nvme_reset 00:19:47.272 ************************************ 00:19:47.272 15:08:12 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:47.837 EAL: TSC is not safe to use in SMP mode 00:19:47.837 EAL: TSC is not invariant 00:19:47.837 [2024-07-12 15:08:13.523186] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:47.837 Initializing NVMe Controllers 00:19:47.837 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:47.837 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:47.837 00:19:47.837 real 0m0.589s 00:19:47.837 user 0m0.002s 00:19:47.837 sys 0m0.586s 00:19:47.837 15:08:13 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.837 ************************************ 00:19:47.837 15:08:13 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:47.837 END TEST nvme_reset 00:19:47.837 ************************************ 00:19:47.837 15:08:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:47.837 15:08:13 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:47.837 15:08:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:47.837 15:08:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.837 15:08:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:47.837 ************************************ 00:19:47.837 START TEST nvme_identify 00:19:47.837 ************************************ 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:19:47.837 15:08:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:47.837 15:08:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:47.837 15:08:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:47.837 15:08:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:47.837 15:08:13 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:19:47.837 15:08:13 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:48.403 EAL: TSC is not safe to use in SMP mode 00:19:48.403 EAL: TSC is not invariant 00:19:48.403 [2024-07-12 15:08:14.217073] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:48.403 ===================================================== 00:19:48.403 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:48.403 ===================================================== 00:19:48.403 Controller Capabilities/Features 00:19:48.403 ================================ 00:19:48.403 Vendor ID: 1b36 00:19:48.403 Subsystem Vendor ID: 1af4 00:19:48.403 Serial Number: 12340 00:19:48.403 Model Number: QEMU NVMe Ctrl 00:19:48.403 Firmware Version: 8.0.0 00:19:48.403 Recommended Arb Burst: 6 00:19:48.403 IEEE OUI Identifier: 00 54 52 00:19:48.403 Multi-path I/O 00:19:48.403 May have multiple subsystem ports: No 00:19:48.403 May have multiple controllers: No 00:19:48.403 Associated with SR-IOV VF: No 00:19:48.403 Max Data Transfer Size: 524288 00:19:48.403 Max Number of Namespaces: 256 00:19:48.403 Max Number of I/O Queues: 64 00:19:48.403 NVMe Specification Version (VS): 1.4 00:19:48.403 NVMe Specification Version (Identify): 1.4 00:19:48.403 Maximum Queue Entries: 2048 00:19:48.403 Contiguous Queues Required: Yes 00:19:48.403 Arbitration Mechanisms Supported 00:19:48.403 Weighted Round Robin: Not Supported 00:19:48.403 Vendor Specific: Not Supported 00:19:48.403 Reset Timeout: 7500 ms 00:19:48.403 Doorbell Stride: 4 bytes 00:19:48.403 NVM Subsystem Reset: Not Supported 00:19:48.403 Command Sets Supported 00:19:48.403 NVM Command Set: Supported 00:19:48.403 Boot Partition: Not Supported 00:19:48.403 Memory Page Size Minimum: 4096 bytes 00:19:48.403 Memory Page Size Maximum: 65536 bytes 00:19:48.403 Persistent Memory Region: Not Supported 00:19:48.403 Optional Asynchronous Events Supported 00:19:48.404 Namespace Attribute Notices: Supported 00:19:48.404 Firmware Activation Notices: Not Supported 00:19:48.404 ANA Change Notices: Not Supported 00:19:48.404 PLE Aggregate Log Change Notices: Not Supported 00:19:48.404 LBA Status Info Alert Notices: Not Supported 00:19:48.404 EGE Aggregate Log Change Notices: Not Supported 00:19:48.404 Normal NVM Subsystem Shutdown event: Not Supported 00:19:48.404 Zone Descriptor Change Notices: Not Supported 00:19:48.404 Discovery Log Change Notices: Not Supported 00:19:48.404 Controller Attributes 00:19:48.404 128-bit Host Identifier: Not Supported 00:19:48.404 Non-Operational Permissive Mode: Not Supported 00:19:48.404 NVM Sets: Not Supported 00:19:48.404 Read Recovery Levels: Not Supported 00:19:48.404 Endurance Groups: Not Supported 00:19:48.404 Predictable Latency Mode: Not Supported 00:19:48.404 Traffic Based Keep ALive: Not Supported 00:19:48.404 Namespace Granularity: Not Supported 00:19:48.404 SQ Associations: Not Supported 00:19:48.404 UUID List: Not Supported 00:19:48.404 Multi-Domain Subsystem: Not Supported 00:19:48.404 Fixed Capacity Management: Not Supported 00:19:48.404 Variable Capacity Management: Not Supported 00:19:48.404 Delete Endurance Group: Not Supported 00:19:48.404 Delete NVM Set: Not Supported 00:19:48.404 Extended LBA Formats Supported: Supported 00:19:48.404 Flexible Data Placement Supported: Not Supported 00:19:48.404 00:19:48.404 Controller Memory Buffer Support 00:19:48.404 ================================ 00:19:48.404 Supported: No 00:19:48.404 00:19:48.404 
Persistent Memory Region Support 00:19:48.404 ================================ 00:19:48.404 Supported: No 00:19:48.404 00:19:48.404 Admin Command Set Attributes 00:19:48.404 ============================ 00:19:48.404 Security Send/Receive: Not Supported 00:19:48.404 Format NVM: Supported 00:19:48.404 Firmware Activate/Download: Not Supported 00:19:48.404 Namespace Management: Supported 00:19:48.404 Device Self-Test: Not Supported 00:19:48.404 Directives: Supported 00:19:48.404 NVMe-MI: Not Supported 00:19:48.404 Virtualization Management: Not Supported 00:19:48.404 Doorbell Buffer Config: Supported 00:19:48.404 Get LBA Status Capability: Not Supported 00:19:48.404 Command & Feature Lockdown Capability: Not Supported 00:19:48.404 Abort Command Limit: 4 00:19:48.404 Async Event Request Limit: 4 00:19:48.404 Number of Firmware Slots: N/A 00:19:48.404 Firmware Slot 1 Read-Only: N/A 00:19:48.404 Firmware Activation Without Reset: N/A 00:19:48.404 Multiple Update Detection Support: N/A 00:19:48.404 Firmware Update Granularity: No Information Provided 00:19:48.404 Per-Namespace SMART Log: Yes 00:19:48.404 Asymmetric Namespace Access Log Page: Not Supported 00:19:48.404 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:48.404 Command Effects Log Page: Supported 00:19:48.404 Get Log Page Extended Data: Supported 00:19:48.404 Telemetry Log Pages: Not Supported 00:19:48.404 Persistent Event Log Pages: Not Supported 00:19:48.404 Supported Log Pages Log Page: May Support 00:19:48.404 Commands Supported & Effects Log Page: Not Supported 00:19:48.404 Feature Identifiers & Effects Log Page:May Support 00:19:48.404 NVMe-MI Commands & Effects Log Page: May Support 00:19:48.404 Data Area 4 for Telemetry Log: Not Supported 00:19:48.404 Error Log Page Entries Supported: 1 00:19:48.404 Keep Alive: Not Supported 00:19:48.404 00:19:48.404 NVM Command Set Attributes 00:19:48.404 ========================== 00:19:48.404 Submission Queue Entry Size 00:19:48.404 Max: 64 00:19:48.404 Min: 64 00:19:48.404 Completion Queue Entry Size 00:19:48.404 Max: 16 00:19:48.404 Min: 16 00:19:48.404 Number of Namespaces: 256 00:19:48.404 Compare Command: Supported 00:19:48.404 Write Uncorrectable Command: Not Supported 00:19:48.404 Dataset Management Command: Supported 00:19:48.404 Write Zeroes Command: Supported 00:19:48.404 Set Features Save Field: Supported 00:19:48.404 Reservations: Not Supported 00:19:48.404 Timestamp: Supported 00:19:48.404 Copy: Supported 00:19:48.404 Volatile Write Cache: Present 00:19:48.404 Atomic Write Unit (Normal): 1 00:19:48.404 Atomic Write Unit (PFail): 1 00:19:48.404 Atomic Compare & Write Unit: 1 00:19:48.404 Fused Compare & Write: Not Supported 00:19:48.404 Scatter-Gather List 00:19:48.404 SGL Command Set: Supported 00:19:48.404 SGL Keyed: Not Supported 00:19:48.404 SGL Bit Bucket Descriptor: Not Supported 00:19:48.404 SGL Metadata Pointer: Not Supported 00:19:48.404 Oversized SGL: Not Supported 00:19:48.404 SGL Metadata Address: Not Supported 00:19:48.404 SGL Offset: Not Supported 00:19:48.404 Transport SGL Data Block: Not Supported 00:19:48.404 Replay Protected Memory Block: Not Supported 00:19:48.404 00:19:48.404 Firmware Slot Information 00:19:48.404 ========================= 00:19:48.404 Active slot: 1 00:19:48.404 Slot 1 Firmware Revision: 1.0 00:19:48.404 00:19:48.404 00:19:48.404 Commands Supported and Effects 00:19:48.404 ============================== 00:19:48.404 Admin Commands 00:19:48.404 -------------- 00:19:48.404 Delete I/O Submission Queue (00h): Supported 00:19:48.404 Create I/O 
Submission Queue (01h): Supported 00:19:48.404 Get Log Page (02h): Supported 00:19:48.404 Delete I/O Completion Queue (04h): Supported 00:19:48.404 Create I/O Completion Queue (05h): Supported 00:19:48.404 Identify (06h): Supported 00:19:48.404 Abort (08h): Supported 00:19:48.404 Set Features (09h): Supported 00:19:48.404 Get Features (0Ah): Supported 00:19:48.404 Asynchronous Event Request (0Ch): Supported 00:19:48.404 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:48.404 Directive Send (19h): Supported 00:19:48.404 Directive Receive (1Ah): Supported 00:19:48.404 Virtualization Management (1Ch): Supported 00:19:48.404 Doorbell Buffer Config (7Ch): Supported 00:19:48.404 Format NVM (80h): Supported LBA-Change 00:19:48.404 I/O Commands 00:19:48.404 ------------ 00:19:48.404 Flush (00h): Supported LBA-Change 00:19:48.404 Write (01h): Supported LBA-Change 00:19:48.404 Read (02h): Supported 00:19:48.404 Compare (05h): Supported 00:19:48.404 Write Zeroes (08h): Supported LBA-Change 00:19:48.404 Dataset Management (09h): Supported LBA-Change 00:19:48.404 Unknown (0Ch): Supported 00:19:48.404 Unknown (12h): Supported 00:19:48.404 Copy (19h): Supported LBA-Change 00:19:48.404 Unknown (1Dh): Supported LBA-Change 00:19:48.404 00:19:48.404 Error Log 00:19:48.404 ========= 00:19:48.404 00:19:48.404 Arbitration 00:19:48.404 =========== 00:19:48.404 Arbitration Burst: no limit 00:19:48.404 00:19:48.404 Power Management 00:19:48.404 ================ 00:19:48.404 Number of Power States: 1 00:19:48.404 Current Power State: Power State #0 00:19:48.404 Power State #0: 00:19:48.404 Max Power: 25.00 W 00:19:48.404 Non-Operational State: Operational 00:19:48.404 Entry Latency: 16 microseconds 00:19:48.404 Exit Latency: 4 microseconds 00:19:48.404 Relative Read Throughput: 0 00:19:48.404 Relative Read Latency: 0 00:19:48.404 Relative Write Throughput: 0 00:19:48.404 Relative Write Latency: 0 00:19:48.662 Idle Power: Not Reported 00:19:48.662 Active Power: Not Reported 00:19:48.662 Non-Operational Permissive Mode: Not Supported 00:19:48.662 00:19:48.662 Health Information 00:19:48.662 ================== 00:19:48.662 Critical Warnings: 00:19:48.662 Available Spare Space: OK 00:19:48.662 Temperature: OK 00:19:48.662 Device Reliability: OK 00:19:48.662 Read Only: No 00:19:48.662 Volatile Memory Backup: OK 00:19:48.662 Current Temperature: 323 Kelvin (50 Celsius) 00:19:48.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:48.662 Available Spare: 0% 00:19:48.662 Available Spare Threshold: 0% 00:19:48.662 Life Percentage Used: 0% 00:19:48.662 Data Units Read: 12291 00:19:48.662 Data Units Written: 12276 00:19:48.662 Host Read Commands: 290929 00:19:48.662 Host Write Commands: 290788 00:19:48.662 Controller Busy Time: 0 minutes 00:19:48.662 Power Cycles: 0 00:19:48.662 Power On Hours: 0 hours 00:19:48.662 Unsafe Shutdowns: 0 00:19:48.662 Unrecoverable Media Errors: 0 00:19:48.662 Lifetime Error Log Entries: 0 00:19:48.662 Warning Temperature Time: 0 minutes 00:19:48.662 Critical Temperature Time: 0 minutes 00:19:48.662 00:19:48.662 Number of Queues 00:19:48.662 ================ 00:19:48.662 Number of I/O Submission Queues: 64 00:19:48.662 Number of I/O Completion Queues: 64 00:19:48.662 00:19:48.662 ZNS Specific Controller Data 00:19:48.662 ============================ 00:19:48.662 Zone Append Size Limit: 0 00:19:48.662 00:19:48.662 00:19:48.662 Active Namespaces 00:19:48.662 ================= 00:19:48.662 Namespace ID:1 00:19:48.662 Error Recovery Timeout: Unlimited 00:19:48.662 Command Set 
Identifier: NVM (00h) 00:19:48.662 Deallocate: Supported 00:19:48.662 Deallocated/Unwritten Error: Supported 00:19:48.662 Deallocated Read Value: All 0x00 00:19:48.662 Deallocate in Write Zeroes: Not Supported 00:19:48.662 Deallocated Guard Field: 0xFFFF 00:19:48.662 Flush: Supported 00:19:48.662 Reservation: Not Supported 00:19:48.662 Namespace Sharing Capabilities: Private 00:19:48.662 Size (in LBAs): 1310720 (5GiB) 00:19:48.662 Capacity (in LBAs): 1310720 (5GiB) 00:19:48.662 Utilization (in LBAs): 1310720 (5GiB) 00:19:48.662 Thin Provisioning: Not Supported 00:19:48.663 Per-NS Atomic Units: No 00:19:48.663 Maximum Single Source Range Length: 128 00:19:48.663 Maximum Copy Length: 128 00:19:48.663 Maximum Source Range Count: 128 00:19:48.663 NGUID/EUI64 Never Reused: No 00:19:48.663 Namespace Write Protected: No 00:19:48.663 Number of LBA Formats: 8 00:19:48.663 Current LBA Format: LBA Format #04 00:19:48.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:48.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:48.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:48.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:48.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:48.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:48.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:48.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:48.663 00:19:48.663 NVM Specific Namespace Data 00:19:48.663 =========================== 00:19:48.663 Logical Block Storage Tag Mask: 0 00:19:48.663 Protection Information Capabilities: 00:19:48.663 16b Guard Protection Information Storage Tag Support: No 00:19:48.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:48.663 Storage Tag Check Read Support: No 00:19:48.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:48.663 15:08:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:48.663 15:08:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:49.229 EAL: TSC is not safe to use in SMP mode 00:19:49.229 EAL: TSC is not invariant 00:19:49.229 [2024-07-12 15:08:14.798553] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:49.229 ===================================================== 00:19:49.229 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:49.229 ===================================================== 00:19:49.229 Controller Capabilities/Features 00:19:49.229 ================================ 00:19:49.229 Vendor ID: 1b36 00:19:49.229 Subsystem Vendor ID: 1af4 00:19:49.230 Serial Number: 12340 00:19:49.230 Model Number: QEMU NVMe Ctrl 
00:19:49.230 Firmware Version: 8.0.0 00:19:49.230 Recommended Arb Burst: 6 00:19:49.230 IEEE OUI Identifier: 00 54 52 00:19:49.230 Multi-path I/O 00:19:49.230 May have multiple subsystem ports: No 00:19:49.230 May have multiple controllers: No 00:19:49.230 Associated with SR-IOV VF: No 00:19:49.230 Max Data Transfer Size: 524288 00:19:49.230 Max Number of Namespaces: 256 00:19:49.230 Max Number of I/O Queues: 64 00:19:49.230 NVMe Specification Version (VS): 1.4 00:19:49.230 NVMe Specification Version (Identify): 1.4 00:19:49.230 Maximum Queue Entries: 2048 00:19:49.230 Contiguous Queues Required: Yes 00:19:49.230 Arbitration Mechanisms Supported 00:19:49.230 Weighted Round Robin: Not Supported 00:19:49.230 Vendor Specific: Not Supported 00:19:49.230 Reset Timeout: 7500 ms 00:19:49.230 Doorbell Stride: 4 bytes 00:19:49.230 NVM Subsystem Reset: Not Supported 00:19:49.230 Command Sets Supported 00:19:49.230 NVM Command Set: Supported 00:19:49.230 Boot Partition: Not Supported 00:19:49.230 Memory Page Size Minimum: 4096 bytes 00:19:49.230 Memory Page Size Maximum: 65536 bytes 00:19:49.230 Persistent Memory Region: Not Supported 00:19:49.230 Optional Asynchronous Events Supported 00:19:49.230 Namespace Attribute Notices: Supported 00:19:49.230 Firmware Activation Notices: Not Supported 00:19:49.230 ANA Change Notices: Not Supported 00:19:49.230 PLE Aggregate Log Change Notices: Not Supported 00:19:49.230 LBA Status Info Alert Notices: Not Supported 00:19:49.230 EGE Aggregate Log Change Notices: Not Supported 00:19:49.230 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.230 Zone Descriptor Change Notices: Not Supported 00:19:49.230 Discovery Log Change Notices: Not Supported 00:19:49.230 Controller Attributes 00:19:49.230 128-bit Host Identifier: Not Supported 00:19:49.230 Non-Operational Permissive Mode: Not Supported 00:19:49.230 NVM Sets: Not Supported 00:19:49.230 Read Recovery Levels: Not Supported 00:19:49.230 Endurance Groups: Not Supported 00:19:49.230 Predictable Latency Mode: Not Supported 00:19:49.230 Traffic Based Keep ALive: Not Supported 00:19:49.230 Namespace Granularity: Not Supported 00:19:49.230 SQ Associations: Not Supported 00:19:49.230 UUID List: Not Supported 00:19:49.230 Multi-Domain Subsystem: Not Supported 00:19:49.230 Fixed Capacity Management: Not Supported 00:19:49.230 Variable Capacity Management: Not Supported 00:19:49.230 Delete Endurance Group: Not Supported 00:19:49.230 Delete NVM Set: Not Supported 00:19:49.230 Extended LBA Formats Supported: Supported 00:19:49.230 Flexible Data Placement Supported: Not Supported 00:19:49.230 00:19:49.230 Controller Memory Buffer Support 00:19:49.230 ================================ 00:19:49.230 Supported: No 00:19:49.230 00:19:49.230 Persistent Memory Region Support 00:19:49.230 ================================ 00:19:49.230 Supported: No 00:19:49.230 00:19:49.230 Admin Command Set Attributes 00:19:49.230 ============================ 00:19:49.230 Security Send/Receive: Not Supported 00:19:49.230 Format NVM: Supported 00:19:49.230 Firmware Activate/Download: Not Supported 00:19:49.230 Namespace Management: Supported 00:19:49.230 Device Self-Test: Not Supported 00:19:49.230 Directives: Supported 00:19:49.230 NVMe-MI: Not Supported 00:19:49.230 Virtualization Management: Not Supported 00:19:49.230 Doorbell Buffer Config: Supported 00:19:49.230 Get LBA Status Capability: Not Supported 00:19:49.230 Command & Feature Lockdown Capability: Not Supported 00:19:49.230 Abort Command Limit: 4 00:19:49.230 Async Event Request 
Limit: 4 00:19:49.230 Number of Firmware Slots: N/A 00:19:49.230 Firmware Slot 1 Read-Only: N/A 00:19:49.230 Firmware Activation Without Reset: N/A 00:19:49.230 Multiple Update Detection Support: N/A 00:19:49.230 Firmware Update Granularity: No Information Provided 00:19:49.230 Per-Namespace SMART Log: Yes 00:19:49.230 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.230 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:49.230 Command Effects Log Page: Supported 00:19:49.230 Get Log Page Extended Data: Supported 00:19:49.230 Telemetry Log Pages: Not Supported 00:19:49.230 Persistent Event Log Pages: Not Supported 00:19:49.230 Supported Log Pages Log Page: May Support 00:19:49.230 Commands Supported & Effects Log Page: Not Supported 00:19:49.230 Feature Identifiers & Effects Log Page:May Support 00:19:49.230 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.230 Data Area 4 for Telemetry Log: Not Supported 00:19:49.230 Error Log Page Entries Supported: 1 00:19:49.230 Keep Alive: Not Supported 00:19:49.230 00:19:49.230 NVM Command Set Attributes 00:19:49.230 ========================== 00:19:49.230 Submission Queue Entry Size 00:19:49.230 Max: 64 00:19:49.230 Min: 64 00:19:49.230 Completion Queue Entry Size 00:19:49.230 Max: 16 00:19:49.230 Min: 16 00:19:49.230 Number of Namespaces: 256 00:19:49.230 Compare Command: Supported 00:19:49.230 Write Uncorrectable Command: Not Supported 00:19:49.230 Dataset Management Command: Supported 00:19:49.230 Write Zeroes Command: Supported 00:19:49.230 Set Features Save Field: Supported 00:19:49.230 Reservations: Not Supported 00:19:49.230 Timestamp: Supported 00:19:49.230 Copy: Supported 00:19:49.230 Volatile Write Cache: Present 00:19:49.230 Atomic Write Unit (Normal): 1 00:19:49.230 Atomic Write Unit (PFail): 1 00:19:49.230 Atomic Compare & Write Unit: 1 00:19:49.230 Fused Compare & Write: Not Supported 00:19:49.230 Scatter-Gather List 00:19:49.230 SGL Command Set: Supported 00:19:49.230 SGL Keyed: Not Supported 00:19:49.230 SGL Bit Bucket Descriptor: Not Supported 00:19:49.230 SGL Metadata Pointer: Not Supported 00:19:49.230 Oversized SGL: Not Supported 00:19:49.230 SGL Metadata Address: Not Supported 00:19:49.230 SGL Offset: Not Supported 00:19:49.230 Transport SGL Data Block: Not Supported 00:19:49.230 Replay Protected Memory Block: Not Supported 00:19:49.230 00:19:49.230 Firmware Slot Information 00:19:49.230 ========================= 00:19:49.230 Active slot: 1 00:19:49.230 Slot 1 Firmware Revision: 1.0 00:19:49.230 00:19:49.230 00:19:49.230 Commands Supported and Effects 00:19:49.230 ============================== 00:19:49.230 Admin Commands 00:19:49.230 -------------- 00:19:49.230 Delete I/O Submission Queue (00h): Supported 00:19:49.230 Create I/O Submission Queue (01h): Supported 00:19:49.230 Get Log Page (02h): Supported 00:19:49.230 Delete I/O Completion Queue (04h): Supported 00:19:49.230 Create I/O Completion Queue (05h): Supported 00:19:49.230 Identify (06h): Supported 00:19:49.230 Abort (08h): Supported 00:19:49.230 Set Features (09h): Supported 00:19:49.230 Get Features (0Ah): Supported 00:19:49.230 Asynchronous Event Request (0Ch): Supported 00:19:49.230 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:49.230 Directive Send (19h): Supported 00:19:49.230 Directive Receive (1Ah): Supported 00:19:49.230 Virtualization Management (1Ch): Supported 00:19:49.230 Doorbell Buffer Config (7Ch): Supported 00:19:49.230 Format NVM (80h): Supported LBA-Change 00:19:49.230 I/O Commands 00:19:49.230 ------------ 
00:19:49.230 Flush (00h): Supported LBA-Change 00:19:49.230 Write (01h): Supported LBA-Change 00:19:49.230 Read (02h): Supported 00:19:49.230 Compare (05h): Supported 00:19:49.230 Write Zeroes (08h): Supported LBA-Change 00:19:49.230 Dataset Management (09h): Supported LBA-Change 00:19:49.230 Unknown (0Ch): Supported 00:19:49.230 Unknown (12h): Supported 00:19:49.230 Copy (19h): Supported LBA-Change 00:19:49.230 Unknown (1Dh): Supported LBA-Change 00:19:49.230 00:19:49.230 Error Log 00:19:49.230 ========= 00:19:49.230 00:19:49.230 Arbitration 00:19:49.230 =========== 00:19:49.230 Arbitration Burst: no limit 00:19:49.230 00:19:49.230 Power Management 00:19:49.230 ================ 00:19:49.230 Number of Power States: 1 00:19:49.230 Current Power State: Power State #0 00:19:49.230 Power State #0: 00:19:49.230 Max Power: 25.00 W 00:19:49.230 Non-Operational State: Operational 00:19:49.230 Entry Latency: 16 microseconds 00:19:49.230 Exit Latency: 4 microseconds 00:19:49.230 Relative Read Throughput: 0 00:19:49.230 Relative Read Latency: 0 00:19:49.230 Relative Write Throughput: 0 00:19:49.230 Relative Write Latency: 0 00:19:49.230 Idle Power: Not Reported 00:19:49.230 Active Power: Not Reported 00:19:49.230 Non-Operational Permissive Mode: Not Supported 00:19:49.230 00:19:49.230 Health Information 00:19:49.230 ================== 00:19:49.230 Critical Warnings: 00:19:49.230 Available Spare Space: OK 00:19:49.230 Temperature: OK 00:19:49.230 Device Reliability: OK 00:19:49.230 Read Only: No 00:19:49.230 Volatile Memory Backup: OK 00:19:49.230 Current Temperature: 323 Kelvin (50 Celsius) 00:19:49.230 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:49.230 Available Spare: 0% 00:19:49.230 Available Spare Threshold: 0% 00:19:49.230 Life Percentage Used: 0% 00:19:49.230 Data Units Read: 12291 00:19:49.230 Data Units Written: 12276 00:19:49.230 Host Read Commands: 290929 00:19:49.230 Host Write Commands: 290788 00:19:49.230 Controller Busy Time: 0 minutes 00:19:49.230 Power Cycles: 0 00:19:49.230 Power On Hours: 0 hours 00:19:49.230 Unsafe Shutdowns: 0 00:19:49.230 Unrecoverable Media Errors: 0 00:19:49.230 Lifetime Error Log Entries: 0 00:19:49.230 Warning Temperature Time: 0 minutes 00:19:49.230 Critical Temperature Time: 0 minutes 00:19:49.230 00:19:49.230 Number of Queues 00:19:49.230 ================ 00:19:49.230 Number of I/O Submission Queues: 64 00:19:49.230 Number of I/O Completion Queues: 64 00:19:49.230 00:19:49.230 ZNS Specific Controller Data 00:19:49.230 ============================ 00:19:49.230 Zone Append Size Limit: 0 00:19:49.230 00:19:49.230 00:19:49.230 Active Namespaces 00:19:49.230 ================= 00:19:49.230 Namespace ID:1 00:19:49.230 Error Recovery Timeout: Unlimited 00:19:49.230 Command Set Identifier: NVM (00h) 00:19:49.230 Deallocate: Supported 00:19:49.230 Deallocated/Unwritten Error: Supported 00:19:49.230 Deallocated Read Value: All 0x00 00:19:49.230 Deallocate in Write Zeroes: Not Supported 00:19:49.230 Deallocated Guard Field: 0xFFFF 00:19:49.230 Flush: Supported 00:19:49.230 Reservation: Not Supported 00:19:49.230 Namespace Sharing Capabilities: Private 00:19:49.230 Size (in LBAs): 1310720 (5GiB) 00:19:49.230 Capacity (in LBAs): 1310720 (5GiB) 00:19:49.230 Utilization (in LBAs): 1310720 (5GiB) 00:19:49.230 Thin Provisioning: Not Supported 00:19:49.230 Per-NS Atomic Units: No 00:19:49.230 Maximum Single Source Range Length: 128 00:19:49.230 Maximum Copy Length: 128 00:19:49.230 Maximum Source Range Count: 128 00:19:49.230 NGUID/EUI64 Never Reused: No 
00:19:49.230 Namespace Write Protected: No 00:19:49.230 Number of LBA Formats: 8 00:19:49.230 Current LBA Format: LBA Format #04 00:19:49.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:49.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:49.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:49.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:49.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:49.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:49.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:49.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:49.230 00:19:49.230 NVM Specific Namespace Data 00:19:49.230 =========================== 00:19:49.230 Logical Block Storage Tag Mask: 0 00:19:49.230 Protection Information Capabilities: 00:19:49.230 16b Guard Protection Information Storage Tag Support: No 00:19:49.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:49.230 Storage Tag Check Read Support: No 00:19:49.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:49.230 00:19:49.230 real 0m1.228s 00:19:49.230 user 0m0.053s 00:19:49.230 sys 0m1.191s 00:19:49.230 15:08:14 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:49.230 15:08:14 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:49.230 ************************************ 00:19:49.230 END TEST nvme_identify 00:19:49.230 ************************************ 00:19:49.230 15:08:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:49.230 15:08:14 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:49.231 15:08:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:49.231 15:08:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.231 15:08:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.231 ************************************ 00:19:49.231 START TEST nvme_perf 00:19:49.231 ************************************ 00:19:49.231 15:08:14 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:19:49.231 15:08:14 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:49.797 EAL: TSC is not safe to use in SMP mode 00:19:49.797 EAL: TSC is not invariant 00:19:49.797 [2024-07-12 15:08:15.432608] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:50.731 Initializing NVMe Controllers 00:19:50.731 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:50.731 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:50.731 Initialization complete. Launching workers. 
00:19:50.731 ======================================================== 00:19:50.731 Latency(us) 00:19:50.731 Device Information : IOPS MiB/s Average min max 00:19:50.731 PCIE (0000:00:10.0) NSID 1 from core 0: 87177.36 1021.61 1469.09 383.65 4006.77 00:19:50.731 ======================================================== 00:19:50.731 Total : 87177.36 1021.61 1469.09 383.65 4006.77 00:19:50.731 00:19:50.731 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:50.731 ================================================================================= 00:19:50.731 1.00000% : 1191.564us 00:19:50.731 10.00000% : 1273.484us 00:19:50.731 25.00000% : 1333.063us 00:19:50.731 50.00000% : 1437.325us 00:19:50.731 75.00000% : 1571.376us 00:19:50.731 90.00000% : 1683.085us 00:19:50.731 95.00000% : 1742.663us 00:19:50.731 98.00000% : 1899.056us 00:19:50.731 99.00000% : 2323.551us 00:19:50.731 99.50000% : 2517.180us 00:19:50.731 99.90000% : 3395.959us 00:19:50.731 99.99000% : 3842.795us 00:19:50.731 99.99900% : 4021.530us 00:19:50.731 99.99990% : 4021.530us 00:19:50.731 99.99999% : 4021.530us 00:19:50.731 00:19:50.731 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:50.731 ============================================================================== 00:19:50.731 Range in us Cumulative IO count 00:19:50.731 383.535 - 385.397: 0.0023% ( 2) 00:19:50.731 385.397 - 387.258: 0.0034% ( 1) 00:19:50.731 387.258 - 389.120: 0.0057% ( 2) 00:19:50.731 389.120 - 390.982: 0.0080% ( 2) 00:19:50.731 390.982 - 392.844: 0.0092% ( 1) 00:19:50.731 392.844 - 394.706: 0.0115% ( 2) 00:19:50.731 394.706 - 396.568: 0.0138% ( 2) 00:19:50.731 396.568 - 398.429: 0.0161% ( 2) 00:19:50.731 398.429 - 400.291: 0.0172% ( 1) 00:19:50.731 400.291 - 402.153: 0.0195% ( 2) 00:19:50.731 402.153 - 404.015: 0.0218% ( 2) 00:19:50.731 404.015 - 405.877: 0.0229% ( 1) 00:19:50.731 405.877 - 407.738: 0.0252% ( 2) 00:19:50.731 407.738 - 409.600: 0.0275% ( 2) 00:19:50.731 409.600 - 411.462: 0.0287% ( 1) 00:19:50.731 411.462 - 413.324: 0.0310% ( 2) 00:19:50.731 413.324 - 415.186: 0.0321% ( 1) 00:19:50.731 793.135 - 796.859: 0.0333% ( 1) 00:19:50.731 796.859 - 800.582: 0.0356% ( 2) 00:19:50.731 800.582 - 804.306: 0.0378% ( 2) 00:19:50.731 804.306 - 808.030: 0.0413% ( 3) 00:19:50.731 808.030 - 811.753: 0.0436% ( 2) 00:19:50.731 811.753 - 815.477: 0.0447% ( 1) 00:19:50.731 837.819 - 841.542: 0.0470% ( 2) 00:19:50.731 841.542 - 845.266: 0.0493% ( 2) 00:19:50.731 845.266 - 848.990: 0.0505% ( 1) 00:19:50.731 953.252 - 960.699: 0.0516% ( 1) 00:19:50.731 960.699 - 968.146: 0.0562% ( 4) 00:19:50.731 1102.197 - 1109.644: 0.0585% ( 2) 00:19:50.731 1109.644 - 1117.092: 0.0631% ( 4) 00:19:50.731 1117.092 - 1124.539: 0.0757% ( 11) 00:19:50.731 1124.539 - 1131.986: 0.0929% ( 15) 00:19:50.731 1131.986 - 1139.433: 0.1101% ( 15) 00:19:50.731 1139.433 - 1146.881: 0.1422% ( 28) 00:19:50.731 1146.881 - 1154.328: 0.1984% ( 49) 00:19:50.731 1154.328 - 1161.775: 0.2753% ( 67) 00:19:50.731 1161.775 - 1169.223: 0.3923% ( 102) 00:19:50.731 1169.223 - 1176.670: 0.5758% ( 160) 00:19:50.731 1176.670 - 1184.117: 0.8224% ( 215) 00:19:50.731 1184.117 - 1191.564: 1.1309% ( 269) 00:19:50.731 1191.564 - 1199.012: 1.5094% ( 330) 00:19:50.731 1199.012 - 1206.459: 1.9980% ( 426) 00:19:50.731 1206.459 - 1213.906: 2.5841% ( 511) 00:19:50.731 1213.906 - 1221.354: 3.2929% ( 618) 00:19:50.731 1221.354 - 1228.801: 4.1015% ( 705) 00:19:50.731 1228.801 - 1236.248: 5.0087% ( 791) 00:19:50.731 1236.248 - 1243.695: 6.0169% ( 879) 00:19:50.731 1243.695 - 1251.143: 7.1111% ( 
954) 00:19:50.731 1251.143 - 1258.590: 8.3050% ( 1041) 00:19:50.731 1258.590 - 1266.037: 9.6309% ( 1156) 00:19:50.731 1266.037 - 1273.484: 11.0451% ( 1233) 00:19:50.731 1273.484 - 1280.932: 12.5682% ( 1328) 00:19:50.731 1280.932 - 1288.379: 14.2072% ( 1429) 00:19:50.731 1288.379 - 1295.826: 15.9093% ( 1484) 00:19:50.731 1295.826 - 1303.274: 17.6538% ( 1521) 00:19:50.731 1303.274 - 1310.721: 19.4763% ( 1589) 00:19:50.731 1310.721 - 1318.168: 21.3412% ( 1626) 00:19:50.731 1318.168 - 1325.615: 23.2716% ( 1683) 00:19:50.731 1325.615 - 1333.063: 25.2042% ( 1685) 00:19:50.731 1333.063 - 1340.510: 27.1586% ( 1704) 00:19:50.731 1340.510 - 1347.957: 29.1003% ( 1693) 00:19:50.731 1347.957 - 1355.405: 31.0008% ( 1657) 00:19:50.731 1355.405 - 1362.852: 32.9048% ( 1660) 00:19:50.731 1362.852 - 1370.299: 34.7903% ( 1644) 00:19:50.731 1370.299 - 1377.746: 36.6966% ( 1662) 00:19:50.731 1377.746 - 1385.194: 38.5925% ( 1653) 00:19:50.731 1385.194 - 1392.641: 40.4666% ( 1634) 00:19:50.731 1392.641 - 1400.088: 42.3143% ( 1611) 00:19:50.731 1400.088 - 1407.535: 44.0944% ( 1552) 00:19:50.731 1407.535 - 1414.983: 45.8056% ( 1492) 00:19:50.731 1414.983 - 1422.430: 47.4331% ( 1419) 00:19:50.731 1422.430 - 1429.877: 48.9861% ( 1354) 00:19:50.731 1429.877 - 1437.325: 50.4794% ( 1302) 00:19:50.731 1437.325 - 1444.772: 51.9613% ( 1292) 00:19:50.731 1444.772 - 1452.219: 53.4248% ( 1276) 00:19:50.731 1452.219 - 1459.666: 54.8424% ( 1236) 00:19:50.731 1459.666 - 1467.114: 56.2887% ( 1261) 00:19:50.731 1467.114 - 1474.561: 57.6662% ( 1201) 00:19:50.731 1474.561 - 1482.008: 59.0345% ( 1193) 00:19:50.731 1482.008 - 1489.456: 60.4040% ( 1194) 00:19:50.731 1489.456 - 1496.903: 61.7665% ( 1188) 00:19:50.731 1496.903 - 1504.350: 63.1165% ( 1177) 00:19:50.731 1504.350 - 1511.797: 64.4424% ( 1156) 00:19:50.731 1511.797 - 1519.245: 65.7785% ( 1165) 00:19:50.731 1519.245 - 1526.692: 67.0952% ( 1148) 00:19:50.731 1526.692 - 1534.139: 68.4360% ( 1169) 00:19:50.731 1534.139 - 1541.586: 69.7550% ( 1150) 00:19:50.731 1541.586 - 1549.034: 71.0935% ( 1167) 00:19:50.731 1549.034 - 1556.481: 72.4159% ( 1153) 00:19:50.731 1556.481 - 1563.928: 73.7269% ( 1143) 00:19:50.731 1563.928 - 1571.376: 75.0172% ( 1125) 00:19:50.731 1571.376 - 1578.823: 76.2708% ( 1093) 00:19:50.731 1578.823 - 1586.270: 77.5107% ( 1081) 00:19:50.731 1586.270 - 1593.717: 78.7138% ( 1049) 00:19:50.731 1593.717 - 1601.165: 79.8975% ( 1032) 00:19:50.731 1601.165 - 1608.612: 81.0570% ( 1011) 00:19:50.731 1608.612 - 1616.059: 82.1627% ( 964) 00:19:50.731 1616.059 - 1623.507: 83.2179% ( 920) 00:19:50.731 1623.507 - 1630.954: 84.2696% ( 917) 00:19:50.731 1630.954 - 1638.401: 85.2812% ( 882) 00:19:50.731 1638.401 - 1645.848: 86.2538% ( 848) 00:19:50.731 1645.848 - 1653.296: 87.1909% ( 817) 00:19:50.731 1653.296 - 1660.743: 88.0729% ( 769) 00:19:50.731 1660.743 - 1668.190: 88.9216% ( 740) 00:19:50.732 1668.190 - 1675.637: 89.7463% ( 719) 00:19:50.732 1675.637 - 1683.085: 90.5113% ( 667) 00:19:50.732 1683.085 - 1690.532: 91.2568% ( 650) 00:19:50.732 1690.532 - 1697.979: 91.9588% ( 612) 00:19:50.732 1697.979 - 1705.427: 92.6205% ( 577) 00:19:50.732 1705.427 - 1712.874: 93.2353% ( 536) 00:19:50.732 1712.874 - 1720.321: 93.7767% ( 472) 00:19:50.732 1720.321 - 1727.768: 94.2618% ( 423) 00:19:50.732 1727.768 - 1735.216: 94.6931% ( 376) 00:19:50.732 1735.216 - 1742.663: 95.0796% ( 337) 00:19:50.732 1742.663 - 1750.110: 95.4329% ( 308) 00:19:50.732 1750.110 - 1757.558: 95.7609% ( 286) 00:19:50.732 1757.558 - 1765.005: 96.0453% ( 248) 00:19:50.732 1765.005 - 1772.452: 96.3068% ( 228) 
00:19:50.732 1772.452 - 1779.899: 96.5305% ( 195) 00:19:50.732 1779.899 - 1787.347: 96.7220% ( 167) 00:19:50.732 1787.347 - 1794.794: 96.8849% ( 142) 00:19:50.732 1794.794 - 1802.241: 97.0111% ( 110) 00:19:50.732 1802.241 - 1809.688: 97.1349% ( 108) 00:19:50.732 1809.688 - 1817.136: 97.2485% ( 99) 00:19:50.732 1817.136 - 1824.583: 97.3494% ( 88) 00:19:50.732 1824.583 - 1832.030: 97.4457% ( 84) 00:19:50.732 1832.030 - 1839.478: 97.5398% ( 82) 00:19:50.732 1839.478 - 1846.925: 97.6247% ( 74) 00:19:50.732 1846.925 - 1854.372: 97.6981% ( 64) 00:19:50.732 1854.372 - 1861.819: 97.7635% ( 57) 00:19:50.732 1861.819 - 1869.267: 97.8300% ( 58) 00:19:50.732 1869.267 - 1876.714: 97.8896% ( 52) 00:19:50.732 1876.714 - 1884.161: 97.9481% ( 51) 00:19:50.732 1884.161 - 1891.609: 97.9997% ( 45) 00:19:50.732 1891.609 - 1899.056: 98.0479% ( 42) 00:19:50.732 1899.056 - 1906.503: 98.0857% ( 33) 00:19:50.732 1906.503 - 1921.398: 98.1488% ( 55) 00:19:50.732 1921.398 - 1936.292: 98.2050% ( 49) 00:19:50.732 1936.292 - 1951.187: 98.2589% ( 47) 00:19:50.732 1951.187 - 1966.081: 98.3117% ( 46) 00:19:50.732 1966.081 - 1980.976: 98.3450% ( 29) 00:19:50.732 1980.976 - 1995.870: 98.3610% ( 14) 00:19:50.732 1995.870 - 2010.765: 98.3725% ( 10) 00:19:50.732 2010.765 - 2025.660: 98.3828% ( 9) 00:19:50.732 2025.660 - 2040.554: 98.3966% ( 12) 00:19:50.732 2040.554 - 2055.449: 98.4149% ( 16) 00:19:50.732 2055.449 - 2070.343: 98.4356% ( 18) 00:19:50.732 2070.343 - 2085.238: 98.4562% ( 18) 00:19:50.732 2085.238 - 2100.132: 98.4780% ( 19) 00:19:50.732 2100.132 - 2115.027: 98.5055% ( 24) 00:19:50.732 2115.027 - 2129.921: 98.5411% ( 31) 00:19:50.732 2129.921 - 2144.816: 98.5801% ( 34) 00:19:50.732 2144.816 - 2159.711: 98.6283% ( 42) 00:19:50.732 2159.711 - 2174.605: 98.6776% ( 43) 00:19:50.732 2174.605 - 2189.500: 98.7326% ( 48) 00:19:50.732 2189.500 - 2204.394: 98.7831% ( 44) 00:19:50.732 2204.394 - 2219.289: 98.8232% ( 35) 00:19:50.732 2219.289 - 2234.183: 98.8611% ( 33) 00:19:50.732 2234.183 - 2249.078: 98.8989% ( 33) 00:19:50.732 2249.078 - 2263.972: 98.9368% ( 33) 00:19:50.732 2263.972 - 2278.867: 98.9517% ( 13) 00:19:50.732 2278.867 - 2293.762: 98.9643% ( 11) 00:19:50.732 2293.762 - 2308.656: 98.9781% ( 12) 00:19:50.732 2308.656 - 2323.551: 99.0067% ( 25) 00:19:50.732 2323.551 - 2338.445: 99.0331% ( 23) 00:19:50.732 2338.445 - 2353.340: 99.0664% ( 29) 00:19:50.732 2353.340 - 2368.234: 99.1088% ( 37) 00:19:50.732 2368.234 - 2383.129: 99.1513% ( 37) 00:19:50.732 2383.129 - 2398.023: 99.1925% ( 36) 00:19:50.732 2398.023 - 2412.918: 99.2338% ( 36) 00:19:50.732 2412.918 - 2427.813: 99.2820% ( 42) 00:19:50.732 2427.813 - 2442.707: 99.3313% ( 43) 00:19:50.732 2442.707 - 2457.602: 99.3795% ( 42) 00:19:50.732 2457.602 - 2472.496: 99.4162% ( 32) 00:19:50.732 2472.496 - 2487.391: 99.4483% ( 28) 00:19:50.732 2487.391 - 2502.285: 99.4839% ( 31) 00:19:50.732 2502.285 - 2517.180: 99.5206% ( 32) 00:19:50.732 2517.180 - 2532.074: 99.5550% ( 30) 00:19:50.732 2532.074 - 2546.969: 99.5710% ( 14) 00:19:50.732 2546.969 - 2561.864: 99.5837% ( 11) 00:19:50.732 2561.864 - 2576.758: 99.5963% ( 11) 00:19:50.732 2576.758 - 2591.653: 99.6055% ( 8) 00:19:50.732 2591.653 - 2606.547: 99.6135% ( 7) 00:19:50.732 2606.547 - 2621.442: 99.6215% ( 7) 00:19:50.732 2621.442 - 2636.336: 99.6295% ( 7) 00:19:50.732 2636.336 - 2651.231: 99.6376% ( 7) 00:19:50.732 2651.231 - 2666.125: 99.6433% ( 5) 00:19:50.732 2666.125 - 2681.020: 99.6444% ( 1) 00:19:50.732 2681.020 - 2695.915: 99.6525% ( 7) 00:19:50.732 2695.915 - 2710.809: 99.6605% ( 7) 00:19:50.732 2710.809 - 
2725.704: 99.6685% ( 7) 00:19:50.732 2725.704 - 2740.598: 99.6766% ( 7) 00:19:50.732 2740.598 - 2755.493: 99.6846% ( 7) 00:19:50.732 2755.493 - 2770.387: 99.6926% ( 7) 00:19:50.732 2770.387 - 2785.282: 99.6995% ( 6) 00:19:50.732 2785.282 - 2800.176: 99.7029% ( 3) 00:19:50.732 2859.755 - 2874.649: 99.7041% ( 1) 00:19:50.732 2889.544 - 2904.438: 99.7087% ( 4) 00:19:50.732 2904.438 - 2919.333: 99.7098% ( 1) 00:19:50.732 2949.122 - 2964.017: 99.7110% ( 1) 00:19:50.732 3112.962 - 3127.857: 99.7133% ( 2) 00:19:50.732 3127.857 - 3142.751: 99.7213% ( 7) 00:19:50.732 3142.751 - 3157.646: 99.7305% ( 8) 00:19:50.732 3157.646 - 3172.540: 99.7396% ( 8) 00:19:50.732 3172.540 - 3187.435: 99.7500% ( 9) 00:19:50.732 3187.435 - 3202.329: 99.7603% ( 9) 00:19:50.732 3202.329 - 3217.224: 99.7729% ( 11) 00:19:50.732 3217.224 - 3232.119: 99.7855% ( 11) 00:19:50.732 3232.119 - 3247.013: 99.7993% ( 12) 00:19:50.732 3247.013 - 3261.908: 99.8119% ( 11) 00:19:50.732 3261.908 - 3276.802: 99.8245% ( 11) 00:19:50.732 3276.802 - 3291.697: 99.8383% ( 12) 00:19:50.732 3291.697 - 3306.591: 99.8486% ( 9) 00:19:50.732 3306.591 - 3321.486: 99.8635% ( 13) 00:19:50.732 3321.486 - 3336.380: 99.8761% ( 11) 00:19:50.732 3336.380 - 3351.275: 99.8876% ( 10) 00:19:50.732 3351.275 - 3366.170: 99.8922% ( 4) 00:19:50.732 3366.170 - 3381.064: 99.8956% ( 3) 00:19:50.732 3381.064 - 3395.959: 99.9002% ( 4) 00:19:50.732 3395.959 - 3410.853: 99.9037% ( 3) 00:19:50.732 3410.853 - 3425.748: 99.9071% ( 3) 00:19:50.732 3425.748 - 3440.642: 99.9082% ( 1) 00:19:50.732 3440.642 - 3455.537: 99.9117% ( 3) 00:19:50.732 3455.537 - 3470.431: 99.9163% ( 4) 00:19:50.732 3470.431 - 3485.326: 99.9197% ( 3) 00:19:50.732 3485.326 - 3500.220: 99.9232% ( 3) 00:19:50.732 3500.220 - 3515.115: 99.9266% ( 3) 00:19:50.732 3515.115 - 3530.010: 99.9312% ( 4) 00:19:50.732 3530.010 - 3544.904: 99.9346% ( 3) 00:19:50.732 3544.904 - 3559.799: 99.9381% ( 3) 00:19:50.732 3559.799 - 3574.693: 99.9415% ( 3) 00:19:50.732 3574.693 - 3589.588: 99.9438% ( 2) 00:19:50.732 3589.588 - 3604.482: 99.9472% ( 3) 00:19:50.732 3604.482 - 3619.377: 99.9518% ( 4) 00:19:50.732 3619.377 - 3634.271: 99.9553% ( 3) 00:19:50.732 3634.271 - 3649.166: 99.9587% ( 3) 00:19:50.732 3649.166 - 3664.061: 99.9610% ( 2) 00:19:50.732 3664.061 - 3678.955: 99.9644% ( 3) 00:19:50.732 3678.955 - 3693.850: 99.9667% ( 2) 00:19:50.732 3693.850 - 3708.744: 99.9690% ( 2) 00:19:50.732 3708.744 - 3723.639: 99.9725% ( 3) 00:19:50.732 3723.639 - 3738.533: 99.9748% ( 2) 00:19:50.732 3738.533 - 3753.428: 99.9771% ( 2) 00:19:50.732 3753.428 - 3768.322: 99.9805% ( 3) 00:19:50.732 3768.322 - 3783.217: 99.9828% ( 2) 00:19:50.732 3783.217 - 3798.112: 99.9862% ( 3) 00:19:50.732 3798.112 - 3813.006: 99.9897% ( 3) 00:19:50.732 3813.006 - 3842.795: 99.9931% ( 3) 00:19:50.732 3842.795 - 3872.584: 99.9966% ( 3) 00:19:50.732 3872.584 - 3902.373: 99.9989% ( 2) 00:19:50.732 3991.741 - 4021.530: 100.0000% ( 1) 00:19:50.732 00:19:50.732 15:08:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:51.297 EAL: TSC is not safe to use in SMP mode 00:19:51.297 EAL: TSC is not invariant 00:19:51.297 [2024-07-12 15:08:17.099103] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:52.679 Initializing NVMe Controllers 00:19:52.679 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:52.679 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:52.679 Initialization complete. Launching workers. 
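The second perf pass launched just above uses /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf with -q 128 (queue depth), -w write (workload), -o 12288 (I/O size in bytes), -t 1 (run time in seconds), -LL (software latency tracking, given twice for the per-bucket histogram) and -i 0 (shared-memory ID); that reading of the flags is inferred from the output around it rather than taken from the tool's help text. A minimal way to repeat just this pass outside the autotest harness, assuming the same repo checkout and that the 0000:00:10.0 controller is already set up for SPDK use:

    cd /home/vagrant/spdk_repo/spdk
    # same invocation as recorded in the log; run as root so the PCIe device can be claimed
    sudo ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0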
00:19:52.679 ======================================================== 00:19:52.679 Latency(us) 00:19:52.679 Device Information : IOPS MiB/s Average min max 00:19:52.679 PCIE (0000:00:10.0) NSID 1 from core 0: 69747.19 817.35 1835.18 596.13 5737.03 00:19:52.679 ======================================================== 00:19:52.679 Total : 69747.19 817.35 1835.18 596.13 5737.03 00:19:52.679 00:19:52.679 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:52.679 ================================================================================= 00:19:52.679 1.00000% : 1437.325us 00:19:52.679 10.00000% : 1563.928us 00:19:52.679 25.00000% : 1660.743us 00:19:52.679 50.00000% : 1809.688us 00:19:52.679 75.00000% : 1995.870us 00:19:52.679 90.00000% : 2129.921us 00:19:52.679 95.00000% : 2249.078us 00:19:52.679 98.00000% : 2398.023us 00:19:52.679 99.00000% : 2561.864us 00:19:52.679 99.50000% : 2725.704us 00:19:52.679 99.90000% : 3083.173us 00:19:52.679 99.99000% : 3932.163us 00:19:52.679 99.99900% : 5749.298us 00:19:52.679 99.99990% : 5749.298us 00:19:52.679 99.99999% : 5749.298us 00:19:52.679 00:19:52.679 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:52.679 ============================================================================== 00:19:52.679 Range in us Cumulative IO count 00:19:52.679 595.782 - 599.506: 0.0057% ( 4) 00:19:52.679 599.506 - 603.229: 0.0100% ( 3) 00:19:52.679 603.229 - 606.953: 0.0115% ( 1) 00:19:52.679 659.084 - 662.808: 0.0129% ( 1) 00:19:52.679 662.808 - 666.531: 0.0172% ( 3) 00:19:52.679 953.252 - 960.699: 0.0272% ( 7) 00:19:52.679 960.699 - 968.146: 0.0401% ( 9) 00:19:52.679 968.146 - 975.593: 0.0444% ( 3) 00:19:52.679 1079.855 - 1087.303: 0.0473% ( 2) 00:19:52.679 1087.303 - 1094.750: 0.0702% ( 16) 00:19:52.679 1094.750 - 1102.197: 0.0745% ( 3) 00:19:52.679 1102.197 - 1109.644: 0.0788% ( 3) 00:19:52.679 1109.644 - 1117.092: 0.0831% ( 3) 00:19:52.679 1117.092 - 1124.539: 0.0846% ( 1) 00:19:52.679 1213.906 - 1221.354: 0.0960% ( 8) 00:19:52.679 1221.354 - 1228.801: 0.1132% ( 12) 00:19:52.679 1228.801 - 1236.248: 0.1319% ( 13) 00:19:52.679 1236.248 - 1243.695: 0.1404% ( 6) 00:19:52.679 1243.695 - 1251.143: 0.1476% ( 5) 00:19:52.679 1251.143 - 1258.590: 0.1548% ( 5) 00:19:52.679 1258.590 - 1266.037: 0.1619% ( 5) 00:19:52.679 1266.037 - 1273.484: 0.1705% ( 6) 00:19:52.679 1273.484 - 1280.932: 0.1791% ( 6) 00:19:52.679 1280.932 - 1288.379: 0.1849% ( 4) 00:19:52.679 1310.721 - 1318.168: 0.1892% ( 3) 00:19:52.679 1318.168 - 1325.615: 0.2049% ( 11) 00:19:52.679 1325.615 - 1333.063: 0.2322% ( 19) 00:19:52.679 1333.063 - 1340.510: 0.2494% ( 12) 00:19:52.679 1340.510 - 1347.957: 0.2795% ( 21) 00:19:52.679 1347.957 - 1355.405: 0.2924% ( 9) 00:19:52.679 1355.405 - 1362.852: 0.3053% ( 9) 00:19:52.679 1362.852 - 1370.299: 0.3354% ( 21) 00:19:52.679 1370.299 - 1377.746: 0.3540% ( 13) 00:19:52.679 1377.746 - 1385.194: 0.3798% ( 18) 00:19:52.679 1385.194 - 1392.641: 0.4099% ( 21) 00:19:52.679 1392.641 - 1400.088: 0.4529% ( 30) 00:19:52.679 1400.088 - 1407.535: 0.5331% ( 56) 00:19:52.679 1407.535 - 1414.983: 0.6335% ( 70) 00:19:52.679 1414.983 - 1422.430: 0.7667% ( 93) 00:19:52.679 1422.430 - 1429.877: 0.9559% ( 132) 00:19:52.679 1429.877 - 1437.325: 1.2339% ( 194) 00:19:52.679 1437.325 - 1444.772: 1.4575% ( 156) 00:19:52.679 1444.772 - 1452.219: 1.7298% ( 190) 00:19:52.679 1452.219 - 1459.666: 2.0394% ( 216) 00:19:52.679 1459.666 - 1467.114: 2.3747% ( 234) 00:19:52.679 1467.114 - 1474.561: 2.7789% ( 282) 00:19:52.679 1474.561 - 1482.008: 3.1716% ( 274) 
00:19:52.679 1482.008 - 1489.456: 3.5499% ( 264) 00:19:52.679 1489.456 - 1496.903: 4.0014% ( 315) 00:19:52.679 1496.903 - 1504.350: 4.4915% ( 342) 00:19:52.679 1504.350 - 1511.797: 4.9903% ( 348) 00:19:52.679 1511.797 - 1519.245: 5.5707% ( 405) 00:19:52.679 1519.245 - 1526.692: 6.2142% ( 449) 00:19:52.679 1526.692 - 1534.139: 6.9365% ( 504) 00:19:52.679 1534.139 - 1541.586: 7.7133% ( 542) 00:19:52.679 1541.586 - 1549.034: 8.5574% ( 589) 00:19:52.679 1549.034 - 1556.481: 9.4803% ( 644) 00:19:52.679 1556.481 - 1563.928: 10.4004% ( 642) 00:19:52.679 1563.928 - 1571.376: 11.3133% ( 637) 00:19:52.679 1571.376 - 1578.823: 12.2005% ( 619) 00:19:52.679 1578.823 - 1586.270: 13.1879% ( 689) 00:19:52.679 1586.270 - 1593.717: 14.2528% ( 743) 00:19:52.679 1593.717 - 1601.165: 15.4380% ( 827) 00:19:52.679 1601.165 - 1608.612: 16.6261% ( 829) 00:19:52.679 1608.612 - 1616.059: 17.9947% ( 955) 00:19:52.679 1616.059 - 1623.507: 19.3276% ( 930) 00:19:52.679 1623.507 - 1630.954: 20.6303% ( 909) 00:19:52.679 1630.954 - 1638.401: 21.9689% ( 934) 00:19:52.679 1638.401 - 1645.848: 23.2874% ( 920) 00:19:52.679 1645.848 - 1653.296: 24.7449% ( 1017) 00:19:52.679 1653.296 - 1660.743: 26.1738% ( 997) 00:19:52.679 1660.743 - 1668.190: 27.6255% ( 1013) 00:19:52.679 1668.190 - 1675.637: 29.0802% ( 1015) 00:19:52.680 1675.637 - 1683.085: 30.7427% ( 1160) 00:19:52.680 1683.085 - 1690.532: 32.3191% ( 1100) 00:19:52.680 1690.532 - 1697.979: 33.8641% ( 1078) 00:19:52.680 1697.979 - 1705.427: 35.4033% ( 1074) 00:19:52.680 1705.427 - 1712.874: 36.9052% ( 1048) 00:19:52.680 1712.874 - 1720.321: 38.2653% ( 949) 00:19:52.680 1720.321 - 1727.768: 39.5580% ( 902) 00:19:52.680 1727.768 - 1735.216: 40.7676% ( 844) 00:19:52.680 1735.216 - 1742.663: 41.9457% ( 822) 00:19:52.680 1742.663 - 1750.110: 43.0463% ( 768) 00:19:52.680 1750.110 - 1757.558: 44.0180% ( 678) 00:19:52.680 1757.558 - 1765.005: 44.9309% ( 637) 00:19:52.680 1765.005 - 1772.452: 45.8567% ( 646) 00:19:52.680 1772.452 - 1779.899: 46.8270% ( 677) 00:19:52.680 1779.899 - 1787.347: 47.7743% ( 661) 00:19:52.680 1787.347 - 1794.794: 48.6514% ( 612) 00:19:52.680 1794.794 - 1802.241: 49.5428% ( 622) 00:19:52.680 1802.241 - 1809.688: 50.4213% ( 613) 00:19:52.680 1809.688 - 1817.136: 51.3386% ( 640) 00:19:52.680 1817.136 - 1824.583: 52.2601% ( 643) 00:19:52.680 1824.583 - 1832.030: 53.2045% ( 659) 00:19:52.680 1832.030 - 1839.478: 54.1533% ( 662) 00:19:52.680 1839.478 - 1846.925: 55.0977% ( 659) 00:19:52.680 1846.925 - 1854.372: 56.0895% ( 692) 00:19:52.680 1854.372 - 1861.819: 57.1013% ( 706) 00:19:52.680 1861.819 - 1869.267: 58.0801% ( 683) 00:19:52.680 1869.267 - 1876.714: 59.0088% ( 648) 00:19:52.680 1876.714 - 1884.161: 60.0335% ( 715) 00:19:52.680 1884.161 - 1891.609: 61.0783% ( 729) 00:19:52.680 1891.609 - 1899.056: 62.0615% ( 686) 00:19:52.680 1899.056 - 1906.503: 63.0933% ( 720) 00:19:52.680 1906.503 - 1921.398: 65.2531% ( 1507) 00:19:52.680 1921.398 - 1936.292: 67.4243% ( 1515) 00:19:52.680 1936.292 - 1951.187: 69.5870% ( 1509) 00:19:52.680 1951.187 - 1966.081: 71.7553% ( 1513) 00:19:52.680 1966.081 - 1980.976: 73.7345% ( 1381) 00:19:52.680 1980.976 - 1995.870: 75.7022% ( 1373) 00:19:52.680 1995.870 - 2010.765: 77.6485% ( 1358) 00:19:52.680 2010.765 - 2025.660: 79.5202% ( 1306) 00:19:52.680 2025.660 - 2040.554: 81.3331% ( 1265) 00:19:52.680 2040.554 - 2055.449: 83.0558% ( 1202) 00:19:52.680 2055.449 - 2070.343: 84.7254% ( 1165) 00:19:52.680 2070.343 - 2085.238: 86.2890% ( 1091) 00:19:52.680 2085.238 - 2100.132: 87.6748% ( 967) 00:19:52.680 2100.132 - 2115.027: 88.9246% ( 
872) 00:19:52.680 2115.027 - 2129.921: 90.1399% ( 848) 00:19:52.680 2129.921 - 2144.816: 91.1173% ( 682) 00:19:52.680 2144.816 - 2159.711: 91.9643% ( 591) 00:19:52.680 2159.711 - 2174.605: 92.7095% ( 520) 00:19:52.680 2174.605 - 2189.500: 93.3487% ( 446) 00:19:52.680 2189.500 - 2204.394: 93.9449% ( 416) 00:19:52.680 2204.394 - 2219.289: 94.4752% ( 370) 00:19:52.680 2219.289 - 2234.183: 94.9610% ( 339) 00:19:52.680 2234.183 - 2249.078: 95.3795% ( 292) 00:19:52.680 2249.078 - 2263.972: 95.7922% ( 288) 00:19:52.680 2263.972 - 2278.867: 96.2107% ( 292) 00:19:52.680 2278.867 - 2293.762: 96.5232% ( 218) 00:19:52.680 2293.762 - 2308.656: 96.8084% ( 199) 00:19:52.680 2308.656 - 2323.551: 97.0678% ( 181) 00:19:52.680 2323.551 - 2338.445: 97.3186% ( 175) 00:19:52.680 2338.445 - 2353.340: 97.5407% ( 155) 00:19:52.680 2353.340 - 2368.234: 97.7256% ( 129) 00:19:52.680 2368.234 - 2383.129: 97.9320% ( 144) 00:19:52.680 2383.129 - 2398.023: 98.1039% ( 120) 00:19:52.680 2398.023 - 2412.918: 98.2458% ( 99) 00:19:52.680 2412.918 - 2427.813: 98.3519% ( 74) 00:19:52.680 2427.813 - 2442.707: 98.4350% ( 58) 00:19:52.680 2442.707 - 2457.602: 98.5382% ( 72) 00:19:52.680 2457.602 - 2472.496: 98.6270% ( 62) 00:19:52.680 2472.496 - 2487.391: 98.7188% ( 64) 00:19:52.680 2487.391 - 2502.285: 98.8019% ( 58) 00:19:52.680 2502.285 - 2517.180: 98.8692% ( 47) 00:19:52.680 2517.180 - 2532.074: 98.9352% ( 46) 00:19:52.680 2532.074 - 2546.969: 98.9997% ( 45) 00:19:52.680 2546.969 - 2561.864: 99.0555% ( 39) 00:19:52.680 2561.864 - 2576.758: 99.1028% ( 33) 00:19:52.680 2576.758 - 2591.653: 99.1645% ( 43) 00:19:52.680 2591.653 - 2606.547: 99.2161% ( 36) 00:19:52.680 2606.547 - 2621.442: 99.2591% ( 30) 00:19:52.680 2621.442 - 2636.336: 99.2963% ( 26) 00:19:52.680 2636.336 - 2651.231: 99.3364% ( 28) 00:19:52.680 2651.231 - 2666.125: 99.3694% ( 23) 00:19:52.680 2666.125 - 2681.020: 99.4052% ( 25) 00:19:52.680 2681.020 - 2695.915: 99.4511% ( 32) 00:19:52.680 2695.915 - 2710.809: 99.4955% ( 31) 00:19:52.680 2710.809 - 2725.704: 99.5127% ( 12) 00:19:52.680 2725.704 - 2740.598: 99.5299% ( 12) 00:19:52.680 2740.598 - 2755.493: 99.5600% ( 21) 00:19:52.680 2755.493 - 2770.387: 99.5916% ( 22) 00:19:52.680 2770.387 - 2785.282: 99.6145% ( 16) 00:19:52.680 2785.282 - 2800.176: 99.6259% ( 8) 00:19:52.680 2800.176 - 2815.071: 99.6374% ( 8) 00:19:52.680 2815.071 - 2829.966: 99.6618% ( 17) 00:19:52.680 2829.966 - 2844.860: 99.6847% ( 16) 00:19:52.680 2844.860 - 2859.755: 99.7076% ( 16) 00:19:52.680 2859.755 - 2874.649: 99.7220% ( 10) 00:19:52.680 2874.649 - 2889.544: 99.7392% ( 12) 00:19:52.680 2889.544 - 2904.438: 99.7506% ( 8) 00:19:52.680 2904.438 - 2919.333: 99.7621% ( 8) 00:19:52.680 2919.333 - 2934.227: 99.7879% ( 18) 00:19:52.680 2934.227 - 2949.122: 99.7979% ( 7) 00:19:52.680 2949.122 - 2964.017: 99.8094% ( 8) 00:19:52.680 2964.017 - 2978.911: 99.8209% ( 8) 00:19:52.680 2978.911 - 2993.806: 99.8323% ( 8) 00:19:52.680 2993.806 - 3008.700: 99.8596% ( 19) 00:19:52.680 3008.700 - 3023.595: 99.8739% ( 10) 00:19:52.680 3023.595 - 3038.489: 99.8825% ( 6) 00:19:52.680 3038.489 - 3053.384: 99.8911% ( 6) 00:19:52.680 3053.384 - 3068.278: 99.8997% ( 6) 00:19:52.680 3068.278 - 3083.173: 99.9083% ( 6) 00:19:52.680 3083.173 - 3098.068: 99.9154% ( 5) 00:19:52.680 3098.068 - 3112.962: 99.9169% ( 1) 00:19:52.680 3142.751 - 3157.646: 99.9183% ( 1) 00:19:52.680 3232.119 - 3247.013: 99.9240% ( 4) 00:19:52.680 3261.908 - 3276.802: 99.9341% ( 7) 00:19:52.680 3321.486 - 3336.380: 99.9355% ( 1) 00:19:52.680 3351.275 - 3366.170: 99.9369% ( 1) 00:19:52.680 3381.064 
- 3395.959: 99.9384% ( 1) 00:19:52.680 3395.959 - 3410.853: 99.9398% ( 1) 00:19:52.680 3425.748 - 3440.642: 99.9412% ( 1) 00:19:52.680 3589.588 - 3604.482: 99.9441% ( 2) 00:19:52.680 3604.482 - 3619.377: 99.9470% ( 2) 00:19:52.680 3634.271 - 3649.166: 99.9484% ( 1) 00:19:52.680 3664.061 - 3678.955: 99.9556% ( 5) 00:19:52.680 3708.744 - 3723.639: 99.9584% ( 2) 00:19:52.680 3723.639 - 3738.533: 99.9713% ( 9) 00:19:52.680 3768.322 - 3783.217: 99.9799% ( 6) 00:19:52.680 3783.217 - 3798.112: 99.9871% ( 5) 00:19:52.680 3798.112 - 3813.006: 99.9885% ( 1) 00:19:52.680 3872.584 - 3902.373: 99.9900% ( 1) 00:19:52.680 3902.373 - 3932.163: 99.9928% ( 2) 00:19:52.680 4617.312 - 4647.101: 99.9943% ( 1) 00:19:52.680 5391.829 - 5421.618: 99.9957% ( 1) 00:19:52.680 5510.985 - 5540.775: 99.9986% ( 2) 00:19:52.680 5719.509 - 5749.298: 100.0000% ( 1) 00:19:52.680 00:19:52.938 15:08:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:52.938 00:19:52.938 real 0m3.726s 00:19:52.938 user 0m2.485s 00:19:52.938 sys 0m1.239s 00:19:52.938 15:08:18 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.938 15:08:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:52.938 ************************************ 00:19:52.938 END TEST nvme_perf 00:19:52.938 ************************************ 00:19:52.938 15:08:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:52.938 15:08:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:52.938 15:08:18 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:52.938 15:08:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.938 15:08:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:52.938 ************************************ 00:19:52.938 START TEST nvme_hello_world 00:19:52.938 ************************************ 00:19:52.938 15:08:18 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:53.500 EAL: TSC is not safe to use in SMP mode 00:19:53.500 EAL: TSC is not invariant 00:19:53.500 [2024-07-12 15:08:19.215083] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:53.500 Initializing NVMe Controllers 00:19:53.500 Attaching to 0000:00:10.0 00:19:53.500 Attached to 0000:00:10.0 00:19:53.500 Namespace ID: 1 size: 5GB 00:19:53.500 Initialization complete. 00:19:53.500 INFO: using host memory buffer for IO 00:19:53.500 Hello world! 
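The hello_world example that just reported success attaches to the controller, writes the "Hello world!" string through an I/O qpair, reads it back and prints it, which is why a single namespace line and the echoed string are the only output; that summary reflects the example's usual behaviour rather than anything shown in this log. To rerun it on its own with the same shared-memory ID used here:

    cd /home/vagrant/spdk_repo/spdk
    # -i 0 matches the shm ID used throughout this run
    sudo ./build/examples/hello_world -i 0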
00:19:53.500 00:19:53.500 real 0m0.598s 00:19:53.500 user 0m0.000s 00:19:53.500 sys 0m0.598s 00:19:53.500 ************************************ 00:19:53.500 END TEST nvme_hello_world 00:19:53.500 ************************************ 00:19:53.500 15:08:19 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.500 15:08:19 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:53.500 15:08:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:53.500 15:08:19 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:53.500 15:08:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:53.500 15:08:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.500 15:08:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:53.500 ************************************ 00:19:53.500 START TEST nvme_sgl 00:19:53.500 ************************************ 00:19:53.500 15:08:19 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:54.065 EAL: TSC is not safe to use in SMP mode 00:19:54.065 EAL: TSC is not invariant 00:19:54.065 [2024-07-12 15:08:19.846445] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:54.065 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:54.065 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:54.065 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:54.065 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:54.065 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:54.065 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:54.322 NVMe Readv/Writev Request test 00:19:54.322 Attaching to 0000:00:10.0 00:19:54.322 Attached to 0000:00:10.0 00:19:54.322 0000:00:10.0: build_io_request_2 test passed 00:19:54.322 0000:00:10.0: build_io_request_4 test passed 00:19:54.322 0000:00:10.0: build_io_request_5 test passed 00:19:54.322 0000:00:10.0: build_io_request_6 test passed 00:19:54.322 0000:00:10.0: build_io_request_7 test passed 00:19:54.322 0000:00:10.0: build_io_request_10 test passed 00:19:54.322 Cleaning up... 
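In the SGL test output above, the build_io_request_0/1/3/8/9/11 lines ending in "Invalid IO length parameter" read as deliberate negative cases: those requests are built with lengths the driver must reject, while the remaining numbered requests are the positive cases that report "test passed". That interpretation is drawn only from the output itself. To run the binary by itself against the same controller:

    # path taken directly from the trace above
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl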
00:19:54.322 00:19:54.322 real 0m0.600s 00:19:54.322 user 0m0.017s 00:19:54.322 sys 0m0.583s 00:19:54.322 15:08:19 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.322 ************************************ 00:19:54.322 END TEST nvme_sgl 00:19:54.322 ************************************ 00:19:54.322 15:08:19 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:54.322 15:08:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:54.322 15:08:19 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:54.322 15:08:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.322 15:08:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.322 15:08:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.322 ************************************ 00:19:54.322 START TEST nvme_e2edp 00:19:54.322 ************************************ 00:19:54.322 15:08:19 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:54.888 EAL: TSC is not safe to use in SMP mode 00:19:54.888 EAL: TSC is not invariant 00:19:54.888 [2024-07-12 15:08:20.501043] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:54.888 NVMe Write/Read with End-to-End data protection test 00:19:54.888 Attaching to 0000:00:10.0 00:19:54.888 Attached to 0000:00:10.0 00:19:54.888 Cleaning up... 00:19:54.888 00:19:54.888 real 0m0.594s 00:19:54.888 user 0m0.017s 00:19:54.888 sys 0m0.577s 00:19:54.888 15:08:20 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.888 15:08:20 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:54.888 ************************************ 00:19:54.888 END TEST nvme_e2edp 00:19:54.888 ************************************ 00:19:54.888 15:08:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:54.888 15:08:20 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:54.888 15:08:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.888 15:08:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.888 15:08:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.888 ************************************ 00:19:54.888 START TEST nvme_reserve 00:19:54.888 ************************************ 00:19:54.888 15:08:20 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:55.453 EAL: TSC is not safe to use in SMP mode 00:19:55.453 EAL: TSC is not invariant 00:19:55.453 [2024-07-12 15:08:21.116029] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:55.453 ===================================================== 00:19:55.453 NVMe Controller at PCI bus 0, device 16, function 0 00:19:55.453 ===================================================== 00:19:55.453 Reservations: Not Supported 00:19:55.453 Reservation test passed 00:19:55.453 00:19:55.453 real 0m0.564s 00:19:55.453 user 0m0.023s 00:19:55.453 sys 0m0.542s 00:19:55.453 15:08:21 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.453 ************************************ 00:19:55.453 END TEST nvme_reserve 00:19:55.453 ************************************ 00:19:55.453 15:08:21 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:55.453 15:08:21 nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:19:55.453 15:08:21 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:55.453 15:08:21 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.453 15:08:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.453 15:08:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.453 ************************************ 00:19:55.453 START TEST nvme_err_injection 00:19:55.453 ************************************ 00:19:55.453 15:08:21 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:56.082 EAL: TSC is not safe to use in SMP mode 00:19:56.082 EAL: TSC is not invariant 00:19:56.082 [2024-07-12 15:08:21.743840] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:56.082 NVMe Error Injection test 00:19:56.082 Attaching to 0000:00:10.0 00:19:56.082 Attached to 0000:00:10.0 00:19:56.082 0000:00:10.0: get features failed as expected 00:19:56.082 0000:00:10.0: get features successfully as expected 00:19:56.082 0000:00:10.0: read failed as expected 00:19:56.082 0000:00:10.0: read successfully as expected 00:19:56.082 Cleaning up... 00:19:56.082 00:19:56.082 real 0m0.581s 00:19:56.082 user 0m0.015s 00:19:56.082 sys 0m0.566s 00:19:56.082 15:08:21 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.082 15:08:21 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:56.082 ************************************ 00:19:56.082 END TEST nvme_err_injection 00:19:56.082 ************************************ 00:19:56.082 15:08:21 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:56.082 15:08:21 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:56.082 15:08:21 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:19:56.082 15:08:21 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.082 15:08:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:56.082 ************************************ 00:19:56.082 START TEST nvme_overhead 00:19:56.082 ************************************ 00:19:56.082 15:08:21 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:56.645 EAL: TSC is not safe to use in SMP mode 00:19:56.645 EAL: TSC is not invariant 00:19:56.645 [2024-07-12 15:08:22.392700] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:57.575 Initializing NVMe Controllers 00:19:57.575 Attaching to 0000:00:10.0 00:19:57.575 Attached to 0000:00:10.0 00:19:57.575 Initialization complete. Launching workers. 
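The overhead tool started above measures per-I/O software overhead rather than device latency; the submit and complete times it is about to print are in nanoseconds, and the flags read as -o 4096 (I/O size), -t 1 (seconds), -H (presumably the switch enabling the Submit/Complete histograms that follow) and -i 0 (shared-memory ID). A standalone rerun under the same assumptions:

    cd /home/vagrant/spdk_repo/spdk
    # identical arguments to the run recorded here
    sudo ./test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0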
00:19:57.575 submit (in ns) avg, min, max = 9267.7, 7451.4, 79758.2 00:19:57.575 complete (in ns) avg, min, max = 6591.0, 5862.3, 73231.0 00:19:57.575 00:19:57.575 Submit histogram 00:19:57.575 ================ 00:19:57.575 Range in us Cumulative Count 00:19:57.575 7.447 - 7.505: 0.0088% ( 1) 00:19:57.575 7.971 - 8.029: 0.0352% ( 3) 00:19:57.575 8.029 - 8.087: 0.2552% ( 25) 00:19:57.575 8.087 - 8.145: 0.7657% ( 58) 00:19:57.575 8.145 - 8.204: 1.2234% ( 52) 00:19:57.575 8.204 - 8.262: 1.6546% ( 49) 00:19:57.575 8.262 - 8.320: 2.0859% ( 49) 00:19:57.575 8.320 - 8.378: 2.4203% ( 38) 00:19:57.575 8.378 - 8.436: 2.8604% ( 50) 00:19:57.575 8.436 - 8.495: 3.1773% ( 36) 00:19:57.575 8.495 - 8.553: 3.4589% ( 32) 00:19:57.575 8.553 - 8.611: 3.7669% ( 35) 00:19:57.575 8.611 - 8.669: 4.1014% ( 38) 00:19:57.575 8.669 - 8.727: 4.4710% ( 42) 00:19:57.575 8.727 - 8.785: 5.1047% ( 72) 00:19:57.575 8.785 - 8.844: 8.0708% ( 337) 00:19:57.575 8.844 - 8.902: 18.9755% ( 1239) 00:19:57.575 8.902 - 8.960: 38.9720% ( 2272) 00:19:57.575 8.960 - 9.018: 59.7782% ( 2364) 00:19:57.575 9.018 - 9.076: 70.9734% ( 1272) 00:19:57.575 9.076 - 9.135: 76.2718% ( 602) 00:19:57.575 9.135 - 9.193: 79.3170% ( 346) 00:19:57.575 9.193 - 9.251: 81.3061% ( 226) 00:19:57.575 9.251 - 9.309: 82.5471% ( 141) 00:19:57.575 9.309 - 9.367: 83.4800% ( 106) 00:19:57.575 9.367 - 9.425: 84.0433% ( 64) 00:19:57.575 9.425 - 9.484: 84.3954% ( 40) 00:19:57.575 9.484 - 9.542: 84.6594% ( 30) 00:19:57.575 9.542 - 9.600: 84.8178% ( 18) 00:19:57.575 9.600 - 9.658: 84.9762% ( 18) 00:19:57.575 9.658 - 9.716: 85.1347% ( 18) 00:19:57.575 9.716 - 9.775: 85.2579% ( 14) 00:19:57.575 9.775 - 9.833: 85.3195% ( 7) 00:19:57.575 9.833 - 9.891: 85.3547% ( 4) 00:19:57.575 9.891 - 9.949: 85.5395% ( 21) 00:19:57.575 9.949 - 10.007: 86.6397% ( 125) 00:19:57.575 10.007 - 10.065: 88.5231% ( 214) 00:19:57.575 10.065 - 10.124: 90.7939% ( 258) 00:19:57.575 10.124 - 10.182: 92.4309% ( 186) 00:19:57.575 10.182 - 10.240: 93.5399% ( 126) 00:19:57.575 10.240 - 10.298: 94.5520% ( 115) 00:19:57.575 10.298 - 10.356: 95.2737% ( 82) 00:19:57.575 10.356 - 10.415: 95.4938% ( 25) 00:19:57.575 10.415 - 10.473: 95.7138% ( 25) 00:19:57.575 10.473 - 10.531: 95.7666% ( 6) 00:19:57.575 10.531 - 10.589: 95.8106% ( 5) 00:19:57.575 10.589 - 10.647: 95.8634% ( 6) 00:19:57.575 10.647 - 10.705: 95.8810% ( 2) 00:19:57.575 10.705 - 10.764: 95.8898% ( 1) 00:19:57.575 10.764 - 10.822: 95.9074% ( 2) 00:19:57.575 10.822 - 10.880: 95.9250% ( 2) 00:19:57.575 10.880 - 10.938: 95.9778% ( 6) 00:19:57.575 10.938 - 10.996: 96.1538% ( 20) 00:19:57.575 10.996 - 11.055: 96.3123% ( 18) 00:19:57.575 11.055 - 11.113: 96.4531% ( 16) 00:19:57.576 11.113 - 11.171: 96.5059% ( 6) 00:19:57.576 11.171 - 11.229: 96.6115% ( 12) 00:19:57.576 11.229 - 11.287: 96.8667% ( 29) 00:19:57.576 11.287 - 11.345: 97.2364% ( 42) 00:19:57.576 11.345 - 11.404: 97.6589% ( 48) 00:19:57.576 11.404 - 11.462: 97.9493% ( 33) 00:19:57.576 11.462 - 11.520: 98.0725% ( 14) 00:19:57.576 11.520 - 11.578: 98.1781% ( 12) 00:19:57.576 11.578 - 11.636: 98.1957% ( 2) 00:19:57.576 11.636 - 11.695: 98.2397% ( 5) 00:19:57.576 11.695 - 11.753: 98.2662% ( 3) 00:19:57.576 11.753 - 11.811: 98.2750% ( 1) 00:19:57.576 11.811 - 11.869: 98.2838% ( 1) 00:19:57.576 11.869 - 11.927: 98.2926% ( 1) 00:19:57.576 11.927 - 11.985: 98.3014% ( 1) 00:19:57.576 12.044 - 12.102: 98.3278% ( 3) 00:19:57.576 12.102 - 12.160: 98.3366% ( 1) 00:19:57.576 12.160 - 12.218: 98.3630% ( 3) 00:19:57.576 12.218 - 12.276: 98.3894% ( 3) 00:19:57.576 12.335 - 12.393: 98.4070% ( 2) 00:19:57.576 
12.393 - 12.451: 98.4158% ( 1) 00:19:57.576 12.451 - 12.509: 98.4246% ( 1) 00:19:57.576 12.509 - 12.567: 98.4510% ( 3) 00:19:57.576 12.567 - 12.625: 98.4598% ( 1) 00:19:57.576 12.684 - 12.742: 98.5302% ( 8) 00:19:57.576 12.742 - 12.800: 98.5478% ( 2) 00:19:57.576 12.800 - 12.858: 98.5566% ( 1) 00:19:57.576 12.858 - 12.916: 98.6006% ( 5) 00:19:57.576 12.916 - 12.975: 98.6094% ( 1) 00:19:57.576 12.975 - 13.033: 98.6534% ( 5) 00:19:57.576 13.033 - 13.091: 98.6710% ( 2) 00:19:57.576 13.091 - 13.149: 98.7062% ( 4) 00:19:57.576 13.149 - 13.207: 98.7238% ( 2) 00:19:57.576 13.207 - 13.265: 98.7590% ( 4) 00:19:57.576 13.265 - 13.324: 98.7942% ( 4) 00:19:57.576 13.324 - 13.382: 98.8558% ( 7) 00:19:57.576 13.382 - 13.440: 98.8646% ( 1) 00:19:57.576 13.440 - 13.498: 98.8998% ( 4) 00:19:57.576 13.498 - 13.556: 98.9174% ( 2) 00:19:57.576 13.556 - 13.615: 98.9526% ( 4) 00:19:57.576 13.615 - 13.673: 98.9791% ( 3) 00:19:57.576 13.673 - 13.731: 99.0407% ( 7) 00:19:57.576 13.789 - 13.847: 99.0495% ( 1) 00:19:57.576 13.847 - 13.905: 99.0671% ( 2) 00:19:57.576 13.905 - 13.964: 99.0759% ( 1) 00:19:57.576 13.964 - 14.022: 99.0847% ( 1) 00:19:57.576 14.022 - 14.080: 99.1023% ( 2) 00:19:57.576 14.080 - 14.138: 99.1287% ( 3) 00:19:57.576 14.138 - 14.196: 99.1375% ( 1) 00:19:57.576 14.196 - 14.255: 99.1463% ( 1) 00:19:57.576 14.255 - 14.313: 99.1551% ( 1) 00:19:57.576 14.313 - 14.371: 99.1639% ( 1) 00:19:57.576 14.371 - 14.429: 99.1815% ( 2) 00:19:57.576 14.429 - 14.487: 99.2079% ( 3) 00:19:57.576 14.487 - 14.545: 99.2167% ( 1) 00:19:57.576 14.545 - 14.604: 99.2255% ( 1) 00:19:57.576 14.720 - 14.778: 99.2343% ( 1) 00:19:57.576 14.778 - 14.836: 99.2431% ( 1) 00:19:57.576 14.836 - 14.895: 99.2607% ( 2) 00:19:57.576 14.895 - 15.011: 99.2695% ( 1) 00:19:57.576 15.011 - 15.127: 99.2871% ( 2) 00:19:57.576 15.127 - 15.244: 99.3047% ( 2) 00:19:57.576 15.244 - 15.360: 99.3575% ( 6) 00:19:57.576 15.360 - 15.476: 99.3927% ( 4) 00:19:57.576 15.593 - 15.709: 99.4191% ( 3) 00:19:57.576 15.709 - 15.825: 99.4279% ( 1) 00:19:57.576 15.825 - 15.942: 99.4367% ( 1) 00:19:57.576 15.942 - 16.058: 99.4543% ( 2) 00:19:57.576 16.058 - 16.175: 99.4631% ( 1) 00:19:57.576 16.175 - 16.291: 99.4719% ( 1) 00:19:57.576 16.291 - 16.407: 99.5071% ( 4) 00:19:57.576 16.640 - 16.756: 99.5335% ( 3) 00:19:57.576 16.756 - 16.873: 99.5511% ( 2) 00:19:57.576 16.873 - 16.989: 99.5599% ( 1) 00:19:57.576 16.989 - 17.105: 99.5687% ( 1) 00:19:57.576 17.105 - 17.222: 99.5775% ( 1) 00:19:57.576 17.222 - 17.338: 99.5863% ( 1) 00:19:57.576 17.571 - 17.687: 99.5951% ( 1) 00:19:57.576 17.687 - 17.804: 99.6039% ( 1) 00:19:57.576 17.920 - 18.036: 99.6127% ( 1) 00:19:57.576 18.036 - 18.153: 99.6303% ( 2) 00:19:57.576 18.153 - 18.269: 99.6479% ( 2) 00:19:57.576 18.269 - 18.385: 99.6568% ( 1) 00:19:57.576 18.385 - 18.502: 99.6744% ( 2) 00:19:57.576 18.618 - 18.735: 99.6832% ( 1) 00:19:57.576 18.735 - 18.851: 99.6920% ( 1) 00:19:57.576 18.967 - 19.084: 99.7008% ( 1) 00:19:57.576 19.084 - 19.200: 99.7096% ( 1) 00:19:57.576 19.200 - 19.316: 99.7184% ( 1) 00:19:57.576 19.316 - 19.433: 99.7624% ( 5) 00:19:57.576 19.433 - 19.549: 99.7712% ( 1) 00:19:57.576 19.665 - 19.782: 99.7800% ( 1) 00:19:57.576 20.015 - 20.131: 99.7888% ( 1) 00:19:57.576 20.713 - 20.829: 99.7976% ( 1) 00:19:57.576 20.829 - 20.945: 99.8064% ( 1) 00:19:57.576 20.945 - 21.062: 99.8152% ( 1) 00:19:57.576 21.062 - 21.178: 99.8240% ( 1) 00:19:57.576 21.178 - 21.295: 99.8328% ( 1) 00:19:57.576 21.295 - 21.411: 99.8504% ( 2) 00:19:57.576 21.527 - 21.644: 99.8592% ( 1) 00:19:57.576 21.760 - 21.876: 99.8680% ( 1) 
00:19:57.576 22.225 - 22.342: 99.8768% ( 1) 00:19:57.576 22.458 - 22.575: 99.8856% ( 1) 00:19:57.576 22.575 - 22.691: 99.8944% ( 1) 00:19:57.576 23.273 - 23.389: 99.9032% ( 1) 00:19:57.576 23.505 - 23.622: 99.9208% ( 2) 00:19:57.576 23.622 - 23.738: 99.9296% ( 1) 00:19:57.576 24.087 - 24.204: 99.9384% ( 1) 00:19:57.576 24.785 - 24.902: 99.9472% ( 1) 00:19:57.576 24.902 - 25.018: 99.9560% ( 1) 00:19:57.576 25.135 - 25.251: 99.9648% ( 1) 00:19:57.576 26.065 - 26.182: 99.9736% ( 1) 00:19:57.576 34.909 - 35.142: 99.9824% ( 1) 00:19:57.576 43.520 - 43.753: 99.9912% ( 1) 00:19:57.576 79.593 - 80.058: 100.0000% ( 1) 00:19:57.576 00:19:57.576 Complete histogram 00:19:57.576 ================== 00:19:57.576 Range in us Cumulative Count 00:19:57.576 5.847 - 5.876: 0.0176% ( 2) 00:19:57.576 5.876 - 5.905: 0.0440% ( 3) 00:19:57.576 5.905 - 5.935: 0.0616% ( 2) 00:19:57.576 5.935 - 5.964: 0.0880% ( 3) 00:19:57.576 5.964 - 5.993: 0.3432% ( 29) 00:19:57.576 5.993 - 6.022: 1.4874% ( 130) 00:19:57.576 6.022 - 6.051: 4.4974% ( 342) 00:19:57.576 6.051 - 6.080: 12.7970% ( 943) 00:19:57.576 6.080 - 6.109: 26.5622% ( 1564) 00:19:57.576 6.109 - 6.138: 39.4121% ( 1460) 00:19:57.576 6.138 - 6.167: 51.3290% ( 1354) 00:19:57.576 6.167 - 6.196: 60.0070% ( 986) 00:19:57.576 6.196 - 6.225: 65.1822% ( 588) 00:19:57.576 6.225 - 6.255: 68.5003% ( 377) 00:19:57.576 6.255 - 6.284: 71.0526% ( 290) 00:19:57.576 6.284 - 6.313: 72.5048% ( 165) 00:19:57.576 6.313 - 6.342: 73.7194% ( 138) 00:19:57.576 6.342 - 6.371: 74.5995% ( 100) 00:19:57.576 6.371 - 6.400: 75.5413% ( 107) 00:19:57.576 6.400 - 6.429: 76.2806% ( 84) 00:19:57.576 6.429 - 6.458: 77.0551% ( 88) 00:19:57.576 6.458 - 6.487: 77.7944% ( 84) 00:19:57.576 6.487 - 6.516: 78.4633% ( 76) 00:19:57.576 6.516 - 6.545: 78.9914% ( 60) 00:19:57.577 6.545 - 6.575: 79.7923% ( 91) 00:19:57.577 6.575 - 6.604: 80.4964% ( 80) 00:19:57.577 6.604 - 6.633: 81.2533% ( 86) 00:19:57.577 6.633 - 6.662: 81.9662% ( 81) 00:19:57.577 6.662 - 6.691: 82.5119% ( 62) 00:19:57.577 6.691 - 6.720: 83.0752% ( 64) 00:19:57.577 6.720 - 6.749: 83.4272% ( 40) 00:19:57.577 6.749 - 6.778: 83.8057% ( 43) 00:19:57.577 6.778 - 6.807: 84.0433% ( 27) 00:19:57.577 6.807 - 6.836: 84.2721% ( 26) 00:19:57.577 6.836 - 6.865: 84.4834% ( 24) 00:19:57.577 6.865 - 6.895: 84.5802% ( 11) 00:19:57.577 6.895 - 6.924: 84.7298% ( 17) 00:19:57.577 6.924 - 6.953: 84.7914% ( 7) 00:19:57.577 6.953 - 6.982: 84.8794% ( 10) 00:19:57.577 6.982 - 7.011: 84.9674% ( 10) 00:19:57.577 7.011 - 7.040: 84.9938% ( 3) 00:19:57.577 7.040 - 7.069: 85.0202% ( 3) 00:19:57.577 7.069 - 7.098: 85.0642% ( 5) 00:19:57.577 7.098 - 7.127: 85.1083% ( 5) 00:19:57.577 7.127 - 7.156: 85.1523% ( 5) 00:19:57.577 7.156 - 7.185: 85.1963% ( 5) 00:19:57.577 7.185 - 7.215: 85.2403% ( 5) 00:19:57.577 7.215 - 7.244: 85.2755% ( 4) 00:19:57.577 7.244 - 7.273: 85.3019% ( 3) 00:19:57.577 7.273 - 7.302: 85.3195% ( 2) 00:19:57.577 7.302 - 7.331: 85.3547% ( 4) 00:19:57.577 7.331 - 7.360: 85.3635% ( 1) 00:19:57.577 7.360 - 7.389: 85.3723% ( 1) 00:19:57.577 7.389 - 7.418: 85.3811% ( 1) 00:19:57.577 7.447 - 7.505: 85.3987% ( 2) 00:19:57.577 7.505 - 7.564: 85.4163% ( 2) 00:19:57.577 7.564 - 7.622: 85.4515% ( 4) 00:19:57.577 7.680 - 7.738: 85.5923% ( 16) 00:19:57.577 7.738 - 7.796: 86.1468% ( 63) 00:19:57.577 7.796 - 7.855: 86.6925% ( 62) 00:19:57.577 7.855 - 7.913: 87.2382% ( 62) 00:19:57.577 7.913 - 7.971: 87.4758% ( 27) 00:19:57.577 7.971 - 8.029: 87.5814% ( 12) 00:19:57.577 8.029 - 8.087: 87.6430% ( 7) 00:19:57.577 8.087 - 8.145: 88.5407% ( 102) 00:19:57.577 8.145 - 8.204: 
90.6443% ( 239) 00:19:57.577 8.204 - 8.262: 92.4925% ( 210) 00:19:57.577 8.262 - 8.320: 94.0239% ( 174) 00:19:57.577 8.320 - 8.378: 95.1153% ( 124) 00:19:57.577 8.378 - 8.436: 96.1010% ( 112) 00:19:57.577 8.436 - 8.495: 96.6995% ( 68) 00:19:57.577 8.495 - 8.553: 97.2100% ( 58) 00:19:57.577 8.553 - 8.611: 97.4916% ( 32) 00:19:57.577 8.611 - 8.669: 97.8261% ( 38) 00:19:57.577 8.669 - 8.727: 97.9581% ( 15) 00:19:57.577 8.727 - 8.785: 98.0813% ( 14) 00:19:57.577 8.785 - 8.844: 98.1869% ( 12) 00:19:57.577 8.844 - 8.902: 98.2662% ( 9) 00:19:57.577 8.902 - 8.960: 98.3014% ( 4) 00:19:57.577 8.960 - 9.018: 98.3102% ( 1) 00:19:57.577 9.018 - 9.076: 98.3454% ( 4) 00:19:57.577 9.076 - 9.135: 98.3718% ( 3) 00:19:57.577 9.135 - 9.193: 98.3894% ( 2) 00:19:57.577 9.193 - 9.251: 98.4070% ( 2) 00:19:57.577 9.251 - 9.309: 98.4158% ( 1) 00:19:57.577 9.309 - 9.367: 98.4246% ( 1) 00:19:57.577 9.367 - 9.425: 98.4422% ( 2) 00:19:57.577 9.600 - 9.658: 98.4598% ( 2) 00:19:57.577 9.658 - 9.716: 98.4774% ( 2) 00:19:57.577 9.775 - 9.833: 98.4950% ( 2) 00:19:57.577 9.833 - 9.891: 98.5214% ( 3) 00:19:57.577 9.891 - 9.949: 98.5566% ( 4) 00:19:57.577 9.949 - 10.007: 98.6094% ( 6) 00:19:57.577 10.007 - 10.065: 98.6534% ( 5) 00:19:57.577 10.065 - 10.124: 98.6710% ( 2) 00:19:57.577 10.182 - 10.240: 98.6974% ( 3) 00:19:57.577 10.240 - 10.298: 98.7502% ( 6) 00:19:57.577 10.298 - 10.356: 98.7678% ( 2) 00:19:57.577 10.415 - 10.473: 98.7766% ( 1) 00:19:57.577 10.473 - 10.531: 98.8030% ( 3) 00:19:57.577 10.531 - 10.589: 98.8118% ( 1) 00:19:57.577 10.589 - 10.647: 98.8294% ( 2) 00:19:57.577 10.705 - 10.764: 98.8910% ( 7) 00:19:57.577 10.764 - 10.822: 98.9086% ( 2) 00:19:57.577 10.822 - 10.880: 98.9526% ( 5) 00:19:57.577 10.880 - 10.938: 98.9615% ( 1) 00:19:57.577 10.938 - 10.996: 99.0055% ( 5) 00:19:57.577 10.996 - 11.055: 99.0143% ( 1) 00:19:57.577 11.055 - 11.113: 99.0231% ( 1) 00:19:57.577 11.171 - 11.229: 99.0319% ( 1) 00:19:57.577 11.229 - 11.287: 99.0495% ( 2) 00:19:57.577 11.287 - 11.345: 99.0671% ( 2) 00:19:57.577 11.345 - 11.404: 99.0759% ( 1) 00:19:57.577 11.404 - 11.462: 99.0847% ( 1) 00:19:57.577 11.520 - 11.578: 99.0935% ( 1) 00:19:57.577 11.578 - 11.636: 99.1023% ( 1) 00:19:57.577 11.695 - 11.753: 99.1199% ( 2) 00:19:57.577 11.753 - 11.811: 99.1375% ( 2) 00:19:57.577 11.811 - 11.869: 99.1551% ( 2) 00:19:57.577 12.044 - 12.102: 99.1639% ( 1) 00:19:57.577 12.102 - 12.160: 99.1727% ( 1) 00:19:57.577 12.160 - 12.218: 99.1815% ( 1) 00:19:57.577 12.218 - 12.276: 99.1903% ( 1) 00:19:57.577 12.276 - 12.335: 99.1991% ( 1) 00:19:57.577 12.393 - 12.451: 99.2079% ( 1) 00:19:57.577 12.451 - 12.509: 99.2255% ( 2) 00:19:57.577 12.509 - 12.567: 99.2431% ( 2) 00:19:57.577 12.567 - 12.625: 99.2607% ( 2) 00:19:57.577 12.684 - 12.742: 99.2695% ( 1) 00:19:57.577 12.742 - 12.800: 99.2783% ( 1) 00:19:57.577 12.800 - 12.858: 99.2959% ( 2) 00:19:57.577 12.858 - 12.916: 99.3135% ( 2) 00:19:57.577 12.916 - 12.975: 99.3223% ( 1) 00:19:57.577 12.975 - 13.033: 99.3311% ( 1) 00:19:57.577 13.033 - 13.091: 99.3575% ( 3) 00:19:57.577 13.091 - 13.149: 99.3663% ( 1) 00:19:57.577 13.149 - 13.207: 99.3751% ( 1) 00:19:57.577 13.207 - 13.265: 99.3839% ( 1) 00:19:57.577 13.265 - 13.324: 99.3927% ( 1) 00:19:57.577 13.324 - 13.382: 99.4015% ( 1) 00:19:57.577 13.440 - 13.498: 99.4191% ( 2) 00:19:57.577 13.498 - 13.556: 99.4279% ( 1) 00:19:57.577 13.615 - 13.673: 99.4367% ( 1) 00:19:57.577 13.673 - 13.731: 99.4455% ( 1) 00:19:57.577 13.731 - 13.789: 99.4719% ( 3) 00:19:57.577 13.847 - 13.905: 99.4983% ( 3) 00:19:57.577 13.964 - 14.022: 99.5159% ( 2) 00:19:57.577 
14.022 - 14.080: 99.5335% ( 2) 00:19:57.577 14.080 - 14.138: 99.5511% ( 2) 00:19:57.577 14.138 - 14.196: 99.5599% ( 1) 00:19:57.577 14.255 - 14.313: 99.5687% ( 1) 00:19:57.577 14.313 - 14.371: 99.5775% ( 1) 00:19:57.577 14.371 - 14.429: 99.5863% ( 1) 00:19:57.577 14.429 - 14.487: 99.5951% ( 1) 00:19:57.577 14.487 - 14.545: 99.6127% ( 2) 00:19:57.577 14.662 - 14.720: 99.6303% ( 2) 00:19:57.577 14.778 - 14.836: 99.6391% ( 1) 00:19:57.577 14.836 - 14.895: 99.6479% ( 1) 00:19:57.577 15.127 - 15.244: 99.6656% ( 2) 00:19:57.577 15.244 - 15.360: 99.6744% ( 1) 00:19:57.577 15.709 - 15.825: 99.6832% ( 1) 00:19:57.577 15.825 - 15.942: 99.7096% ( 3) 00:19:57.577 16.058 - 16.175: 99.7184% ( 1) 00:19:57.577 16.291 - 16.407: 99.7272% ( 1) 00:19:57.577 16.640 - 16.756: 99.7360% ( 1) 00:19:57.577 16.989 - 17.105: 99.7536% ( 2) 00:19:57.577 17.222 - 17.338: 99.7624% ( 1) 00:19:57.577 17.338 - 17.455: 99.7712% ( 1) 00:19:57.577 17.455 - 17.571: 99.7800% ( 1) 00:19:57.834 17.571 - 17.687: 99.7976% ( 2) 00:19:57.835 17.804 - 17.920: 99.8152% ( 2) 00:19:57.835 18.036 - 18.153: 99.8240% ( 1) 00:19:57.835 18.153 - 18.269: 99.8328% ( 1) 00:19:57.835 18.385 - 18.502: 99.8416% ( 1) 00:19:57.835 19.200 - 19.316: 99.8504% ( 1) 00:19:57.835 20.015 - 20.131: 99.8592% ( 1) 00:19:57.835 20.364 - 20.480: 99.8680% ( 1) 00:19:57.835 20.480 - 20.596: 99.8768% ( 1) 00:19:57.835 20.829 - 20.945: 99.9208% ( 5) 00:19:57.835 20.945 - 21.062: 99.9472% ( 3) 00:19:57.835 21.178 - 21.295: 99.9560% ( 1) 00:19:57.835 30.022 - 30.255: 99.9648% ( 1) 00:19:57.835 30.487 - 30.720: 99.9736% ( 1) 00:19:57.835 50.735 - 50.967: 99.9824% ( 1) 00:19:57.835 56.553 - 56.785: 99.9912% ( 1) 00:19:57.835 73.076 - 73.542: 100.0000% ( 1) 00:19:57.835 00:19:57.835 00:19:57.835 real 0m1.602s 00:19:57.835 user 0m1.017s 00:19:57.835 sys 0m0.584s 00:19:57.835 15:08:23 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.835 ************************************ 00:19:57.835 END TEST nvme_overhead 00:19:57.835 ************************************ 00:19:57.835 15:08:23 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:57.835 15:08:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:57.835 15:08:23 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:57.835 15:08:23 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:19:57.835 15:08:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.835 15:08:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.835 ************************************ 00:19:57.835 START TEST nvme_arbitration 00:19:57.835 ************************************ 00:19:57.835 15:08:23 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:58.399 EAL: TSC is not safe to use in SMP mode 00:19:58.399 EAL: TSC is not invariant 00:19:58.399 [2024-07-12 15:08:24.012154] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:01.720 Initializing NVMe Controllers 00:20:01.720 Attaching to 0000:00:10.0 00:20:01.720 Attached to 0000:00:10.0 00:20:01.720 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:20:01.720 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:20:01.720 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:20:01.720 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:20:01.720 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 
00:20:01.720 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:20:01.720 Initialization complete. Launching workers. 00:20:01.720 Starting thread on core 1 with urgent priority queue 00:20:01.720 Starting thread on core 2 with urgent priority queue 00:20:01.720 Starting thread on core 3 with urgent priority queue 00:20:01.720 Starting thread on core 0 with urgent priority queue 00:20:01.720 QEMU NVMe Ctrl (12340 ) core 0: 6401.33 IO/s 15.62 secs/100000 ios 00:20:01.720 QEMU NVMe Ctrl (12340 ) core 1: 6251.67 IO/s 16.00 secs/100000 ios 00:20:01.720 QEMU NVMe Ctrl (12340 ) core 2: 6202.00 IO/s 16.12 secs/100000 ios 00:20:01.720 QEMU NVMe Ctrl (12340 ) core 3: 6242.00 IO/s 16.02 secs/100000 ios 00:20:01.720 ======================================================== 00:20:01.720 00:20:01.720 00:20:01.720 real 0m3.943s 00:20:01.720 user 0m12.407s 00:20:01.720 sys 0m0.574s 00:20:01.720 15:08:27 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.720 15:08:27 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:20:01.720 ************************************ 00:20:01.720 END TEST nvme_arbitration 00:20:01.720 ************************************ 00:20:01.720 15:08:27 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:01.720 15:08:27 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:20:01.720 15:08:27 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:01.720 15:08:27 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.720 15:08:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.720 ************************************ 00:20:01.720 START TEST nvme_single_aen 00:20:01.720 ************************************ 00:20:01.720 15:08:27 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:20:02.654 EAL: TSC is not safe to use in SMP mode 00:20:02.654 EAL: TSC is not invariant 00:20:02.654 [2024-07-12 15:08:28.158929] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:02.654 Asynchronous Event Request test 00:20:02.654 Attaching to 0000:00:10.0 00:20:02.654 Attached to 0000:00:10.0 00:20:02.654 Reset controller to setup AER completions for this process 00:20:02.654 Registering asynchronous event callbacks... 00:20:02.654 Getting orig temperature thresholds of all controllers 00:20:02.654 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:20:02.654 Setting all controllers temperature threshold low to trigger AER 00:20:02.654 Waiting for all controllers temperature threshold to be set lower 00:20:02.654 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:20:02.654 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:20:02.654 Waiting for all controllers to trigger AER and reset threshold 00:20:02.654 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:20:02.654 Cleaning up... 
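The aer test that just finished drives a single asynchronous event: it records the controller's original temperature threshold, lowers it below the current reading so the drive raises an AER, then resets it in the callback, which matches the 343 Kelvin threshold and 323 Kelvin current-temperature lines above. The -T flag appears to select this temperature-threshold variant and -i 0 is the usual shared-memory ID; both readings are inferred from the output rather than from the tool's usage text. To repeat it in isolation:

    # same binary and arguments as in the trace above
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0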
00:20:02.654 00:20:02.654 real 0m0.739s 00:20:02.654 user 0m0.007s 00:20:02.654 sys 0m0.732s 00:20:02.654 15:08:28 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.654 ************************************ 00:20:02.654 END TEST nvme_single_aen 00:20:02.654 ************************************ 00:20:02.654 15:08:28 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:20:02.654 15:08:28 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:02.654 15:08:28 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:20:02.654 15:08:28 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.654 15:08:28 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.654 15:08:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:02.654 ************************************ 00:20:02.654 START TEST nvme_doorbell_aers 00:20:02.654 ************************************ 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:02.654 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:20:02.655 15:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:03.222 EAL: TSC is not safe to use in SMP mode 00:20:03.222 EAL: TSC is not invariant 00:20:03.222 [2024-07-12 15:08:28.987167] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:03.222 Executing: test_write_invalid_db 00:20:03.222 Waiting for AER completion... 00:20:03.222 Asynchronous Event received. 00:20:03.222 Error Informaton Log Page received. 00:20:03.222 Success: test_write_invalid_db 00:20:03.222 00:20:03.222 Executing: test_invalid_db_write_overflow_sq 00:20:03.222 Waiting for AER completion... 00:20:03.222 Asynchronous Event received. 00:20:03.222 Error Informaton Log Page received. 00:20:03.222 Success: test_invalid_db_write_overflow_sq 00:20:03.222 00:20:03.222 Executing: test_invalid_db_write_overflow_cq 00:20:03.222 Waiting for AER completion... 00:20:03.222 Asynchronous Event received. 00:20:03.222 Error Informaton Log Page received. 
00:20:03.222 Success: test_invalid_db_write_overflow_cq 00:20:03.222 00:20:03.222 00:20:03.222 real 0m0.783s 00:20:03.222 user 0m0.034s 00:20:03.222 sys 0m0.761s 00:20:03.222 15:08:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:03.222 15:08:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:20:03.222 ************************************ 00:20:03.222 END TEST nvme_doorbell_aers 00:20:03.222 ************************************ 00:20:03.481 15:08:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:03.481 15:08:29 nvme -- nvme/nvme.sh@97 -- # uname 00:20:03.481 15:08:29 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:20:03.481 15:08:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:03.481 15:08:29 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:03.481 15:08:29 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.481 15:08:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.481 ************************************ 00:20:03.481 START TEST bdev_nvme_reset_stuck_adm_cmd 00:20:03.481 ************************************ 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:03.481 * Looking for test storage... 00:20:03.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69033 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69033 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 69033 ']' 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.481 15:08:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:03.481 [2024-07-12 15:08:29.284058] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:20:03.481 [2024-07-12 15:08:29.284308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:04.416 EAL: TSC is not safe to use in SMP mode 00:20:04.416 EAL: TSC is not invariant 00:20:04.416 [2024-07-12 15:08:29.987749] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.416 [2024-07-12 15:08:30.077938] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:04.416 [2024-07-12 15:08:30.077994] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:20:04.416 [2024-07-12 15:08:30.078004] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:20:04.416 [2024-07-12 15:08:30.078011] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
00:20:04.416 [2024-07-12 15:08:30.081919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.416 [2024-07-12 15:08:30.082125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.416 [2024-07-12 15:08:30.082035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.416 [2024-07-12 15:08:30.082123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.675 [2024-07-12 15:08:30.346754] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:04.675 nvme0n1 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:04.675 true 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720796910 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69045 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:20:04.675 15:08:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.212 [2024-07-12 15:08:32.431229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:07.212 [2024-07-12 15:08:32.432799] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.212 [2024-07-12 15:08:32.432831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:07.212 [2024-07-12 15:08:32.432843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.212 [2024-07-12 15:08:32.433941] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.212 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69045 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69045 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69045 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.qKtknN 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.HAgErm 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 69033 ']' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:07.212 killing process with pid 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69033' 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 69033 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:07.212 00:20:07.212 real 0m3.692s 00:20:07.212 user 0m11.368s 00:20:07.212 sys 0m0.961s 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.212 ************************************ 00:20:07.212 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:07.212 ************************************ 00:20:07.212 15:08:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.212 15:08:32 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:07.212 15:08:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:07.212 15:08:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:07.212 15:08:32 nvme -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:07.212 15:08:32 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.212 15:08:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:07.212 ************************************ 00:20:07.212 START TEST nvme_fio 00:20:07.212 ************************************ 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:07.212 15:08:32 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:07.212 15:08:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:07.777 EAL: TSC is not safe to use in SMP mode 00:20:07.777 EAL: TSC is not invariant 00:20:07.777 [2024-07-12 15:08:33.375866] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:07.777 15:08:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:07.777 15:08:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:08.344 EAL: TSC is not safe to use in SMP mode 00:20:08.344 EAL: TSC is not invariant 00:20:08.344 [2024-07-12 15:08:33.968028] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:08.344 15:08:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:08.344 15:08:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:08.344 15:08:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:08.344 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:08.344 fio-3.35 00:20:08.344 Starting 1 thread 00:20:08.911 EAL: TSC is not safe to use in SMP mode 00:20:08.911 EAL: TSC is not invariant 00:20:08.911 [2024-07-12 15:08:34.660487] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:11.442 00:20:11.442 test: (groupid=0, jobs=1): err= 0: pid=101535: Fri Jul 12 15:08:37 2024 00:20:11.442 read: IOPS=47.3k, BW=185MiB/s (194MB/s)(370MiB/2001msec) 00:20:11.442 slat (nsec): min=468, max=21650, avg=550.78, stdev=220.85 00:20:11.442 clat (usec): min=286, max=5691, avg=1354.31, stdev=234.62 00:20:11.442 lat (usec): min=286, max=5713, avg=1354.86, stdev=234.64 00:20:11.442 clat percentiles (usec): 00:20:11.442 | 1.00th=[ 979], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:20:11.442 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1385], 00:20:11.442 | 70.00th=[ 1434], 80.00th=[ 1483], 90.00th=[ 1614], 95.00th=[ 1745], 00:20:11.442 | 99.00th=[ 2180], 99.50th=[ 2376], 99.90th=[ 3097], 99.95th=[ 3228], 00:20:11.442 | 99.99th=[ 3523] 00:20:11.442 bw ( KiB/s): min=176854, max=194752, per=99.46%, avg=188092.67, stdev=9788.38, samples=3 00:20:11.442 iops : min=44213, max=48688, avg=47023.00, stdev=2447.38, samples=3 00:20:11.442 write: IOPS=47.1k, BW=184MiB/s (193MB/s)(368MiB/2001msec); 0 zone resets 00:20:11.442 slat (nsec): min=488, max=14719, avg=729.47, stdev=461.15 00:20:11.442 clat (usec): min=280, max=5674, avg=1353.68, stdev=233.16 00:20:11.442 lat (usec): min=281, max=5677, avg=1354.41, stdev=233.18 00:20:11.442 clat percentiles (usec): 00:20:11.442 | 1.00th=[ 979], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:20:11.442 | 
30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1385], 00:20:11.442 | 70.00th=[ 1434], 80.00th=[ 1483], 90.00th=[ 1614], 95.00th=[ 1745], 00:20:11.442 | 99.00th=[ 2147], 99.50th=[ 2376], 99.90th=[ 3097], 99.95th=[ 3261], 00:20:11.442 | 99.99th=[ 3490] 00:20:11.442 bw ( KiB/s): min=176063, max=192896, per=99.16%, avg=186946.33, stdev=9438.92, samples=3 00:20:11.442 iops : min=44015, max=48224, avg=46736.33, stdev=2360.16, samples=3 00:20:11.442 lat (usec) : 500=0.18%, 750=0.19%, 1000=0.93% 00:20:11.442 lat (msec) : 2=97.25%, 4=1.43%, 10=0.01% 00:20:11.442 cpu : usr=100.00%, sys=0.00%, ctx=22, majf=0, minf=2 00:20:11.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:11.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:11.442 issued rwts: total=94607,94308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:11.442 00:20:11.442 Run status group 0 (all jobs): 00:20:11.442 READ: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=370MiB (388MB), run=2001-2001msec 00:20:11.442 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=368MiB (386MB), run=2001-2001msec 00:20:12.072 15:08:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:12.072 15:08:37 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:12.072 00:20:12.072 real 0m5.029s 00:20:12.072 user 0m2.584s 00:20:12.072 sys 0m2.357s 00:20:12.072 ************************************ 00:20:12.072 END TEST nvme_fio 00:20:12.072 ************************************ 00:20:12.072 15:08:37 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.072 15:08:37 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:12.359 15:08:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:12.359 00:20:12.359 real 0m25.242s 00:20:12.359 user 0m30.305s 00:20:12.359 sys 0m12.388s 00:20:12.359 15:08:37 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.359 ************************************ 00:20:12.359 END TEST nvme 00:20:12.359 ************************************ 00:20:12.359 15:08:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.359 15:08:37 -- common/autotest_common.sh@1142 -- # return 0 00:20:12.359 15:08:37 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:20:12.359 15:08:37 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:12.359 15:08:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.359 15:08:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.359 15:08:37 -- common/autotest_common.sh@10 -- # set +x 00:20:12.359 ************************************ 00:20:12.359 START TEST nvme_scc 00:20:12.359 ************************************ 00:20:12.359 15:08:37 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:12.359 * Looking for test storage... 
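The fio result block above comes from the invocation traced a few lines earlier; stripped of the autotest plumbing it reduces to preloading the SPDK fio plugin and addressing the controller by transport and traddr. Paths and the BDF below are the ones from this run (note the traced --filename writes the BDF with dots rather than colons).

  # Run fio against the PCIe controller through the SPDK NVMe plugin.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  LD_PRELOAD="$plugin" /usr/src/fio/fio "$config" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096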
00:20:12.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:12.359 15:08:38 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.359 15:08:38 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.359 15:08:38 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.359 15:08:38 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.359 15:08:38 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:12.359 15:08:38 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:12.359 15:08:38 nvme_scc -- paths/export.sh@4 -- # export PATH 00:20:12.359 15:08:38 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:12.359 15:08:38 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:12.359 15:08:38 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:12.359 15:08:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:12.359 15:08:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:20:12.359 15:08:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:20:12.359 00:20:12.359 real 0m0.163s 00:20:12.359 user 0m0.098s 00:20:12.359 sys 0m0.135s 00:20:12.359 15:08:38 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.359 15:08:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:20:12.359 ************************************ 00:20:12.359 END TEST nvme_scc 00:20:12.359 ************************************ 00:20:12.359 15:08:38 -- common/autotest_common.sh@1142 -- # return 0 00:20:12.359 15:08:38 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:20:12.359 15:08:38 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:20:12.359 15:08:38 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:20:12.359 15:08:38 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:20:12.359 15:08:38 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:20:12.359 15:08:38 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:12.359 15:08:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.359 15:08:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.359 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:20:12.359 ************************************ 00:20:12.359 START TEST nvme_rpc 00:20:12.359 ************************************ 00:20:12.359 15:08:38 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:12.681 * Looking for test storage... 00:20:12.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69287 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:12.681 15:08:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69287 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 69287 ']' 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.681 15:08:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.681 [2024-07-12 15:08:38.282356] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 
00:20:12.681 [2024-07-12 15:08:38.282567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:12.976 EAL: TSC is not safe to use in SMP mode 00:20:12.976 EAL: TSC is not invariant 00:20:13.234 [2024-07-12 15:08:38.797620] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:13.234 [2024-07-12 15:08:38.902037] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:13.234 [2024-07-12 15:08:38.902098] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:20:13.234 [2024-07-12 15:08:38.904800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.234 [2024-07-12 15:08:38.904789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.493 15:08:39 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.493 15:08:39 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:20:13.493 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:13.750 [2024-07-12 15:08:39.577686] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:20:14.008 Nvme0n1 00:20:14.008 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:14.008 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:14.266 request: 00:20:14.266 { 00:20:14.266 "bdev_name": "Nvme0n1", 00:20:14.266 "filename": "non_existing_file", 00:20:14.266 "method": "bdev_nvme_apply_firmware", 00:20:14.266 "req_id": 1 00:20:14.266 } 00:20:14.266 Got JSON-RPC error response 00:20:14.266 response: 00:20:14.266 { 00:20:14.266 "code": -32603, 00:20:14.266 "message": "open file failed." 
00:20:14.266 } 00:20:14.266 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:14.266 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:14.266 15:08:39 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:14.524 15:08:40 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:14.524 15:08:40 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69287 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 69287 ']' 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 69287 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 69287 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:14.524 killing process with pid 69287 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69287' 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@967 -- # kill 69287 00:20:14.524 15:08:40 nvme_rpc -- common/autotest_common.sh@972 -- # wait 69287 00:20:14.783 00:20:14.783 real 0m2.281s 00:20:14.783 user 0m4.173s 00:20:14.783 sys 0m0.811s 00:20:14.783 15:08:40 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.783 15:08:40 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:14.783 ************************************ 00:20:14.783 END TEST nvme_rpc 00:20:14.783 ************************************ 00:20:14.783 15:08:40 -- common/autotest_common.sh@1142 -- # return 0 00:20:14.783 15:08:40 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:14.783 15:08:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:14.783 15:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.783 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:20:14.783 ************************************ 00:20:14.783 START TEST nvme_rpc_timeouts 00:20:14.783 ************************************ 00:20:14.783 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:14.783 * Looking for test storage... 
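The nvme_rpc block that ends above is a negative test: bdev_nvme_apply_firmware is pointed at a file that does not exist, and the test only requires that the RPC fails (the daemon answers with JSON-RPC error -32603, "open file failed.", as shown). A minimal standalone version, using the same controller name and BDF as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # Expect failure: the firmware image path does not exist.
  if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "expected bdev_nvme_apply_firmware to fail on a missing file" >&2
      exit 1
  fi
  $rpc bdev_nvme_detach_controller Nvme0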
00:20:14.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69324 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69324 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69352 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:20:14.783 15:08:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69352 00:20:14.783 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69352 ']' 00:20:14.784 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.784 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.784 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.784 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.784 15:08:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:14.784 [2024-07-12 15:08:40.608335] Starting SPDK v24.09-pre git sha1 eea7da688 / DPDK 24.03.0 initialization... 00:20:14.784 [2024-07-12 15:08:40.608543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:20:15.350 EAL: TSC is not safe to use in SMP mode 00:20:15.350 EAL: TSC is not invariant 00:20:15.350 [2024-07-12 15:08:41.138903] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:15.608 [2024-07-12 15:08:41.230082] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:15.608 [2024-07-12 15:08:41.230156] app.c: 932:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:20:15.608 [2024-07-12 15:08:41.232969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.608 [2024-07-12 15:08:41.232954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.867 15:08:41 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.867 15:08:41 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:20:15.867 Checking default timeout settings: 00:20:15.867 15:08:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:20:15.867 15:08:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:16.125 Making settings changes with rpc: 00:20:16.125 15:08:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:20:16.125 15:08:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:20:16.383 Check default vs. modified settings: 00:20:16.383 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:20:16.383 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69324 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.641 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69324 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:20:16.642 Setting action_on_timeout is changed as expected. 00:20:16.642 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:20:16.899 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:16.899 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69324 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69324 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:20:16.900 Setting timeout_us is changed as expected. 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69324 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69324 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:20:16.900 Setting timeout_admin_us is changed as expected. 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
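The three "changed as expected" messages above come from a before/after comparison of saved configuration. A condensed sketch of that flow, reusing the temp-file names and the exact option values captured in this run's trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  before=/tmp/settings_default_69324      # file names as captured in this trace
  after=/tmp/settings_modified_69324

  $rpc save_config > "$before"
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
      --action-on-timeout=abort
  $rpc save_config > "$after"

  # Compare each timeout-related field between the two dumps.
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      old=$(grep "$setting" "$before" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      new=$(grep "$setting" "$after"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$old" != "$new" ]; then
          echo "Setting $setting is changed as expected."
      fi
  done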
00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69324 /tmp/settings_modified_69324 00:20:16.900 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69352 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69352 ']' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69352 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69352 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:20:16.900 killing process with pid 69352 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69352' 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69352 00:20:16.900 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69352 00:20:17.156 RPC TIMEOUT SETTING TEST PASSED. 00:20:17.156 15:08:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:20:17.156 00:20:17.156 real 0m2.335s 00:20:17.156 user 0m4.213s 00:20:17.156 sys 0m0.872s 00:20:17.156 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.156 15:08:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:17.156 ************************************ 00:20:17.156 END TEST nvme_rpc_timeouts 00:20:17.156 ************************************ 00:20:17.156 15:08:42 -- common/autotest_common.sh@1142 -- # return 0 00:20:17.156 15:08:42 -- spdk/autotest.sh@243 -- # uname -s 00:20:17.156 15:08:42 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:20:17.156 15:08:42 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:20:17.156 15:08:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:17.156 15:08:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:17.156 15:08:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.157 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:20:17.157 15:08:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:20:17.157 15:08:42 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:20:17.157 15:08:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:20:17.157 15:08:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:20:17.157 15:08:42 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:20:17.157 15:08:42 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:20:17.157 15:08:42 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:20:17.157 15:08:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.157 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:20:17.157 15:08:42 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:20:17.157 15:08:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:17.157 15:08:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:17.157 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:20:17.722 setup.sh cleanup function not yet supported on FreeBSD 00:20:17.722 15:08:43 -- common/autotest_common.sh@1451 -- # return 0 00:20:17.722 15:08:43 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:20:17.722 15:08:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.722 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:20:17.722 15:08:43 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:20:17.722 15:08:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.722 15:08:43 -- common/autotest_common.sh@10 -- # set +x 00:20:17.722 15:08:43 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:17.722 15:08:43 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:17.722 15:08:43 -- spdk/autotest.sh@391 -- # hash lcov 00:20:17.722 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:20:17.980 15:08:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.980 15:08:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:17.980 15:08:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.980 15:08:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.980 15:08:43 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:17.980 15:08:43 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:17.980 15:08:43 -- paths/export.sh@4 -- $ export PATH 00:20:17.980 15:08:43 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:20:17.980 15:08:43 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:17.980 15:08:43 -- common/autobuild_common.sh@444 -- $ date +%s 00:20:17.980 15:08:43 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720796923.XXXXXX 00:20:17.980 15:08:43 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720796923.XXXXXX.iq0dCkrdn7 00:20:17.980 15:08:43 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:20:17.980 15:08:43 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:20:17.980 15:08:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:17.980 15:08:43 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:17.980 15:08:43 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:17.980 15:08:43 -- common/autobuild_common.sh@460 -- $ get_config_params 00:20:17.980 15:08:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:20:17.980 15:08:43 -- common/autotest_common.sh@10 -- $ set +x 00:20:17.980 15:08:43 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:20:17.980 15:08:43 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:20:17.980 15:08:43 -- pm/common@17 -- $ local monitor 00:20:17.980 15:08:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:17.980 15:08:43 -- pm/common@25 -- $ sleep 1 00:20:17.980 15:08:43 -- pm/common@21 -- $ date +%s 00:20:17.980 15:08:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720796923 00:20:17.980 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720796923_collect-vmstat.pm.log 00:20:19.353 15:08:44 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:20:19.353 15:08:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:19.353 15:08:44 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:19.353 15:08:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:19.353 15:08:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:19.353 15:08:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:19.353 15:08:44 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:19.353 15:08:44 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:19.353 15:08:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:19.353 15:08:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:19.353 15:08:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:19.353 15:08:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:19.353 15:08:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:19.353 15:08:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:19.353 15:08:44 -- pm/common@44 -- $ pid=69575 00:20:19.353 15:08:44 -- pm/common@50 -- $ kill -TERM 69575 00:20:19.353 + [[ -n 1231 ]] 00:20:19.353 + sudo kill 1231 00:20:20.296 [Pipeline] } 00:20:20.317 [Pipeline] // timeout 00:20:20.323 [Pipeline] } 00:20:20.341 [Pipeline] // stage 00:20:20.348 [Pipeline] } 00:20:20.364 [Pipeline] // catchError 00:20:20.373 [Pipeline] stage 00:20:20.375 [Pipeline] { (Stop VM) 00:20:20.388 [Pipeline] sh 00:20:20.662 + vagrant halt 00:20:24.842 ==> default: Halting domain... 00:20:46.772 [Pipeline] sh 00:20:47.049 + vagrant destroy -f 00:20:51.236 ==> default: Removing domain... 
00:20:51.248 [Pipeline] sh 00:20:51.527 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output 00:20:51.537 [Pipeline] } 00:20:51.554 [Pipeline] // stage 00:20:51.557 [Pipeline] } 00:20:51.575 [Pipeline] // dir 00:20:51.578 [Pipeline] } 00:20:51.592 [Pipeline] // wrap 00:20:51.597 [Pipeline] } 00:20:51.610 [Pipeline] // catchError 00:20:51.617 [Pipeline] stage 00:20:51.619 [Pipeline] { (Epilogue) 00:20:51.632 [Pipeline] sh 00:20:51.965 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:51.976 [Pipeline] catchError 00:20:51.978 [Pipeline] { 00:20:51.994 [Pipeline] sh 00:20:52.278 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:52.278 Artifacts sizes are good 00:20:52.287 [Pipeline] } 00:20:52.304 [Pipeline] // catchError 00:20:52.315 [Pipeline] archiveArtifacts 00:20:52.321 Archiving artifacts 00:20:52.366 [Pipeline] cleanWs 00:20:52.378 [WS-CLEANUP] Deleting project workspace... 00:20:52.378 [WS-CLEANUP] Deferred wipeout is used... 00:20:52.384 [WS-CLEANUP] done 00:20:52.385 [Pipeline] } 00:20:52.402 [Pipeline] // stage 00:20:52.408 [Pipeline] } 00:20:52.424 [Pipeline] // node 00:20:52.429 [Pipeline] End of Pipeline 00:20:52.461 Finished: SUCCESS