00:00:00.001 Started by upstream project "autotest-per-patch" build number 126254 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.166 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.205 Using shallow fetch with depth 1 00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.205 > git --version # timeout=10 00:00:00.232 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.896 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.907 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.917 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.917 > git config core.sparsecheckout # timeout=10 00:00:06.927 > git read-tree -mu HEAD # timeout=10 00:00:06.941 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.958 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.959 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:07.038 [Pipeline] Start of Pipeline 00:00:07.055 [Pipeline] library 00:00:07.057 Loading library shm_lib@master 00:00:07.232 Library shm_lib@master is cached. Copying from home. 00:00:07.272 [Pipeline] node 00:00:07.361 Running on VM-host-SM17 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:07.364 [Pipeline] { 00:00:07.382 [Pipeline] catchError 00:00:07.385 [Pipeline] { 00:00:07.429 [Pipeline] wrap 00:00:07.444 [Pipeline] { 00:00:07.457 [Pipeline] stage 00:00:07.460 [Pipeline] { (Prologue) 00:00:07.481 [Pipeline] echo 00:00:07.483 Node: VM-host-SM17 00:00:07.488 [Pipeline] cleanWs 00:00:07.495 [WS-CLEANUP] Deleting project workspace... 00:00:07.495 [WS-CLEANUP] Deferred wipeout is used... 
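Note: the checkout above reduces to a shallow, pinned fetch of the jbp pipeline repository. A minimal hand-run equivalent (a sketch; credential, proxy, and timeout handling omitted):

  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # --depth=1 fetches only the tip of master, exactly as logged above
  git fetch --tags --force --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # pin the workspace to the revision the plugin recorded (FETCH_HEAD)
  git checkout -f 7caca6989ac753a10259529aadac5754060382af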
00:00:07.500 [WS-CLEANUP] done 00:00:07.693 [Pipeline] setCustomBuildProperty 00:00:07.762 [Pipeline] httpRequest 00:00:07.775 [Pipeline] echo 00:00:07.776 Sorcerer 10.211.164.101 is alive 00:00:07.782 [Pipeline] httpRequest 00:00:07.784 HttpMethod: GET 00:00:07.785 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.785 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.794 Response Code: HTTP/1.1 200 OK 00:00:07.795 Success: Status code 200 is in the accepted range: 200,404 00:00:07.795 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.932 [Pipeline] sh 00:00:10.211 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.227 [Pipeline] httpRequest 00:00:10.305 [Pipeline] echo 00:00:10.306 Sorcerer 10.211.164.101 is alive 00:00:10.314 [Pipeline] httpRequest 00:00:10.318 HttpMethod: GET 00:00:10.319 URL: http://10.211.164.101/packages/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:10.319 Sending request to url: http://10.211.164.101/packages/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:10.334 Response Code: HTTP/1.1 200 OK 00:00:10.334 Success: Status code 200 is in the accepted range: 200,404 00:00:10.335 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:46.834 [Pipeline] sh 00:00:47.111 + tar --no-same-owner -xf spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:50.474 [Pipeline] sh 00:00:50.751 + git -C spdk log --oneline -n5 00:00:50.751 a83ad116a scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:00:50.751 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
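Note: the two httpRequest steps above simply mirror pinned tarballs of jbp and spdk from the internal package cache ("Sorcerer") into the workspace. By hand this is roughly the following, with curl standing in for the Jenkins httpRequest step (which also treats a 404 as acceptable):

  base=http://10.211.164.101/packages
  curl -fsS -O "$base/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz"
  # --no-same-owner: extract files as the invoking user, not the archived owner
  tar --no-same-owner -xf spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz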
00:00:50.751 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:50.751 2d30d9f83 accel: introduce tasks in sequence limit 00:00:50.751 2728651ee accel: adjust task per ch define name 00:00:50.771 [Pipeline] writeFile 00:00:50.787 [Pipeline] sh 00:00:51.067 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:51.078 [Pipeline] sh 00:00:51.356 + cat autorun-spdk.conf 00:00:51.356 SPDK_TEST_UNITTEST=1 00:00:51.356 SPDK_RUN_VALGRIND=0 00:00:51.356 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.356 SPDK_TEST_NVME=1 00:00:51.356 SPDK_TEST_BLOCKDEV=1 00:00:51.356 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:51.362 RUN_NIGHTLY=0 00:00:51.367 [Pipeline] } 00:00:51.388 [Pipeline] // stage 00:00:51.410 [Pipeline] stage 00:00:51.413 [Pipeline] { (Run VM) 00:00:51.431 [Pipeline] sh 00:00:51.712 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:51.712 + echo 'Start stage prepare_nvme.sh' 00:00:51.712 Start stage prepare_nvme.sh 00:00:51.712 + [[ -n 2 ]] 00:00:51.712 + disk_prefix=ex2 00:00:51.712 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:00:51.712 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:00:51.712 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:00:51.712 ++ SPDK_TEST_UNITTEST=1 00:00:51.712 ++ SPDK_RUN_VALGRIND=0 00:00:51.712 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.712 ++ SPDK_TEST_NVME=1 00:00:51.712 ++ SPDK_TEST_BLOCKDEV=1 00:00:51.712 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:51.712 ++ RUN_NIGHTLY=0 00:00:51.712 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:51.712 + nvme_files=() 00:00:51.712 + declare -A nvme_files 00:00:51.712 + backend_dir=/var/lib/libvirt/images/backends 00:00:51.712 + nvme_files['nvme.img']=5G 00:00:51.712 + nvme_files['nvme-cmb.img']=5G 00:00:51.712 + nvme_files['nvme-multi0.img']=4G 00:00:51.712 + nvme_files['nvme-multi1.img']=4G 00:00:51.712 + nvme_files['nvme-multi2.img']=4G 00:00:51.712 + nvme_files['nvme-openstack.img']=8G 00:00:51.712 + nvme_files['nvme-zns.img']=5G 00:00:51.712 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:51.712 + (( SPDK_TEST_FTL == 1 )) 00:00:51.712 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:51.712 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:51.712 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.712 + for nvme in "${!nvme_files[@]}" 00:00:51.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:52.026 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.026 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:52.026 + echo 'End stage prepare_nvme.sh' 00:00:52.026 End stage prepare_nvme.sh 00:00:52.038 [Pipeline] sh 00:00:52.317 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:52.317 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f freebsd14 00:00:52.317 00:00:52.317 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:00:52.317 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:00:52.317 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:00:52.317 HELP=0 00:00:52.317 DRY_RUN=0 00:00:52.317 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img, 00:00:52.317 NVME_DISKS_TYPE=nvme, 00:00:52.317 NVME_AUTO_CREATE=0 00:00:52.317 NVME_DISKS_NAMESPACES=, 00:00:52.317 NVME_CMB=, 00:00:52.317 NVME_PMR=, 00:00:52.317 NVME_ZNS=, 00:00:52.317 NVME_MS=, 00:00:52.317 NVME_FDP=, 00:00:52.317 SPDK_VAGRANT_DISTRO=freebsd14 00:00:52.317 SPDK_VAGRANT_VMCPU=10 00:00:52.317 SPDK_VAGRANT_VMRAM=14336 00:00:52.317 SPDK_VAGRANT_PROVIDER=libvirt 00:00:52.317 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:52.317 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:52.317 SPDK_OPENSTACK_NETWORK=0 
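Note: the prepare_nvme.sh stage above pre-creates one raw backing file per NVMe test flavor. Condensed, and assuming qemu-img is what create_nvme_img.sh wraps (the "Formatting ... preallocation=falloc" lines match qemu-img output), the loop is essentially:

  declare -A nvme_files=(
    [nvme.img]=5G  [nvme-cmb.img]=5G  [nvme-zns.img]=5G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
    [nvme-openstack.img]=8G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for img in "${!nvme_files[@]}"; do
    # "ex2" is this executor's disk_prefix, set at the top of the stage
    qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex2-$img" "${nvme_files[$img]}"
  done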
00:00:52.317 VAGRANT_PACKAGE_BOX=0 00:00:52.317 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:52.317 FORCE_DISTRO=true 00:00:52.317 VAGRANT_BOX_VERSION= 00:00:52.317 EXTRA_VAGRANTFILES= 00:00:52.317 NIC_MODEL=e1000 00:00:52.317 00:00:52.317 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt' 00:00:52.317 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:55.603 Bringing machine 'default' up with 'libvirt' provider... 00:00:56.173 ==> default: Creating image (snapshot of base box volume). 00:00:56.173 ==> default: Creating domain with the following settings... 00:00:56.173 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721079430_1861d60485caa1a12f2c 00:00:56.173 ==> default: -- Domain type: kvm 00:00:56.173 ==> default: -- Cpus: 10 00:00:56.173 ==> default: -- Feature: acpi 00:00:56.173 ==> default: -- Feature: apic 00:00:56.173 ==> default: -- Feature: pae 00:00:56.173 ==> default: -- Memory: 14336M 00:00:56.173 ==> default: -- Memory Backing: hugepages: 00:00:56.173 ==> default: -- Management MAC: 00:00:56.173 ==> default: -- Loader: 00:00:56.173 ==> default: -- Nvram: 00:00:56.173 ==> default: -- Base box: spdk/freebsd14 00:00:56.173 ==> default: -- Storage pool: default 00:00:56.173 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721079430_1861d60485caa1a12f2c.img (32G) 00:00:56.173 ==> default: -- Volume Cache: default 00:00:56.173 ==> default: -- Kernel: 00:00:56.173 ==> default: -- Initrd: 00:00:56.173 ==> default: -- Graphics Type: vnc 00:00:56.173 ==> default: -- Graphics Port: -1 00:00:56.173 ==> default: -- Graphics IP: 127.0.0.1 00:00:56.173 ==> default: -- Graphics Password: Not defined 00:00:56.173 ==> default: -- Video Type: cirrus 00:00:56.173 ==> default: -- Video VRAM: 9216 00:00:56.173 ==> default: -- Sound Type: 00:00:56.173 ==> default: -- Keymap: en-us 00:00:56.173 ==> default: -- TPM Path: 00:00:56.173 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:56.173 ==> default: -- Command line args: 00:00:56.173 ==> default: -> value=-device, 00:00:56.173 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:56.173 ==> default: -> value=-drive, 00:00:56.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:56.173 ==> default: -> value=-device, 00:00:56.173 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.432 ==> default: Creating shared folders metadata... 00:00:56.432 ==> default: Starting domain. 00:00:58.335 ==> default: Waiting for domain to get an IP address... 00:01:20.371 ==> default: Waiting for SSH to become available... 00:01:30.334 ==> default: Configuring and enabling network interfaces... 00:01:36.898 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.805 ==> default: Mounting SSHFS shared folder... 00:01:51.805 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.805 ==> default: Checking Mount.. 00:01:53.188 ==> default: Folder Successfully Mounted! 00:01:53.188 ==> default: Running provisioner: file... 00:01:54.121 default: ~/.gitconfig => .gitconfig 00:01:54.382 00:01:54.382 SUCCESS! 
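Note: the generated -device/-drive arguments above are what expose the raw backend file as an NVMe namespace inside the guest. Stripped of the libvirt wrapping (machine/accel flags assumed; device arguments verbatim from the log), the QEMU wiring is essentially:

  qemu-system-x86_64 -machine q35,accel=kvm -smp 10 -m 14336 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096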
00:01:54.382 00:01:54.382 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt and type "vagrant ssh" to use. 00:01:54.382 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:54.382 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt" to destroy all trace of vm. 00:01:54.382 00:01:54.391 [Pipeline] } 00:01:54.409 [Pipeline] // stage 00:01:54.418 [Pipeline] dir 00:01:54.419 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt 00:01:54.420 [Pipeline] { 00:01:54.465 [Pipeline] catchError 00:01:54.467 [Pipeline] { 00:01:54.482 [Pipeline] sh 00:01:54.758 + vagrant ssh-config --host vagrant 00:01:54.758 + sed -ne /^Host/,$p 00:01:54.758 + tee ssh_conf 00:01:58.941 Host vagrant 00:01:58.941 HostName 192.168.121.207 00:01:58.941 User vagrant 00:01:58.941 Port 22 00:01:58.941 UserKnownHostsFile /dev/null 00:01:58.941 StrictHostKeyChecking no 00:01:58.941 PasswordAuthentication no 00:01:58.941 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:01:58.941 IdentitiesOnly yes 00:01:58.941 LogLevel FATAL 00:01:58.941 ForwardAgent yes 00:01:58.941 ForwardX11 yes 00:01:58.941 00:01:58.953 [Pipeline] withEnv 00:01:58.955 [Pipeline] { 00:01:58.971 [Pipeline] sh 00:01:59.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:59.248 source /etc/os-release 00:01:59.248 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.248 # Minimal, systemd-like check. 00:01:59.248 if [[ -e /.dockerenv ]]; then 00:01:59.248 # Clear garbage from the node's name: 00:01:59.248 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.248 # $HOSTNAME is the actual container id 00:01:59.248 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.248 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.248 # We can assume this is a mount from a host where container is running, 00:01:59.248 # so fetch its hostname to easily identify the target swarm worker. 
00:01:59.248 container="$(< /etc/hostname) ($agent)" 00:01:59.248 else 00:01:59.248 # Fallback 00:01:59.248 container=$agent 00:01:59.248 fi 00:01:59.248 fi 00:01:59.248 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.248 00:01:59.259 [Pipeline] } 00:01:59.279 [Pipeline] // withEnv 00:01:59.287 [Pipeline] setCustomBuildProperty 00:01:59.298 [Pipeline] stage 00:01:59.300 [Pipeline] { (Tests) 00:01:59.316 [Pipeline] sh 00:01:59.595 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.609 [Pipeline] sh 00:01:59.888 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.162 [Pipeline] timeout 00:02:00.163 Timeout set to expire in 1 hr 30 min 00:02:00.165 [Pipeline] { 00:02:00.181 [Pipeline] sh 00:02:00.460 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:01.027 HEAD is now at a83ad116a scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:02:01.042 [Pipeline] sh 00:02:01.318 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.331 [Pipeline] sh 00:02:01.606 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.882 [Pipeline] sh 00:02:02.162 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:02.162 ++ readlink -f spdk_repo 00:02:02.162 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.162 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.162 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.162 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.162 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.162 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.162 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.162 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:02.162 + cd /home/vagrant/spdk_repo 00:02:02.162 + source /etc/os-release 00:02:02.162 ++ NAME=FreeBSD 00:02:02.162 ++ VERSION=14.0-RELEASE 00:02:02.162 ++ VERSION_ID=14.0 00:02:02.162 ++ ID=freebsd 00:02:02.162 ++ ANSI_COLOR='0;31' 00:02:02.162 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:02:02.162 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:02:02.162 ++ HOME_URL=https://FreeBSD.org/ 00:02:02.162 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:02.162 + uname -a 00:02:02.162 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:02:02.162 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:02.486 Contigmem (not present) 00:02:02.486 Buffer Size: not set 00:02:02.486 Num Buffers: not set 00:02:02.486 00:02:02.486 00:02:02.486 Type BDF Vendor Device Driver 00:02:02.486 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:02:02.486 + rm -f /tmp/spdk-ld-path 00:02:02.486 + source autorun-spdk.conf 00:02:02.486 ++ SPDK_TEST_UNITTEST=1 00:02:02.486 ++ SPDK_RUN_VALGRIND=0 00:02:02.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.486 ++ SPDK_TEST_NVME=1 00:02:02.486 ++ SPDK_TEST_BLOCKDEV=1 00:02:02.486 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.486 ++ RUN_NIGHTLY=0 00:02:02.486 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.486 + [[ -n '' ]] 00:02:02.486 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:02.486 + for M in /var/spdk/build-*-manifest.txt 00:02:02.486 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.486 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.487 + for M in /var/spdk/build-*-manifest.txt 00:02:02.487 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.487 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.487 ++ uname 00:02:02.487 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:02.487 + dmesg_pid=1231 00:02:02.487 + [[ FreeBSD == FreeBSD ]] 00:02:02.487 + export LC_ALL=C LC_CTYPE=C 00:02:02.487 + LC_ALL=C 00:02:02.487 + LC_CTYPE=C 00:02:02.487 + tail -F /var/log/messages 00:02:02.487 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.487 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.487 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.487 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.487 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.487 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:02.487 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.487 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:02.487 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:02.487 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:02.487 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.487 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.487 Test configuration: 00:02:02.487 SPDK_TEST_UNITTEST=1 00:02:02.487 SPDK_RUN_VALGRIND=0 00:02:02.487 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.487 SPDK_TEST_NVME=1 00:02:02.487 SPDK_TEST_BLOCKDEV=1 00:02:02.487 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.487 RUN_NIGHTLY=0 21:38:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:02.487 21:38:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.487 21:38:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.487 21:38:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.487 21:38:17 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:02.487 21:38:17 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:02.487 21:38:17 -- paths/export.sh@4 -- $ export PATH 00:02:02.487 21:38:17 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:02.487 21:38:17 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:02.487 21:38:17 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:02.487 21:38:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721079497.XXXXXX 00:02:02.487 21:38:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721079497.XXXXXX.EaLozCLRgg 00:02:02.487 21:38:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:02.487 21:38:17 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:02.487 21:38:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:02.487 21:38:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:02.487 21:38:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.487 21:38:17 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:02.487 21:38:17 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:02:02.487 21:38:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.487 21:38:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:02.487 21:38:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:02.487 21:38:17 -- pm/common@17 -- $ local monitor 00:02:02.487 21:38:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.487 21:38:17 -- pm/common@25 -- $ sleep 1 00:02:02.487 21:38:17 -- 
pm/common@21 -- $ date +%s 00:02:02.487 21:38:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721079497 00:02:02.487 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721079497_collect-vmstat.pm.log 00:02:03.863 21:38:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:03.863 21:38:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.863 21:38:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.863 21:38:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.863 21:38:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.863 Mon Jul 15 21:38:18 UTC 2024 00:02:03.863 21:38:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.863 v24.09-pre-210-ga83ad116a 00:02:03.863 21:38:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.863 21:38:18 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:03.863 21:38:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:03.863 21:38:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.863 21:38:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.863 21:38:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.863 21:38:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.863 21:38:18 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:03.863 21:38:18 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:03.863 21:38:18 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:02:03.863 21:38:18 -- common/autotest_common.sh@1093 -- $ '[' 2 -le 1 ']' 00:02:03.863 21:38:18 -- common/autotest_common.sh@1099 -- $ xtrace_disable 00:02:03.863 21:38:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.863 ************************************ 00:02:03.863 START TEST unittest_build 00:02:03.863 ************************************ 00:02:03.863 21:38:18 unittest_build -- common/autotest_common.sh@1117 -- $ _unittest_build 00:02:03.863 21:38:18 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:04.430 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:04.430 are only supported on Linux. Turning off default feature. 00:02:04.430 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:04.430 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.996 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:04.996 Using 'verbs' RDMA provider 00:02:15.537 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:25.553 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:25.553 Creating mk/config.mk...done. 00:02:25.553 Creating mk/cc.flags.mk...done. 00:02:25.553 Type 'gmake' to build. 00:02:25.553 21:38:39 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10 00:02:25.553 gmake[1]: Nothing to be done for 'all'. 
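Note: with resource monitoring started, autobuild's FreeBSD unittest path boils down to a plain configure-and-make inside the guest; it can be reproduced by hand with the exact flags logged above (vhost, rte_vhost, virtio, and fuse are auto-disabled on FreeBSD, as the configure notice says):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --without-shared
  gmake -j10   # FreeBSD ships BSD make as 'make'; SPDK needs GNU make, i.e. 'gmake'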
00:02:28.834 ps: stdin: not a terminal 00:02:34.160 The Meson build system 00:02:34.160 Version: 1.4.0 00:02:34.160 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:34.160 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:34.160 Build type: native build 00:02:34.160 Program cat found: YES (/bin/cat) 00:02:34.160 Project name: DPDK 00:02:34.160 Project version: 24.03.0 00:02:34.160 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:02:34.160 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:02:34.160 Host machine cpu family: x86_64 00:02:34.160 Host machine cpu: x86_64 00:02:34.160 Message: ## Building in Developer Mode ## 00:02:34.160 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:34.160 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.160 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.160 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:34.160 Program cat found: YES (/bin/cat) 00:02:34.160 Compiler for C supports arguments -march=native: YES 00:02:34.160 Checking for size of "void *" : 8 00:02:34.160 Checking for size of "void *" : 8 (cached) 00:02:34.160 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:34.160 Library m found: YES 00:02:34.160 Library numa found: NO 00:02:34.160 Library fdt found: NO 00:02:34.160 Library execinfo found: YES 00:02:34.160 Has header "execinfo.h" : YES 00:02:34.160 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:02:34.160 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.160 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.160 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.160 Run-time dependency openssl found: YES 3.0.13 00:02:34.160 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:34.160 Library pcap found: YES 00:02:34.160 Has header "pcap.h" with dependency -lpcap: YES 00:02:34.160 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.160 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.160 Compiler for C supports arguments -Wformat: YES 00:02:34.160 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:34.160 Compiler for C supports arguments -Wformat-security: YES 00:02:34.160 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.160 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.160 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.160 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.160 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.160 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.160 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.160 Compiler for C supports arguments -Wundef: YES 00:02:34.160 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.160 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.160 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:34.160 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.160 Compiler for C supports arguments -mavx512f: YES 00:02:34.160 Checking if "AVX512 checking" compiles: YES 00:02:34.160 Fetching value of define "__SSE4_2__" : 1 00:02:34.160 Fetching value of 
define "__AES__" : 1 00:02:34.160 Fetching value of define "__AVX__" : 1 00:02:34.160 Fetching value of define "__AVX2__" : 1 00:02:34.160 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.160 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.160 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.160 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.160 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.160 Fetching value of define "__PCLMUL__" : 1 00:02:34.160 Fetching value of define "__RDRND__" : 1 00:02:34.160 Fetching value of define "__RDSEED__" : 1 00:02:34.160 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.160 Fetching value of define "__znver1__" : (undefined) 00:02:34.160 Fetching value of define "__znver2__" : (undefined) 00:02:34.160 Fetching value of define "__znver3__" : (undefined) 00:02:34.160 Fetching value of define "__znver4__" : (undefined) 00:02:34.160 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:34.160 Message: lib/log: Defining dependency "log" 00:02:34.160 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.160 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.160 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:34.160 Checking for function "getentropy" : YES 00:02:34.160 Message: lib/eal: Defining dependency "eal" 00:02:34.160 Message: lib/ring: Defining dependency "ring" 00:02:34.160 Message: lib/rcu: Defining dependency "rcu" 00:02:34.160 Message: lib/mempool: Defining dependency "mempool" 00:02:34.160 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.160 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.160 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.160 Compiler for C supports arguments -mpclmul: YES 00:02:34.160 Compiler for C supports arguments -maes: YES 00:02:34.160 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.160 Compiler for C supports arguments -mavx512bw: YES 00:02:34.160 Compiler for C supports arguments -mavx512dq: YES 00:02:34.160 Compiler for C supports arguments -mavx512vl: YES 00:02:34.160 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.160 Compiler for C supports arguments -mavx2: YES 00:02:34.160 Compiler for C supports arguments -mavx: YES 00:02:34.160 Message: lib/net: Defining dependency "net" 00:02:34.160 Message: lib/meter: Defining dependency "meter" 00:02:34.160 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.160 Message: lib/pci: Defining dependency "pci" 00:02:34.160 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.160 Message: lib/hash: Defining dependency "hash" 00:02:34.160 Message: lib/timer: Defining dependency "timer" 00:02:34.160 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.160 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.160 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.160 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.160 Message: lib/reorder: Defining dependency "reorder" 00:02:34.160 Message: lib/security: Defining dependency "security" 00:02:34.160 Has header "linux/userfaultfd.h" : NO 00:02:34.160 Has header "linux/vduse.h" : NO 00:02:34.160 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:34.160 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.160 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.160 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:02:34.160 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.160 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.160 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.160 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:34.160 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.160 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.160 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.160 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.160 Configuring doxy-api-html.conf using configuration 00:02:34.160 Configuring doxy-api-man.conf using configuration 00:02:34.160 Program mandb found: NO 00:02:34.160 Program sphinx-build found: NO 00:02:34.160 Configuring rte_build_config.h using configuration 00:02:34.160 Message: 00:02:34.160 ================= 00:02:34.160 Applications Enabled 00:02:34.160 ================= 00:02:34.160 00:02:34.160 apps: 00:02:34.160 00:02:34.160 00:02:34.160 Message: 00:02:34.160 ================= 00:02:34.160 Libraries Enabled 00:02:34.160 ================= 00:02:34.160 00:02:34.160 libs: 00:02:34.160 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.160 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.160 cryptodev, dmadev, reorder, security, 00:02:34.160 00:02:34.160 Message: 00:02:34.160 =============== 00:02:34.160 Drivers Enabled 00:02:34.160 =============== 00:02:34.160 00:02:34.160 common: 00:02:34.160 00:02:34.160 bus: 00:02:34.160 pci, vdev, 00:02:34.160 mempool: 00:02:34.160 ring, 00:02:34.160 dma: 00:02:34.160 00:02:34.160 net: 00:02:34.160 00:02:34.160 crypto: 00:02:34.160 00:02:34.160 compress: 00:02:34.160 00:02:34.160 00:02:34.160 Message: 00:02:34.160 ================= 00:02:34.160 Content Skipped 00:02:34.160 ================= 00:02:34.160 00:02:34.160 apps: 00:02:34.160 dumpcap: explicitly disabled via build config 00:02:34.160 graph: explicitly disabled via build config 00:02:34.160 pdump: explicitly disabled via build config 00:02:34.160 proc-info: explicitly disabled via build config 00:02:34.160 test-acl: explicitly disabled via build config 00:02:34.160 test-bbdev: explicitly disabled via build config 00:02:34.160 test-cmdline: explicitly disabled via build config 00:02:34.160 test-compress-perf: explicitly disabled via build config 00:02:34.160 test-crypto-perf: explicitly disabled via build config 00:02:34.160 test-dma-perf: explicitly disabled via build config 00:02:34.160 test-eventdev: explicitly disabled via build config 00:02:34.160 test-fib: explicitly disabled via build config 00:02:34.160 test-flow-perf: explicitly disabled via build config 00:02:34.160 test-gpudev: explicitly disabled via build config 00:02:34.160 test-mldev: explicitly disabled via build config 00:02:34.160 test-pipeline: explicitly disabled via build config 00:02:34.160 test-pmd: explicitly disabled via build config 00:02:34.160 test-regex: explicitly disabled via build config 00:02:34.160 test-sad: explicitly disabled via build config 00:02:34.160 test-security-perf: explicitly disabled via build config 00:02:34.160 00:02:34.160 libs: 00:02:34.160 argparse: explicitly disabled via build config 00:02:34.160 metrics: explicitly disabled via build config 00:02:34.160 acl: explicitly disabled via build config 00:02:34.160 bbdev: explicitly disabled via build config 00:02:34.160 bitratestats: 
explicitly disabled via build config 00:02:34.160 bpf: explicitly disabled via build config 00:02:34.160 cfgfile: explicitly disabled via build config 00:02:34.160 distributor: explicitly disabled via build config 00:02:34.160 efd: explicitly disabled via build config 00:02:34.160 eventdev: explicitly disabled via build config 00:02:34.160 dispatcher: explicitly disabled via build config 00:02:34.160 gpudev: explicitly disabled via build config 00:02:34.160 gro: explicitly disabled via build config 00:02:34.160 gso: explicitly disabled via build config 00:02:34.160 ip_frag: explicitly disabled via build config 00:02:34.160 jobstats: explicitly disabled via build config 00:02:34.160 latencystats: explicitly disabled via build config 00:02:34.160 lpm: explicitly disabled via build config 00:02:34.160 member: explicitly disabled via build config 00:02:34.160 pcapng: explicitly disabled via build config 00:02:34.160 power: only supported on Linux 00:02:34.160 rawdev: explicitly disabled via build config 00:02:34.160 regexdev: explicitly disabled via build config 00:02:34.160 mldev: explicitly disabled via build config 00:02:34.160 rib: explicitly disabled via build config 00:02:34.160 sched: explicitly disabled via build config 00:02:34.160 stack: explicitly disabled via build config 00:02:34.160 vhost: only supported on Linux 00:02:34.160 ipsec: explicitly disabled via build config 00:02:34.160 pdcp: explicitly disabled via build config 00:02:34.160 fib: explicitly disabled via build config 00:02:34.160 port: explicitly disabled via build config 00:02:34.160 pdump: explicitly disabled via build config 00:02:34.161 table: explicitly disabled via build config 00:02:34.161 pipeline: explicitly disabled via build config 00:02:34.161 graph: explicitly disabled via build config 00:02:34.161 node: explicitly disabled via build config 00:02:34.161 00:02:34.161 drivers: 00:02:34.161 common/cpt: not in enabled drivers build config 00:02:34.161 common/dpaax: not in enabled drivers build config 00:02:34.161 common/iavf: not in enabled drivers build config 00:02:34.161 common/idpf: not in enabled drivers build config 00:02:34.161 common/ionic: not in enabled drivers build config 00:02:34.161 common/mvep: not in enabled drivers build config 00:02:34.161 common/octeontx: not in enabled drivers build config 00:02:34.161 bus/auxiliary: not in enabled drivers build config 00:02:34.161 bus/cdx: not in enabled drivers build config 00:02:34.161 bus/dpaa: not in enabled drivers build config 00:02:34.161 bus/fslmc: not in enabled drivers build config 00:02:34.161 bus/ifpga: not in enabled drivers build config 00:02:34.161 bus/platform: not in enabled drivers build config 00:02:34.161 bus/uacce: not in enabled drivers build config 00:02:34.161 bus/vmbus: not in enabled drivers build config 00:02:34.161 common/cnxk: not in enabled drivers build config 00:02:34.161 common/mlx5: not in enabled drivers build config 00:02:34.161 common/nfp: not in enabled drivers build config 00:02:34.161 common/nitrox: not in enabled drivers build config 00:02:34.161 common/qat: not in enabled drivers build config 00:02:34.161 common/sfc_efx: not in enabled drivers build config 00:02:34.161 mempool/bucket: not in enabled drivers build config 00:02:34.161 mempool/cnxk: not in enabled drivers build config 00:02:34.161 mempool/dpaa: not in enabled drivers build config 00:02:34.161 mempool/dpaa2: not in enabled drivers build config 00:02:34.161 mempool/octeontx: not in enabled drivers build config 00:02:34.161 mempool/stack: not in enabled 
drivers build config 00:02:34.161 dma/cnxk: not in enabled drivers build config 00:02:34.161 dma/dpaa: not in enabled drivers build config 00:02:34.161 dma/dpaa2: not in enabled drivers build config 00:02:34.161 dma/hisilicon: not in enabled drivers build config 00:02:34.161 dma/idxd: not in enabled drivers build config 00:02:34.161 dma/ioat: not in enabled drivers build config 00:02:34.161 dma/skeleton: not in enabled drivers build config 00:02:34.161 net/af_packet: not in enabled drivers build config 00:02:34.161 net/af_xdp: not in enabled drivers build config 00:02:34.161 net/ark: not in enabled drivers build config 00:02:34.161 net/atlantic: not in enabled drivers build config 00:02:34.161 net/avp: not in enabled drivers build config 00:02:34.161 net/axgbe: not in enabled drivers build config 00:02:34.161 net/bnx2x: not in enabled drivers build config 00:02:34.161 net/bnxt: not in enabled drivers build config 00:02:34.161 net/bonding: not in enabled drivers build config 00:02:34.161 net/cnxk: not in enabled drivers build config 00:02:34.161 net/cpfl: not in enabled drivers build config 00:02:34.161 net/cxgbe: not in enabled drivers build config 00:02:34.161 net/dpaa: not in enabled drivers build config 00:02:34.161 net/dpaa2: not in enabled drivers build config 00:02:34.161 net/e1000: not in enabled drivers build config 00:02:34.161 net/ena: not in enabled drivers build config 00:02:34.161 net/enetc: not in enabled drivers build config 00:02:34.161 net/enetfec: not in enabled drivers build config 00:02:34.161 net/enic: not in enabled drivers build config 00:02:34.161 net/failsafe: not in enabled drivers build config 00:02:34.161 net/fm10k: not in enabled drivers build config 00:02:34.161 net/gve: not in enabled drivers build config 00:02:34.161 net/hinic: not in enabled drivers build config 00:02:34.161 net/hns3: not in enabled drivers build config 00:02:34.161 net/i40e: not in enabled drivers build config 00:02:34.161 net/iavf: not in enabled drivers build config 00:02:34.161 net/ice: not in enabled drivers build config 00:02:34.161 net/idpf: not in enabled drivers build config 00:02:34.161 net/igc: not in enabled drivers build config 00:02:34.161 net/ionic: not in enabled drivers build config 00:02:34.161 net/ipn3ke: not in enabled drivers build config 00:02:34.161 net/ixgbe: not in enabled drivers build config 00:02:34.161 net/mana: not in enabled drivers build config 00:02:34.161 net/memif: not in enabled drivers build config 00:02:34.161 net/mlx4: not in enabled drivers build config 00:02:34.161 net/mlx5: not in enabled drivers build config 00:02:34.161 net/mvneta: not in enabled drivers build config 00:02:34.161 net/mvpp2: not in enabled drivers build config 00:02:34.161 net/netvsc: not in enabled drivers build config 00:02:34.161 net/nfb: not in enabled drivers build config 00:02:34.161 net/nfp: not in enabled drivers build config 00:02:34.161 net/ngbe: not in enabled drivers build config 00:02:34.161 net/null: not in enabled drivers build config 00:02:34.161 net/octeontx: not in enabled drivers build config 00:02:34.161 net/octeon_ep: not in enabled drivers build config 00:02:34.161 net/pcap: not in enabled drivers build config 00:02:34.161 net/pfe: not in enabled drivers build config 00:02:34.161 net/qede: not in enabled drivers build config 00:02:34.161 net/ring: not in enabled drivers build config 00:02:34.161 net/sfc: not in enabled drivers build config 00:02:34.161 net/softnic: not in enabled drivers build config 00:02:34.161 net/tap: not in enabled drivers build config 
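Note: every line reported as "explicitly disabled via build config" or "not in enabled drivers build config" is pruned through DPDK's meson options rather than at compile time. Matching the "User defined options" summary printed just below, the setup corresponds roughly to:

  meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=static -Dlibdir=lib -Dprefix=/ \
    -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
    -Denable_kmods=true -Dmax_lcores=128 -Dtests=false -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table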
00:02:34.161 net/thunderx: not in enabled drivers build config 00:02:34.161 net/txgbe: not in enabled drivers build config 00:02:34.161 net/vdev_netvsc: not in enabled drivers build config 00:02:34.161 net/vhost: not in enabled drivers build config 00:02:34.161 net/virtio: not in enabled drivers build config 00:02:34.161 net/vmxnet3: not in enabled drivers build config 00:02:34.161 raw/*: missing internal dependency, "rawdev" 00:02:34.161 crypto/armv8: not in enabled drivers build config 00:02:34.161 crypto/bcmfs: not in enabled drivers build config 00:02:34.161 crypto/caam_jr: not in enabled drivers build config 00:02:34.161 crypto/ccp: not in enabled drivers build config 00:02:34.161 crypto/cnxk: not in enabled drivers build config 00:02:34.161 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.161 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.161 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.161 crypto/mlx5: not in enabled drivers build config 00:02:34.161 crypto/mvsam: not in enabled drivers build config 00:02:34.161 crypto/nitrox: not in enabled drivers build config 00:02:34.161 crypto/null: not in enabled drivers build config 00:02:34.161 crypto/octeontx: not in enabled drivers build config 00:02:34.161 crypto/openssl: not in enabled drivers build config 00:02:34.161 crypto/scheduler: not in enabled drivers build config 00:02:34.161 crypto/uadk: not in enabled drivers build config 00:02:34.161 crypto/virtio: not in enabled drivers build config 00:02:34.161 compress/isal: not in enabled drivers build config 00:02:34.161 compress/mlx5: not in enabled drivers build config 00:02:34.161 compress/nitrox: not in enabled drivers build config 00:02:34.161 compress/octeontx: not in enabled drivers build config 00:02:34.161 compress/zlib: not in enabled drivers build config 00:02:34.161 regex/*: missing internal dependency, "regexdev" 00:02:34.161 ml/*: missing internal dependency, "mldev" 00:02:34.161 vdpa/*: missing internal dependency, "vhost" 00:02:34.161 event/*: missing internal dependency, "eventdev" 00:02:34.161 baseband/*: missing internal dependency, "bbdev" 00:02:34.161 gpu/*: missing internal dependency, "gpudev" 00:02:34.161 00:02:34.161 00:02:34.161 Build targets in project: 81 00:02:34.161 00:02:34.161 DPDK 24.03.0 00:02:34.161 00:02:34.161 User defined options 00:02:34.161 buildtype : debug 00:02:34.161 default_library : static 00:02:34.161 libdir : lib 00:02:34.161 prefix : / 00:02:34.161 c_args : -fPIC -Werror 00:02:34.161 c_link_args : 00:02:34.161 cpu_instruction_set: native 00:02:34.161 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:34.161 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:34.161 enable_docs : false 00:02:34.161 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.161 enable_kmods : true 00:02:34.161 max_lcores : 128 00:02:34.161 tests : false 00:02:34.161 00:02:34.161 Found ninja-1.11.1 at /usr/local/bin/ninja 00:02:34.420 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:34.686 [1/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.686 [2/233] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.686 [3/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.686 [4/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:34.686 [5/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.686 [6/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.977 [7/233] Linking static target lib/librte_log.a 00:02:34.977 [8/233] Linking static target lib/librte_kvargs.a 00:02:34.977 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.977 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.235 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.235 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.235 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.235 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.235 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.235 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.235 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.235 [18/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.494 [19/233] Linking static target lib/librte_telemetry.a 00:02:35.494 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.494 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.752 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.752 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.752 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.752 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.752 [26/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.752 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.011 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.011 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.011 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.011 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.011 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.011 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.011 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.011 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.269 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.269 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.269 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.269 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.269 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.269 [41/233] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.528 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.528 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.528 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.528 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.786 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.787 [47/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.045 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.045 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.045 [50/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.045 [51/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.045 [52/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.045 [53/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:37.045 [54/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.045 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.305 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.305 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.305 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.564 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:37.564 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:37.564 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.564 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.564 [63/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:37.564 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.564 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:37.564 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:37.564 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:37.823 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:37.823 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:37.823 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:37.823 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:38.081 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.081 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.081 [74/233] Linking static target lib/librte_eal.a 00:02:38.081 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.081 [76/233] Linking static target lib/librte_ring.a 00:02:38.081 [77/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.081 [78/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.081 [79/233] Linking static target lib/librte_rcu.a 00:02:38.348 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.348 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.348 [82/233] Generating lib/log.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:38.348 [83/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.348 [84/233] Linking target lib/librte_log.so.24.1 00:02:38.348 [85/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.605 [86/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.605 [87/233] Linking target lib/librte_kvargs.so.24.1 00:02:38.605 [88/233] Linking target lib/librte_telemetry.so.24.1 00:02:38.605 [89/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.605 [90/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.605 [91/233] Linking static target lib/librte_mempool.a 00:02:38.605 [92/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:38.605 [93/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.605 [94/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.863 [95/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.863 [96/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.863 [97/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.863 [98/233] Linking static target lib/librte_mbuf.a 00:02:38.863 [99/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.863 [100/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.863 [101/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.863 [102/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.122 [103/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.122 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.122 [105/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.379 [106/233] Linking static target lib/librte_net.a 00:02:39.379 [107/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.379 [108/233] Linking static target lib/librte_meter.a 00:02:39.379 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.379 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.637 [111/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.637 [112/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.637 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.637 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.895 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.895 [116/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.895 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.153 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.153 [119/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.153 [120/233] Linking static target lib/librte_pci.a 00:02:40.153 [121/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.153 [122/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.411 [123/233] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.412 [124/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.412 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.412 [126/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.412 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.412 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.412 [129/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.412 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.412 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.412 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.412 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.412 [134/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.412 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.412 [136/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.412 [137/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:40.412 [138/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.412 [139/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.670 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.928 [141/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.928 [142/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:40.928 [143/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.928 [144/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.928 [145/233] Linking static target lib/librte_timer.a 00:02:41.187 [146/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.187 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.187 [148/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.187 [149/233] Linking static target lib/librte_ethdev.a 00:02:41.187 [150/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:41.187 [151/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.187 [152/233] Linking static target lib/librte_hash.a 00:02:41.187 [153/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.187 [154/233] Linking static target lib/librte_cmdline.a 00:02:41.445 [155/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.445 [156/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.445 [157/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.445 [158/233] Linking static target lib/librte_compressdev.a 00:02:41.703 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.703 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.703 [161/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.703 [162/233] Linking static target lib/librte_dmadev.a 00:02:41.703 [163/233] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.703 [164/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.961 [165/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.962 [166/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.962 [167/233] Linking static target lib/librte_reorder.a 00:02:42.220 [168/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.220 [169/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.220 [170/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.220 [171/233] Linking static target lib/librte_cryptodev.a 00:02:42.220 [172/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.220 [173/233] Linking static target lib/librte_security.a 00:02:42.220 [174/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.220 [175/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.220 [176/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.220 [177/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.220 [178/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.220 [179/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:42.220 [180/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:42.479 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.479 [182/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.479 [183/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.479 [184/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.479 [185/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.479 [186/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.479 [187/233] Linking static target drivers/librte_bus_pci.a 00:02:42.737 [188/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.737 [189/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.737 [190/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.737 [191/233] Linking static target drivers/librte_bus_vdev.a 00:02:42.737 [192/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.737 [193/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.996 [194/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.996 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.996 [196/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.996 [197/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.997 [198/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.997 [199/233] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.997 [200/233] Linking static target drivers/librte_mempool_ring.a 00:02:43.564 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:02:43.564 machine -> /usr/src/sys/amd64/include 00:02:43.564 x86 -> /usr/src/sys/x86/include 00:02:43.564 i386 -> /usr/src/sys/i386/include 00:02:43.564 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:02:43.564 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:02:43.564 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:02:43.564 touch opt_global.h 00:02:43.564 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:02:43.564 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:02:43.564 :> export_syms 00:02:43.564 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:02:43.564 objcopy --strip-debug contigmem.ko 00:02:43.843 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:02:43.844 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:02:43.844 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:02:43.844 :> export_syms 00:02:43.844 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:02:43.844 objcopy --strip-debug nic_uio.ko 00:02:46.415 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.949 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.949 [205/233] Linking target lib/librte_eal.so.24.1 00:02:48.949 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:48.949 [207/233] Linking target lib/librte_meter.so.24.1 00:02:48.949 [208/233] Linking target lib/librte_ring.so.24.1 00:02:48.949 [209/233] Linking target lib/librte_pci.so.24.1 00:02:48.949 [210/233] Linking target lib/librte_timer.so.24.1 00:02:48.949 [211/233] Linking target lib/librte_dmadev.so.24.1 00:02:48.949 [212/233] Linking target drivers/librte_bus_vdev.so.24.1 00:02:48.949 [213/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:48.949 [214/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:48.949 [215/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:48.949 [216/233] Linking target lib/librte_rcu.so.24.1 00:02:48.949 [217/233] Linking target lib/librte_mempool.so.24.1 00:02:48.949 [218/233] Linking target drivers/librte_bus_pci.so.24.1 00:02:49.208 [219/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:49.208 [220/233] Linking target lib/librte_mbuf.so.24.1 00:02:49.208 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:02:49.208 [222/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:49.208 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:49.467 [224/233] Linking target lib/librte_compressdev.so.24.1 00:02:49.467 [225/233] Linking target lib/librte_net.so.24.1 00:02:49.467 [226/233] Linking target lib/librte_reorder.so.24.1 00:02:49.467 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:02:49.467 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:49.467 [229/233] Linking target lib/librte_cmdline.so.24.1 00:02:49.467 [230/233] Linking target lib/librte_hash.so.24.1 00:02:49.467 [231/233] Linking target lib/librte_ethdev.so.24.1 
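The contigmem and nic_uio modules above are built outside of meson with the stock FreeBSD kmod toolchain: the awk passes over device_if.m, bus_if.m and pci_if.m generate the kobj method headers the sources include, then each module is compiled, partially linked, and stripped. Condensed to its essence (most flags dropped, so this is an illustrative sketch rather than the exact invocation logged above):

clang -O2 -D_KERNEL -DKLD_MODULE -nostdinc -I. -I/usr/src/sys \
      -mcmodel=kernel -mno-red-zone -c contigmem.c -o contigmem.o
ld -m elf_x86_64_fbsd -r -o contigmem.ko contigmem.o   # -r: partial link into the .ko
objcopy --strip-debug contigmem.ko                     # drop debug info from the module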
00:02:49.467 [232/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:49.467 [233/233] Linking target lib/librte_security.so.24.1 00:02:49.467 INFO: autodetecting backend as ninja 00:02:49.467 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:50.403 CC lib/ut_mock/mock.o 00:02:50.403 CC lib/log/log.o 00:02:50.403 CC lib/log/log_flags.o 00:02:50.403 CC lib/log/log_deprecated.o 00:02:50.403 CC lib/ut/ut.o 00:02:50.403 LIB libspdk_ut.a 00:02:50.403 LIB libspdk_ut_mock.a 00:02:50.403 LIB libspdk_log.a 00:02:50.661 CC lib/util/base64.o 00:02:50.661 CC lib/util/bit_array.o 00:02:50.661 CC lib/util/cpuset.o 00:02:50.661 CC lib/util/crc16.o 00:02:50.661 CC lib/ioat/ioat.o 00:02:50.661 CC lib/util/crc32.o 00:02:50.661 CC lib/util/crc32c.o 00:02:50.661 CC lib/util/crc32_ieee.o 00:02:50.661 CXX lib/trace_parser/trace.o 00:02:50.661 CC lib/dma/dma.o 00:02:50.661 CC lib/util/crc64.o 00:02:50.661 CC lib/util/dif.o 00:02:50.661 CC lib/util/fd.o 00:02:50.661 CC lib/util/file.o 00:02:50.661 CC lib/util/hexlify.o 00:02:50.661 CC lib/util/iov.o 00:02:50.661 CC lib/util/math.o 00:02:50.661 LIB libspdk_dma.a 00:02:50.661 CC lib/util/pipe.o 00:02:50.661 LIB libspdk_ioat.a 00:02:50.920 CC lib/util/strerror_tls.o 00:02:50.920 CC lib/util/string.o 00:02:50.920 CC lib/util/uuid.o 00:02:50.920 CC lib/util/fd_group.o 00:02:50.920 CC lib/util/xor.o 00:02:50.920 CC lib/util/zipf.o 00:02:50.920 LIB libspdk_util.a 00:02:51.185 CC lib/conf/conf.o 00:02:51.185 CC lib/json/json_parse.o 00:02:51.185 CC lib/json/json_util.o 00:02:51.185 CC lib/json/json_write.o 00:02:51.185 CC lib/idxd/idxd.o 00:02:51.185 CC lib/env_dpdk/env.o 00:02:51.185 CC lib/rdma_utils/rdma_utils.o 00:02:51.185 CC lib/rdma_provider/common.o 00:02:51.185 CC lib/vmd/vmd.o 00:02:51.185 LIB libspdk_conf.a 00:02:51.185 CC lib/env_dpdk/memory.o 00:02:51.185 LIB libspdk_rdma_utils.a 00:02:51.185 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.185 CC lib/idxd/idxd_user.o 00:02:51.185 CC lib/env_dpdk/pci.o 00:02:51.185 CC lib/env_dpdk/init.o 00:02:51.185 CC lib/env_dpdk/threads.o 00:02:51.444 LIB libspdk_json.a 00:02:51.444 CC lib/vmd/led.o 00:02:51.444 LIB libspdk_idxd.a 00:02:51.444 CC lib/env_dpdk/pci_ioat.o 00:02:51.444 CC lib/env_dpdk/pci_virtio.o 00:02:51.444 LIB libspdk_rdma_provider.a 00:02:51.444 CC lib/env_dpdk/pci_vmd.o 00:02:51.444 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.444 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.444 LIB libspdk_vmd.a 00:02:51.444 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.444 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.444 CC lib/env_dpdk/pci_idxd.o 00:02:51.444 CC lib/env_dpdk/pci_event.o 00:02:51.444 CC lib/env_dpdk/sigbus_handler.o 00:02:51.444 CC lib/env_dpdk/pci_dpdk.o 00:02:51.444 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.444 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.444 LIB libspdk_jsonrpc.a 00:02:51.703 CC lib/rpc/rpc.o 00:02:51.961 LIB libspdk_rpc.a 00:02:51.961 CC lib/notify/notify.o 00:02:51.961 CC lib/keyring/keyring.o 00:02:51.961 CC lib/notify/notify_rpc.o 00:02:51.961 CC lib/keyring/keyring_rpc.o 00:02:51.961 CC lib/trace/trace.o 00:02:51.961 CC lib/trace/trace_flags.o 00:02:51.961 CC lib/trace/trace_rpc.o 00:02:51.961 LIB libspdk_notify.a 00:02:51.961 LIB libspdk_keyring.a 00:02:52.219 LIB libspdk_trace.a 00:02:52.220 LIB libspdk_trace_parser.a 00:02:52.220 LIB libspdk_env_dpdk.a 00:02:52.220 CC lib/thread/thread.o 00:02:52.220 CC lib/thread/iobuf.o 00:02:52.220 CC lib/sock/sock.o 00:02:52.220 CC 
lib/sock/sock_rpc.o 00:02:52.477 LIB libspdk_sock.a 00:02:52.477 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.477 CC lib/nvme/nvme_ctrlr.o 00:02:52.477 CC lib/nvme/nvme_fabric.o 00:02:52.477 CC lib/nvme/nvme_ns_cmd.o 00:02:52.477 CC lib/nvme/nvme_ns.o 00:02:52.477 CC lib/nvme/nvme_pcie.o 00:02:52.477 CC lib/nvme/nvme_pcie_common.o 00:02:52.477 CC lib/nvme/nvme_qpair.o 00:02:52.477 CC lib/nvme/nvme.o 00:02:52.735 LIB libspdk_thread.a 00:02:52.735 CC lib/nvme/nvme_quirks.o 00:02:53.299 CC lib/accel/accel.o 00:02:53.299 CC lib/accel/accel_rpc.o 00:02:53.299 CC lib/accel/accel_sw.o 00:02:53.299 CC lib/nvme/nvme_transport.o 00:02:53.299 CC lib/blob/blobstore.o 00:02:53.299 CC lib/blob/request.o 00:02:53.299 CC lib/blob/zeroes.o 00:02:53.299 CC lib/init/json_config.o 00:02:53.299 CC lib/blob/blob_bs_dev.o 00:02:53.299 CC lib/init/subsystem.o 00:02:53.557 LIB libspdk_accel.a 00:02:53.557 CC lib/nvme/nvme_discovery.o 00:02:53.557 CC lib/init/subsystem_rpc.o 00:02:53.557 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:53.557 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:53.557 CC lib/nvme/nvme_tcp.o 00:02:53.557 CC lib/init/rpc.o 00:02:53.557 CC lib/bdev/bdev.o 00:02:53.557 CC lib/nvme/nvme_opal.o 00:02:53.557 LIB libspdk_init.a 00:02:53.557 CC lib/bdev/bdev_rpc.o 00:02:53.814 CC lib/nvme/nvme_io_msg.o 00:02:53.814 CC lib/event/app.o 00:02:53.814 CC lib/bdev/bdev_zone.o 00:02:54.101 CC lib/event/reactor.o 00:02:54.101 CC lib/bdev/part.o 00:02:54.101 CC lib/event/log_rpc.o 00:02:54.101 CC lib/nvme/nvme_poll_group.o 00:02:54.101 LIB libspdk_blob.a 00:02:54.101 CC lib/bdev/scsi_nvme.o 00:02:54.101 CC lib/nvme/nvme_zns.o 00:02:54.101 CC lib/event/app_rpc.o 00:02:54.418 CC lib/nvme/nvme_stubs.o 00:02:54.418 CC lib/blobfs/blobfs.o 00:02:54.418 LIB libspdk_bdev.a 00:02:54.418 CC lib/event/scheduler_static.o 00:02:54.418 CC lib/nvme/nvme_auth.o 00:02:54.418 CC lib/blobfs/tree.o 00:02:54.418 CC lib/lvol/lvol.o 00:02:54.418 LIB libspdk_event.a 00:02:54.418 CC lib/nvme/nvme_rdma.o 00:02:54.418 CC lib/scsi/dev.o 00:02:54.418 LIB libspdk_blobfs.a 00:02:54.418 CC lib/scsi/lun.o 00:02:54.418 CC lib/scsi/port.o 00:02:54.418 CC lib/scsi/scsi.o 00:02:54.676 LIB libspdk_lvol.a 00:02:54.676 CC lib/scsi/scsi_bdev.o 00:02:54.676 CC lib/scsi/scsi_pr.o 00:02:54.676 CC lib/scsi/scsi_rpc.o 00:02:54.676 CC lib/scsi/task.o 00:02:54.933 LIB libspdk_scsi.a 00:02:54.933 CC lib/iscsi/conn.o 00:02:54.933 CC lib/iscsi/init_grp.o 00:02:54.933 CC lib/iscsi/iscsi.o 00:02:54.933 CC lib/iscsi/md5.o 00:02:54.933 CC lib/iscsi/param.o 00:02:54.933 CC lib/iscsi/portal_grp.o 00:02:54.933 CC lib/iscsi/tgt_node.o 00:02:54.933 CC lib/iscsi/iscsi_subsystem.o 00:02:54.933 CC lib/iscsi/iscsi_rpc.o 00:02:54.933 CC lib/iscsi/task.o 00:02:55.190 LIB libspdk_nvme.a 00:02:55.448 LIB libspdk_iscsi.a 00:02:55.448 CC lib/nvmf/ctrlr.o 00:02:55.448 CC lib/nvmf/ctrlr_bdev.o 00:02:55.448 CC lib/nvmf/ctrlr_discovery.o 00:02:55.448 CC lib/nvmf/subsystem.o 00:02:55.448 CC lib/nvmf/nvmf.o 00:02:55.448 CC lib/nvmf/nvmf_rpc.o 00:02:55.448 CC lib/nvmf/transport.o 00:02:55.448 CC lib/nvmf/tcp.o 00:02:55.448 CC lib/nvmf/stubs.o 00:02:55.448 CC lib/nvmf/mdns_server.o 00:02:55.448 CC lib/nvmf/rdma.o 00:02:55.448 CC lib/nvmf/auth.o 00:02:56.015 LIB libspdk_nvmf.a 00:02:56.015 CC module/env_dpdk/env_dpdk_rpc.o 00:02:56.015 CC module/accel/error/accel_error.o 00:02:56.015 CC module/accel/error/accel_error_rpc.o 00:02:56.015 CC module/keyring/file/keyring.o 00:02:56.015 CC module/accel/ioat/accel_ioat.o 00:02:56.015 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:56.015 CC 
module/accel/iaa/accel_iaa.o 00:02:56.015 CC module/blob/bdev/blob_bdev.o 00:02:56.015 CC module/sock/posix/posix.o 00:02:56.015 CC module/accel/dsa/accel_dsa.o 00:02:56.015 LIB libspdk_env_dpdk_rpc.a 00:02:56.015 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.015 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.015 CC module/keyring/file/keyring_rpc.o 00:02:56.274 LIB libspdk_accel_error.a 00:02:56.274 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.274 LIB libspdk_scheduler_dynamic.a 00:02:56.274 LIB libspdk_blob_bdev.a 00:02:56.274 LIB libspdk_accel_ioat.a 00:02:56.274 LIB libspdk_keyring_file.a 00:02:56.274 LIB libspdk_accel_iaa.a 00:02:56.274 LIB libspdk_accel_dsa.a 00:02:56.274 CC module/bdev/null/bdev_null.o 00:02:56.274 CC module/bdev/delay/vbdev_delay.o 00:02:56.274 CC module/bdev/lvol/vbdev_lvol.o 00:02:56.274 CC module/bdev/gpt/gpt.o 00:02:56.274 CC module/bdev/error/vbdev_error.o 00:02:56.274 CC module/blobfs/bdev/blobfs_bdev.o 00:02:56.274 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.274 CC module/bdev/malloc/bdev_malloc.o 00:02:56.274 LIB libspdk_sock_posix.a 00:02:56.274 CC module/bdev/nvme/bdev_nvme.o 00:02:56.274 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.274 CC module/bdev/gpt/vbdev_gpt.o 00:02:56.532 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:56.532 CC module/bdev/null/bdev_null_rpc.o 00:02:56.532 CC module/bdev/error/vbdev_error_rpc.o 00:02:56.532 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:56.532 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:56.532 LIB libspdk_bdev_passthru.a 00:02:56.532 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.532 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:56.532 CC module/bdev/raid/bdev_raid.o 00:02:56.532 LIB libspdk_blobfs_bdev.a 00:02:56.532 LIB libspdk_bdev_error.a 00:02:56.532 LIB libspdk_bdev_null.a 00:02:56.532 LIB libspdk_bdev_gpt.a 00:02:56.532 CC module/bdev/nvme/nvme_rpc.o 00:02:56.532 LIB libspdk_bdev_malloc.a 00:02:56.532 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.532 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.532 LIB libspdk_bdev_delay.a 00:02:56.532 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.532 CC module/bdev/raid/raid0.o 00:02:56.532 CC module/bdev/raid/raid1.o 00:02:56.532 CC module/bdev/raid/concat.o 00:02:56.790 LIB libspdk_bdev_lvol.a 00:02:56.790 CC module/bdev/split/vbdev_split.o 00:02:56.790 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.790 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.790 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.790 CC module/bdev/aio/bdev_aio.o 00:02:56.790 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.790 LIB libspdk_bdev_raid.a 00:02:56.790 LIB libspdk_bdev_split.a 00:02:56.790 LIB libspdk_bdev_zone_block.a 00:02:56.790 LIB libspdk_bdev_aio.a 00:02:57.048 LIB libspdk_bdev_nvme.a 00:02:57.306 CC module/event/subsystems/sock/sock.o 00:02:57.306 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.306 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.306 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.306 CC module/event/subsystems/vmd/vmd.o 00:02:57.306 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.306 CC module/event/subsystems/keyring/keyring.o 00:02:57.306 LIB libspdk_event_keyring.a 00:02:57.306 LIB libspdk_event_scheduler.a 00:02:57.306 LIB libspdk_event_vmd.a 00:02:57.306 LIB libspdk_event_sock.a 00:02:57.306 LIB libspdk_event_iobuf.a 00:02:57.306 CC module/event/subsystems/accel/accel.o 00:02:57.564 LIB libspdk_event_accel.a 00:02:57.564 CC module/event/subsystems/bdev/bdev.o 00:02:57.823 LIB libspdk_event_bdev.a 00:02:57.823 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.823 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.823 CC module/event/subsystems/scsi/scsi.o 00:02:57.823 LIB libspdk_event_scsi.a 00:02:58.081 LIB libspdk_event_nvmf.a 00:02:58.081 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.081 LIB libspdk_event_iscsi.a 00:02:58.338 CC app/trace_record/trace_record.o 00:02:58.338 CXX app/trace/trace.o 00:02:58.338 CC app/spdk_lspci/spdk_lspci.o 00:02:58.338 CC app/spdk_nvme_perf/perf.o 00:02:58.338 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.338 CC examples/ioat/perf/perf.o 00:02:58.338 CC app/nvmf_tgt/nvmf_main.o 00:02:58.338 CC test/thread/poller_perf/poller_perf.o 00:02:58.338 CC app/spdk_tgt/spdk_tgt.o 00:02:58.338 CC examples/util/zipf/zipf.o 00:02:58.338 LINK spdk_lspci 00:02:58.338 LINK ioat_perf 00:02:58.338 LINK spdk_trace_record 00:02:58.338 LINK poller_perf 00:02:58.338 LINK zipf 00:02:58.338 CC test/thread/lock/spdk_lock.o 00:02:58.338 LINK nvmf_tgt 00:02:58.338 LINK iscsi_tgt 00:02:58.595 LINK spdk_tgt 00:02:58.595 CC examples/ioat/verify/verify.o 00:02:58.595 TEST_HEADER include/spdk/accel.h 00:02:58.595 TEST_HEADER include/spdk/accel_module.h 00:02:58.595 TEST_HEADER include/spdk/assert.h 00:02:58.595 TEST_HEADER include/spdk/barrier.h 00:02:58.595 TEST_HEADER include/spdk/base64.h 00:02:58.595 TEST_HEADER include/spdk/bdev.h 00:02:58.595 TEST_HEADER include/spdk/bdev_module.h 00:02:58.595 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.595 TEST_HEADER include/spdk/bit_array.h 00:02:58.595 TEST_HEADER include/spdk/bit_pool.h 00:02:58.595 TEST_HEADER include/spdk/blob.h 00:02:58.595 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.595 TEST_HEADER include/spdk/blobfs.h 00:02:58.595 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.595 TEST_HEADER include/spdk/conf.h 00:02:58.595 TEST_HEADER include/spdk/config.h 00:02:58.596 CC test/dma/test_dma/test_dma.o 00:02:58.596 TEST_HEADER include/spdk/cpuset.h 00:02:58.596 LINK spdk_nvme_perf 00:02:58.596 TEST_HEADER include/spdk/crc16.h 00:02:58.596 TEST_HEADER include/spdk/crc32.h 00:02:58.596 TEST_HEADER include/spdk/crc64.h 00:02:58.596 TEST_HEADER include/spdk/dif.h 00:02:58.596 TEST_HEADER include/spdk/dma.h 00:02:58.596 TEST_HEADER include/spdk/endian.h 00:02:58.596 TEST_HEADER include/spdk/env.h 00:02:58.596 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.596 TEST_HEADER include/spdk/event.h 00:02:58.596 TEST_HEADER include/spdk/fd.h 00:02:58.596 TEST_HEADER include/spdk/fd_group.h 00:02:58.596 TEST_HEADER include/spdk/file.h 00:02:58.596 TEST_HEADER include/spdk/ftl.h 00:02:58.596 LINK verify 00:02:58.596 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.596 TEST_HEADER include/spdk/hexlify.h 00:02:58.596 TEST_HEADER include/spdk/histogram_data.h 00:02:58.596 CC test/app/bdev_svc/bdev_svc.o 00:02:58.596 TEST_HEADER include/spdk/idxd.h 00:02:58.596 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.596 TEST_HEADER include/spdk/init.h 00:02:58.596 TEST_HEADER include/spdk/ioat.h 00:02:58.596 TEST_HEADER include/spdk/ioat_spec.h 00:02:58.596 TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.596 TEST_HEADER include/spdk/json.h 00:02:58.596 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.596 TEST_HEADER include/spdk/keyring.h 00:02:58.596 TEST_HEADER include/spdk/keyring_module.h 00:02:58.596 TEST_HEADER include/spdk/likely.h 00:02:58.596 TEST_HEADER include/spdk/log.h 00:02:58.596 TEST_HEADER include/spdk/lvol.h 00:02:58.596 TEST_HEADER include/spdk/memory.h 00:02:58.596 TEST_HEADER include/spdk/mmio.h 00:02:58.596 TEST_HEADER include/spdk/nbd.h 00:02:58.596 
TEST_HEADER include/spdk/notify.h 00:02:58.596 TEST_HEADER include/spdk/nvme.h 00:02:58.596 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.596 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.596 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.596 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.596 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.596 TEST_HEADER include/spdk/nvmf.h 00:02:58.596 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.596 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.596 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.596 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.596 TEST_HEADER include/spdk/opal.h 00:02:58.596 TEST_HEADER include/spdk/opal_spec.h 00:02:58.596 TEST_HEADER include/spdk/pci_ids.h 00:02:58.596 TEST_HEADER include/spdk/pipe.h 00:02:58.596 TEST_HEADER include/spdk/queue.h 00:02:58.596 TEST_HEADER include/spdk/reduce.h 00:02:58.596 TEST_HEADER include/spdk/rpc.h 00:02:58.596 TEST_HEADER include/spdk/scheduler.h 00:02:58.596 TEST_HEADER include/spdk/scsi.h 00:02:58.596 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.596 TEST_HEADER include/spdk/sock.h 00:02:58.596 TEST_HEADER include/spdk/stdinc.h 00:02:58.596 TEST_HEADER include/spdk/string.h 00:02:58.596 TEST_HEADER include/spdk/thread.h 00:02:58.596 TEST_HEADER include/spdk/trace.h 00:02:58.596 TEST_HEADER include/spdk/trace_parser.h 00:02:58.596 TEST_HEADER include/spdk/tree.h 00:02:58.596 TEST_HEADER include/spdk/ublk.h 00:02:58.596 TEST_HEADER include/spdk/util.h 00:02:58.596 TEST_HEADER include/spdk/uuid.h 00:02:58.596 TEST_HEADER include/spdk/version.h 00:02:58.596 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.596 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.596 TEST_HEADER include/spdk/vhost.h 00:02:58.596 TEST_HEADER include/spdk/vmd.h 00:02:58.596 TEST_HEADER include/spdk/xor.h 00:02:58.596 TEST_HEADER include/spdk/zipf.h 00:02:58.596 CXX test/cpp_headers/accel.o 00:02:58.596 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.596 CC test/env/vtophys/vtophys.o 00:02:58.596 CC examples/thread/thread/thread_ex.o 00:02:58.596 CC examples/sock/hello_world/hello_sock.o 00:02:58.596 LINK bdev_svc 00:02:58.596 CC test/rpc_client/rpc_client_test.o 00:02:58.853 LINK test_dma 00:02:58.853 LINK vtophys 00:02:58.853 LINK rpc_client_test 00:02:58.853 LINK spdk_lock 00:02:58.853 LINK hello_sock 00:02:58.853 LINK thread 00:02:58.853 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.853 CXX test/cpp_headers/accel_module.o 00:02:58.853 CC app/spdk_nvme_identify/identify.o 00:02:58.853 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.853 CC test/app/histogram_perf/histogram_perf.o 00:02:58.853 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.853 CC test/app/jsoncat/jsoncat.o 00:02:59.111 LINK histogram_perf 00:02:59.111 LINK nvme_fuzz 00:02:59.111 CXX test/cpp_headers/assert.o 00:02:59.111 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:59.111 LINK lsvmd 00:02:59.111 LINK jsoncat 00:02:59.111 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.111 LINK histogram_ut 00:02:59.111 CC examples/vmd/led/led.o 00:02:59.111 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:59.111 CC examples/idxd/perf/perf.o 00:02:59.111 CXX test/cpp_headers/barrier.o 00:02:59.111 LINK spdk_nvme_identify 00:02:59.111 LINK spdk_nvme_discover 00:02:59.111 LINK led 00:02:59.111 LINK spdk_trace 00:02:59.111 LINK mem_callbacks 00:02:59.369 CC test/unit/lib/log/log.c/log_ut.o 00:02:59.369 CXX test/cpp_headers/base64.o 00:02:59.369 LINK env_dpdk_post_init 00:02:59.369 LINK idxd_perf 00:02:59.369 CC 
test/env/memory/memory_ut.o 00:02:59.369 CC app/spdk_top/spdk_top.o 00:02:59.369 CC test/env/pci/pci_ut.o 00:02:59.369 CXX test/cpp_headers/bdev.o 00:02:59.369 LINK iscsi_fuzz 00:02:59.369 CC test/nvme/aer/aer.o 00:02:59.369 LINK log_ut 00:02:59.369 CC examples/accel/perf/accel_perf.o 00:02:59.369 CXX test/cpp_headers/bdev_module.o 00:02:59.369 CC test/app/stub/stub.o 00:02:59.369 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:59.369 CC test/nvme/reset/reset.o 00:02:59.369 LINK pci_ut 00:02:59.369 LINK aer 00:02:59.628 LINK stub 00:02:59.628 LINK accel_perf 00:02:59.628 CC test/nvme/sgl/sgl.o 00:02:59.628 LINK spdk_top 00:02:59.628 LINK reset 00:02:59.628 CC app/fio/nvme/fio_plugin.o 00:02:59.628 CXX test/cpp_headers/bdev_zone.o 00:02:59.628 CC examples/blob/hello_world/hello_blob.o 00:02:59.628 LINK sgl 00:02:59.628 CC examples/blob/cli/blobcli.o 00:02:59.628 CC examples/nvme/hello_world/hello_world.o 00:02:59.628 LINK common_ut 00:02:59.628 CC test/accel/dif/dif.o 00:02:59.887 CC test/nvme/e2edp/nvme_dp.o 00:02:59.887 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:59.887 struct spdk_nvme_fdp_ruhs ruhs; 00:02:59.887 ^ 00:02:59.887 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.887 LINK hello_blob 00:02:59.887 LINK hello_world 00:02:59.887 CXX test/cpp_headers/bit_array.o 00:02:59.887 LINK blobcli 00:02:59.887 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:59.887 1 warning generated. 00:02:59.887 LINK spdk_nvme 00:02:59.887 LINK nvme_dp 00:02:59.887 LINK dif 00:02:59.887 LINK hello_bdev 00:02:59.887 CC examples/nvme/reconnect/reconnect.o 00:02:59.887 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:59.887 CXX test/cpp_headers/bit_pool.o 00:02:59.887 LINK memory_ut 00:02:59.887 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.147 LINK base64_ut 00:03:00.147 CC test/nvme/overhead/overhead.o 00:03:00.147 CC app/fio/bdev/fio_plugin.o 00:03:00.147 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.147 LINK reconnect 00:03:00.147 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:00.147 CC test/blobfs/mkfs/mkfs.o 00:03:00.147 LINK overhead 00:03:00.147 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:00.147 CXX test/cpp_headers/blob.o 00:03:00.147 LINK cpuset_ut 00:03:00.147 LINK bit_array_ut 00:03:00.147 CC test/nvme/err_injection/err_injection.o 00:03:00.147 LINK nvme_manage 00:03:00.147 LINK mkfs 00:03:00.147 CXX test/cpp_headers/blob_bdev.o 00:03:00.424 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:00.424 CC test/event/event_perf/event_perf.o 00:03:00.424 LINK bdevperf 00:03:00.424 CC examples/nvme/arbitration/arbitration.o 00:03:00.424 LINK err_injection 00:03:00.424 LINK spdk_bdev 00:03:00.424 LINK crc16_ut 00:03:00.424 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:00.424 LINK event_perf 00:03:00.424 CC examples/nvme/hotplug/hotplug.o 00:03:00.424 LINK dma_ut 00:03:00.424 CXX test/cpp_headers/blobfs.o 00:03:00.424 gmake[2]: Nothing to be done for 'all'. 
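The fio_plugin.c warning above is clang's -Wgnu-variable-sized-type-not-at-end: struct spdk_nvme_fdp_ruhs ends in a flexible array member, so embedding it anywhere other than the tail of an enclosing object is a GNU extension. It is only a warning, so the build proceeds. A minimal reproduction with made-up type names (not SPDK code):

cat > fam_repro.c <<'EOF'
struct ruhs_like { int count; int ids[]; };          /* ends in a flexible array member */
struct wrapper { struct ruhs_like ruhs; int tail; }; /* ruhs not last -> the warning */
EOF
clang -c -Wgnu-variable-sized-type-not-at-end fam_repro.c -o /dev/null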
00:03:00.424 CC test/nvme/startup/startup.o 00:03:00.425 LINK crc32_ieee_ut 00:03:00.425 LINK arbitration 00:03:00.425 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:00.425 CC test/event/reactor/reactor.o 00:03:00.425 CC test/bdev/bdevio/bdevio.o 00:03:00.425 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.690 CC test/event/reactor_perf/reactor_perf.o 00:03:00.690 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:00.690 LINK hotplug 00:03:00.690 LINK startup 00:03:00.690 LINK crc32c_ut 00:03:00.690 CC test/nvme/reserve/reserve.o 00:03:00.690 LINK reactor 00:03:00.690 LINK reactor_perf 00:03:00.690 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.690 LINK cmb_copy 00:03:00.690 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:00.690 CC examples/nvme/abort/abort.o 00:03:00.690 CXX test/cpp_headers/conf.o 00:03:00.690 CC test/nvme/simple_copy/simple_copy.o 00:03:00.690 LINK reserve 00:03:00.690 LINK crc64_ut 00:03:00.690 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.690 CC test/nvme/connect_stress/connect_stress.o 00:03:00.690 CXX test/cpp_headers/config.o 00:03:00.690 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:00.690 LINK bdevio 00:03:00.690 LINK ioat_ut 00:03:00.690 LINK simple_copy 00:03:00.690 LINK abort 00:03:00.690 CXX test/cpp_headers/cpuset.o 00:03:00.948 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:00.948 LINK connect_stress 00:03:00.948 CC test/nvme/boot_partition/boot_partition.o 00:03:00.948 LINK pmr_persistence 00:03:00.948 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:00.948 CC test/nvme/compliance/nvme_compliance.o 00:03:00.948 LINK iov_ut 00:03:00.948 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.948 CC test/unit/lib/util/math.c/math_ut.o 00:03:00.948 CXX test/cpp_headers/crc16.o 00:03:00.948 CXX test/cpp_headers/crc32.o 00:03:00.948 LINK boot_partition 00:03:00.948 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.948 LINK math_ut 00:03:00.948 LINK fused_ordering 00:03:01.206 CC examples/nvmf/nvmf/nvmf.o 00:03:01.206 LINK doorbell_aers 00:03:01.206 CC test/unit/lib/util/string.c/string_ut.o 00:03:01.206 LINK nvme_compliance 00:03:01.206 CC test/nvme/fdp/fdp.o 00:03:01.206 CXX test/cpp_headers/crc64.o 00:03:01.206 LINK dif_ut 00:03:01.206 CXX test/cpp_headers/dif.o 00:03:01.206 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:01.206 CXX test/cpp_headers/dma.o 00:03:01.206 CXX test/cpp_headers/endian.o 00:03:01.206 LINK pipe_ut 00:03:01.206 CXX test/cpp_headers/env.o 00:03:01.206 CXX test/cpp_headers/env_dpdk.o 00:03:01.206 LINK string_ut 00:03:01.206 CXX test/cpp_headers/event.o 00:03:01.206 CXX test/cpp_headers/fd.o 00:03:01.206 LINK nvmf 00:03:01.206 LINK xor_ut 00:03:01.464 LINK fdp 00:03:01.464 CXX test/cpp_headers/fd_group.o 00:03:01.464 CXX test/cpp_headers/file.o 00:03:01.464 CXX test/cpp_headers/ftl.o 00:03:01.464 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:01.464 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:01.464 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:01.464 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:01.464 CXX test/cpp_headers/gpt_spec.o 00:03:01.464 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:01.464 CXX test/cpp_headers/hexlify.o 00:03:01.464 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:01.464 CXX test/cpp_headers/histogram_data.o 00:03:01.722 CXX test/cpp_headers/idxd.o 00:03:01.722 CXX test/cpp_headers/idxd_spec.o 00:03:01.722 CXX test/cpp_headers/init.o 00:03:01.722 LINK idxd_user_ut 00:03:01.722 LINK pci_event_ut 00:03:01.722 CXX test/cpp_headers/ioat.o 00:03:01.722 LINK json_util_ut 
00:03:01.722 CXX test/cpp_headers/ioat_spec.o 00:03:01.722 CXX test/cpp_headers/iscsi_spec.o 00:03:01.722 CXX test/cpp_headers/json.o 00:03:01.722 CXX test/cpp_headers/jsonrpc.o 00:03:01.722 CXX test/cpp_headers/keyring.o 00:03:01.722 LINK idxd_ut 00:03:01.722 CXX test/cpp_headers/keyring_module.o 00:03:01.980 CXX test/cpp_headers/likely.o 00:03:01.980 LINK json_write_ut 00:03:01.980 CXX test/cpp_headers/log.o 00:03:01.980 CXX test/cpp_headers/lvol.o 00:03:01.980 CXX test/cpp_headers/memory.o 00:03:01.980 CXX test/cpp_headers/mmio.o 00:03:01.980 CXX test/cpp_headers/nbd.o 00:03:01.980 CXX test/cpp_headers/notify.o 00:03:01.980 CXX test/cpp_headers/nvme.o 00:03:01.980 CXX test/cpp_headers/nvme_intel.o 00:03:01.980 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.980 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.980 CXX test/cpp_headers/nvme_spec.o 00:03:01.980 CXX test/cpp_headers/nvme_zns.o 00:03:01.980 CXX test/cpp_headers/nvmf.o 00:03:02.238 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.238 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:02.238 CXX test/cpp_headers/nvmf_spec.o 00:03:02.238 CXX test/cpp_headers/nvmf_transport.o 00:03:02.238 CXX test/cpp_headers/opal.o 00:03:02.238 CXX test/cpp_headers/opal_spec.o 00:03:02.238 CXX test/cpp_headers/pci_ids.o 00:03:02.238 CXX test/cpp_headers/pipe.o 00:03:02.238 CXX test/cpp_headers/queue.o 00:03:02.238 CXX test/cpp_headers/reduce.o 00:03:02.238 LINK json_parse_ut 00:03:02.238 CXX test/cpp_headers/rpc.o 00:03:02.238 CXX test/cpp_headers/scheduler.o 00:03:02.497 CXX test/cpp_headers/scsi.o 00:03:02.497 CXX test/cpp_headers/scsi_spec.o 00:03:02.497 CXX test/cpp_headers/sock.o 00:03:02.497 CXX test/cpp_headers/stdinc.o 00:03:02.497 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:02.497 CXX test/cpp_headers/string.o 00:03:02.497 CXX test/cpp_headers/thread.o 00:03:02.497 CXX test/cpp_headers/trace.o 00:03:02.497 CXX test/cpp_headers/trace_parser.o 00:03:02.497 CXX test/cpp_headers/tree.o 00:03:02.497 CXX test/cpp_headers/ublk.o 00:03:02.497 CXX test/cpp_headers/util.o 00:03:02.497 CXX test/cpp_headers/uuid.o 00:03:02.497 CXX test/cpp_headers/version.o 00:03:02.497 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.497 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.755 CXX test/cpp_headers/vhost.o 00:03:02.755 CXX test/cpp_headers/vmd.o 00:03:02.755 LINK jsonrpc_server_ut 00:03:02.755 CXX test/cpp_headers/xor.o 00:03:02.755 CXX test/cpp_headers/zipf.o 00:03:02.755 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:03.014 LINK rpc_ut 00:03:03.273 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:03.273 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:03.273 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:03.273 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:03.273 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:03.273 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:03.531 LINK keyring_ut 00:03:03.531 LINK notify_ut 00:03:03.531 LINK posix_ut 00:03:03.531 LINK iobuf_ut 00:03:03.789 LINK thread_ut 00:03:03.789 LINK sock_ut 00:03:03.789 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:03.789 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:03.789 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:03.789 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:03.789 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:04.046 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:04.046 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:04.046 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:04.046 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 
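The CXX test/cpp_headers/*.o steps above compile one tiny C++ translation unit per public header, catching headers that are not self-contained or not C++-clean. The check reduces to something like the following sketch (the generated build rules differ in detail):

for h in include/spdk/*.h; do
    printf '#include "spdk/%s"\n' "${h##*/}" > check.cpp
    c++ -I include -c check.cpp -o /dev/null || echo "not self-contained: $h"
done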
00:03:04.046 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:04.046 LINK subsystem_ut 00:03:04.046 LINK rpc_ut 00:03:04.046 LINK blob_bdev_ut 00:03:04.305 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:04.305 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:04.305 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:04.563 LINK nvme_ns_ut 00:03:04.846 LINK nvme_ctrlr_cmd_ut 00:03:04.846 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:04.846 CC test/unit/lib/event/app.c/app_ut.o 00:03:04.846 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:04.846 LINK accel_ut 00:03:04.846 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:04.846 LINK nvme_ut 00:03:05.105 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:05.105 LINK app_ut 00:03:05.105 LINK nvme_ns_ocssd_cmd_ut 00:03:05.105 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:05.105 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:05.105 LINK nvme_ctrlr_ut 00:03:05.105 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:05.105 LINK nvme_pcie_ut 00:03:05.105 LINK reactor_ut 00:03:05.363 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:05.363 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:05.363 LINK scsi_nvme_ut 00:03:05.363 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:05.363 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:05.363 LINK nvme_ns_cmd_ut 00:03:05.622 LINK gpt_ut 00:03:05.622 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:05.622 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:05.622 LINK nvme_poll_group_ut 00:03:05.880 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:05.880 LINK blob_ut 00:03:05.880 LINK nvme_quirks_ut 00:03:05.880 LINK nvme_qpair_ut 00:03:05.880 LINK vbdev_lvol_ut 00:03:05.880 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:05.880 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:05.880 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:05.880 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:05.880 LINK part_ut 00:03:06.139 LINK bdev_zone_ut 00:03:06.139 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:06.139 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:06.139 LINK vbdev_zone_block_ut 00:03:06.398 LINK bdev_raid_ut 00:03:06.398 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:06.398 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:06.398 LINK bdev_raid_sb_ut 00:03:06.398 LINK nvme_transport_ut 00:03:06.398 LINK bdev_ut 00:03:06.656 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:06.656 LINK bdev_ut 00:03:06.656 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:06.656 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:06.656 LINK tree_ut 00:03:06.656 LINK nvme_tcp_ut 00:03:06.656 LINK concat_ut 00:03:06.656 LINK nvme_io_msg_ut 00:03:06.656 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:06.656 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:06.656 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:06.656 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:06.656 LINK nvme_opal_ut 00:03:06.656 LINK nvme_pcie_common_ut 00:03:06.656 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:06.915 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:06.915 LINK blobfs_bdev_ut 00:03:06.915 LINK nvme_fabric_ut 00:03:06.915 LINK raid1_ut 00:03:07.226 LINK raid0_ut 00:03:07.226 LINK blobfs_sync_ut 00:03:07.226 LINK blobfs_async_ut 00:03:07.226 LINK lvol_ut 
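Every *_ut target linked above is a standalone CUnit binary, so one suite can be rerun in isolation after a failure; the path layout below is an assumption about this tree, shown for illustration:

./test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut   # rerun a single suite directly
./test/unit/unittest.sh                           # or the full sweep, as autotest does below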
00:03:08.164 LINK bdev_nvme_ut 00:03:08.164 LINK nvme_rdma_ut 00:03:08.164 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:08.164 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:08.164 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:08.164 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:08.164 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:08.164 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:08.164 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:08.164 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:08.164 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:08.164 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:08.164 LINK scsi_ut 00:03:08.422 LINK dev_ut 00:03:08.422 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:08.422 LINK scsi_pr_ut 00:03:08.422 LINK scsi_bdev_ut 00:03:08.422 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:08.422 LINK lun_ut 00:03:08.422 LINK ctrlr_bdev_ut 00:03:08.681 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:08.681 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:08.681 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:08.681 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:08.681 LINK ctrlr_discovery_ut 00:03:08.681 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:08.940 LINK nvmf_ut 00:03:08.940 LINK init_grp_ut 00:03:08.940 LINK subsystem_ut 00:03:08.940 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:08.940 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:09.198 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:09.198 LINK auth_ut 00:03:09.198 LINK ctrlr_ut 00:03:09.198 LINK conn_ut 00:03:09.198 LINK param_ut 00:03:09.456 LINK rdma_ut 00:03:09.456 LINK portal_grp_ut 00:03:09.456 LINK tcp_ut 00:03:09.457 LINK tgt_node_ut 00:03:09.457 LINK transport_ut 00:03:09.794 LINK iscsi_ut 00:03:09.794 00:03:09.794 real 1m6.081s 00:03:09.794 user 4m58.734s 00:03:09.794 sys 0m47.980s 00:03:09.794 21:39:24 unittest_build -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:03:09.794 21:39:24 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:09.794 ************************************ 00:03:09.794 END TEST unittest_build 00:03:09.794 ************************************ 00:03:09.794 21:39:24 -- common/autotest_common.sh@1136 -- $ return 0 00:03:09.794 21:39:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.794 21:39:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.794 21:39:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.794 21:39:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.794 21:39:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.794 21:39:24 -- pm/common@44 -- $ pid=1274 00:03:09.794 21:39:24 -- pm/common@50 -- $ kill -TERM 1274 00:03:10.103 21:39:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.103 21:39:25 -- nvmf/common.sh@7 -- # uname -s 00:03:10.103 21:39:25 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:10.103 21:39:25 -- nvmf/common.sh@7 -- # return 0 00:03:10.103 21:39:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.103 21:39:25 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.103 21:39:25 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:10.103 21:39:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.103 21:39:25 -- pm/common@17 -- # local monitor 00:03:10.103 21:39:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.103 21:39:25 -- pm/common@25 -- # sleep 1 00:03:10.103 
21:39:25 -- pm/common@21 -- # date +%s 00:03:10.103 21:39:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721079565 00:03:10.103 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721079565_collect-vmstat.pm.log 00:03:11.036 21:39:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.036 21:39:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.036 21:39:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:11.036 21:39:26 -- common/autotest_common.sh@10 -- # set +x 00:03:11.036 21:39:26 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.036 21:39:26 -- common/autotest_common.sh@740 -- # xtrace_disable 00:03:11.036 21:39:26 -- common/autotest_common.sh@10 -- # set +x 00:03:11.036 21:39:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:11.036 21:39:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:11.036 21:39:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:11.036 21:39:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:11.036 21:39:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:11.036 21:39:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.036 21:39:26 -- common/autotest_common.sh@1449 -- # uname 00:03:11.036 21:39:26 -- common/autotest_common.sh@1449 -- # '[' FreeBSD = FreeBSD ']' 00:03:11.036 21:39:26 -- common/autotest_common.sh@1450 -- # kldunload contigmem.ko 00:03:11.036 kldunload: can't find file contigmem.ko 00:03:11.036 21:39:26 -- common/autotest_common.sh@1450 -- # true 00:03:11.036 21:39:26 -- common/autotest_common.sh@1451 -- # '[' -n '' ']' 00:03:11.036 21:39:26 -- common/autotest_common.sh@1457 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:11.036 21:39:26 -- common/autotest_common.sh@1458 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:11.036 21:39:26 -- common/autotest_common.sh@1459 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:11.036 21:39:26 -- common/autotest_common.sh@1460 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:11.036 21:39:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.036 21:39:26 -- common/autotest_common.sh@1469 -- # uname 00:03:11.036 21:39:26 -- common/autotest_common.sh@1469 -- # [[ FreeBSD = FreeBSD ]] 00:03:11.036 21:39:26 -- common/autotest_common.sh@1469 -- # sysctl -n kern.ipc.maxsockbuf 00:03:11.036 21:39:26 -- common/autotest_common.sh@1469 -- # (( 2097152 < 4194304 )) 00:03:11.036 21:39:26 -- common/autotest_common.sh@1470 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:11.036 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:11.036 21:39:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:11.036 21:39:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:11.036 21:39:26 -- spdk/autotest.sh@72 -- # hash lcov 00:03:11.036 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:11.036 21:39:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.036 21:39:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:11.036 21:39:26 -- common/autotest_common.sh@10 -- # set +x 00:03:11.036 21:39:26 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.036 21:39:26 -- spdk/autotest.sh@94 -- 
# /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:11.295 kldunload: can't find file contigmem.ko 00:03:11.295 kldunload: can't find file nic_uio.ko 00:03:11.295 21:39:26 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:11.295 21:39:26 -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:11.295 21:39:26 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:11.295 21:39:26 -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:11.295 21:39:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:11.295 21:39:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.295 21:39:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:11.295 21:39:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:11.295 21:39:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:11.295 21:39:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:11.295 nvme0ns1 is not a block device 00:03:11.295 21:39:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:11.295 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:11.295 21:39:26 -- scripts/common.sh@391 -- # pt= 00:03:11.295 21:39:26 -- scripts/common.sh@392 -- # return 1 00:03:11.295 21:39:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:11.295 1+0 records in 00:03:11.295 1+0 records out 00:03:11.295 1048576 bytes transferred in 0.007041 secs (148926839 bytes/sec) 00:03:11.295 21:39:26 -- spdk/autotest.sh@118 -- # sync 00:03:11.911 21:39:26 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.911 21:39:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.911 21:39:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:12.846 21:39:27 -- spdk/autotest.sh@124 -- # uname -s 00:03:12.846 21:39:27 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:12.846 21:39:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:12.846 Contigmem (not present) 00:03:12.846 Buffer Size: not set 00:03:12.846 Num Buffers: not set 00:03:12.846 00:03:12.846 00:03:12.846 Type BDF Vendor Device Driver 00:03:12.846 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:03:12.846 21:39:27 -- spdk/autotest.sh@130 -- # uname -s 00:03:12.846 21:39:27 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:03:12.846 21:39:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.846 21:39:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.846 21:39:27 -- common/autotest_common.sh@10 -- # set +x 00:03:12.846 21:39:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.846 21:39:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:12.846 21:39:27 -- common/autotest_common.sh@10 -- # set +x 00:03:12.846 21:39:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:13.104 kldunload: can't find file nic_uio.ko 00:03:13.104 hw.nic_uio.bdfs="0:16:0" 00:03:13.104 hw.contigmem.num_buffers="8" 00:03:13.104 hw.contigmem.buffer_size="268435456" 00:03:14.040 21:39:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:14.040 21:39:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:14.040 21:39:28 -- common/autotest_common.sh@10 -- # set +x 00:03:14.040 21:39:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:14.040 21:39:28 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:03:14.040 21:39:28 -- 
common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.040 21:39:28 -- common/autotest_common.sh@1571 -- # bdfs=() 00:03:14.040 21:39:28 -- common/autotest_common.sh@1571 -- # local bdfs 00:03:14.040 21:39:28 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:03:14.040 21:39:28 -- common/autotest_common.sh@1507 -- # bdfs=() 00:03:14.040 21:39:28 -- common/autotest_common.sh@1507 -- # local bdfs 00:03:14.040 21:39:28 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.040 21:39:28 -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:14.040 21:39:28 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:03:14.040 21:39:28 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:03:14.040 21:39:28 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:00:10.0 00:03:14.040 21:39:28 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:03:14.040 21:39:28 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:14.040 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:14.040 21:39:28 -- common/autotest_common.sh@1574 -- # device= 00:03:14.040 21:39:28 -- common/autotest_common.sh@1574 -- # true 00:03:14.040 21:39:28 -- common/autotest_common.sh@1575 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:14.040 21:39:28 -- common/autotest_common.sh@1580 -- # printf '%s\n' 00:03:14.040 21:39:28 -- common/autotest_common.sh@1586 -- # [[ -z '' ]] 00:03:14.040 21:39:28 -- common/autotest_common.sh@1587 -- # return 0 00:03:14.040 21:39:28 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:14.040 21:39:28 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:14.040 21:39:28 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:14.040 21:39:28 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:14.040 21:39:28 -- common/autotest_common.sh@10 -- # set +x 00:03:14.040 ************************************ 00:03:14.040 START TEST unittest 00:03:14.040 ************************************ 00:03:14.040 21:39:28 unittest -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:14.040 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:14.040 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:03:14.040 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:14.040 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:14.040 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
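run_test, from test/common/autotest_common.sh, produced the START TEST banner just above: it prints the banners, runs the wrapped command, and does the timing and status bookkeeping. A simplified sketch of its shape (the real helper also manages xtrace and per-test timing):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"; local rc=$?
    echo "END TEST $name"
    return $rc
}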
00:03:14.040 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:14.040 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:14.040 ++ rpc_py=rpc_cmd 00:03:14.040 ++ set -e 00:03:14.040 ++ shopt -s nullglob 00:03:14.040 ++ shopt -s extglob 00:03:14.040 ++ shopt -s inherit_errexit 00:03:14.040 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:14.040 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:14.040 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:14.040 +++ CONFIG_WPDK_DIR= 00:03:14.040 +++ CONFIG_ASAN=n 00:03:14.040 +++ CONFIG_VBDEV_COMPRESS=n 00:03:14.040 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:14.040 +++ CONFIG_USDT=n 00:03:14.040 +++ CONFIG_CUSTOMOCF=n 00:03:14.040 +++ CONFIG_PREFIX=/usr/local 00:03:14.040 +++ CONFIG_RBD=n 00:03:14.040 +++ CONFIG_LIBDIR= 00:03:14.040 +++ CONFIG_IDXD=y 00:03:14.040 +++ CONFIG_NVME_CUSE=n 00:03:14.040 +++ CONFIG_SMA=n 00:03:14.040 +++ CONFIG_VTUNE=n 00:03:14.040 +++ CONFIG_TSAN=n 00:03:14.040 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:14.040 +++ CONFIG_VFIO_USER_DIR= 00:03:14.040 +++ CONFIG_PGO_CAPTURE=n 00:03:14.040 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:14.040 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.040 +++ CONFIG_LTO=n 00:03:14.040 +++ CONFIG_ISCSI_INITIATOR=n 00:03:14.040 +++ CONFIG_CET=n 00:03:14.040 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:14.040 +++ CONFIG_OCF_PATH= 00:03:14.040 +++ CONFIG_RDMA_SET_TOS=y 00:03:14.040 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:14.040 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:14.040 +++ CONFIG_UBLK=n 00:03:14.040 +++ CONFIG_ISAL_CRYPTO=y 00:03:14.040 +++ CONFIG_OPENSSL_PATH= 00:03:14.040 +++ CONFIG_OCF=n 00:03:14.040 +++ CONFIG_FUSE=n 00:03:14.040 +++ CONFIG_VTUNE_DIR= 00:03:14.040 +++ CONFIG_FUZZER_LIB= 00:03:14.040 +++ CONFIG_FUZZER=n 00:03:14.040 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:14.040 +++ CONFIG_CRYPTO=n 00:03:14.040 +++ CONFIG_PGO_USE=n 00:03:14.040 +++ CONFIG_VHOST=n 00:03:14.040 +++ CONFIG_DAOS=n 00:03:14.040 +++ CONFIG_DPDK_INC_DIR= 00:03:14.040 +++ CONFIG_DAOS_DIR= 00:03:14.040 +++ CONFIG_UNIT_TESTS=y 00:03:14.040 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:14.040 +++ CONFIG_VIRTIO=n 00:03:14.040 +++ CONFIG_DPDK_UADK=n 00:03:14.040 +++ CONFIG_COVERAGE=n 00:03:14.040 +++ CONFIG_RDMA=y 00:03:14.040 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:14.040 +++ CONFIG_URING_PATH= 00:03:14.040 +++ CONFIG_XNVME=n 00:03:14.040 +++ CONFIG_VFIO_USER=n 00:03:14.040 +++ CONFIG_ARCH=native 00:03:14.040 +++ CONFIG_HAVE_EVP_MAC=y 00:03:14.040 +++ CONFIG_URING_ZNS=n 00:03:14.040 +++ CONFIG_WERROR=y 00:03:14.040 +++ CONFIG_HAVE_LIBBSD=n 00:03:14.040 +++ CONFIG_UBSAN=n 00:03:14.040 +++ CONFIG_IPSEC_MB_DIR= 00:03:14.040 +++ CONFIG_GOLANG=n 00:03:14.040 +++ CONFIG_ISAL=y 00:03:14.040 +++ CONFIG_IDXD_KERNEL=n 00:03:14.040 +++ CONFIG_DPDK_LIB_DIR= 00:03:14.040 +++ CONFIG_RDMA_PROV=verbs 00:03:14.040 +++ CONFIG_APPS=y 00:03:14.040 +++ CONFIG_SHARED=n 00:03:14.040 +++ CONFIG_HAVE_KEYUTILS=n 00:03:14.040 +++ CONFIG_FC_PATH= 00:03:14.040 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:14.040 +++ CONFIG_FC=n 00:03:14.040 +++ CONFIG_AVAHI=n 00:03:14.040 +++ CONFIG_FIO_PLUGIN=y 00:03:14.040 +++ CONFIG_RAID5F=n 00:03:14.040 +++ CONFIG_EXAMPLES=y 00:03:14.040 +++ CONFIG_TESTS=y 00:03:14.040 +++ CONFIG_CRYPTO_MLX5=n 00:03:14.040 +++ CONFIG_MAX_LCORES=128 00:03:14.040 +++ CONFIG_IPSEC_MB=n 00:03:14.040 +++ CONFIG_PGO_DIR= 00:03:14.040 +++ CONFIG_DEBUG=y 00:03:14.040 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:14.040 +++ CONFIG_CROSS_PREFIX= 00:03:14.040 
+++ CONFIG_URING=n 00:03:14.040 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:14.040 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:14.040 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:14.040 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:14.040 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:14.040 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:14.040 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:14.040 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:14.040 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:14.040 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:14.040 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:14.040 +++ VHOST_APP=("$_app_dir/vhost") 00:03:14.040 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:14.040 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:14.040 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:14.041 +++ [[ #ifndef SPDK_CONFIG_H 00:03:14.041 #define SPDK_CONFIG_H 00:03:14.041 #define SPDK_CONFIG_APPS 1 00:03:14.041 #define SPDK_CONFIG_ARCH native 00:03:14.041 #undef SPDK_CONFIG_ASAN 00:03:14.041 #undef SPDK_CONFIG_AVAHI 00:03:14.041 #undef SPDK_CONFIG_CET 00:03:14.041 #undef SPDK_CONFIG_COVERAGE 00:03:14.041 #define SPDK_CONFIG_CROSS_PREFIX 00:03:14.041 #undef SPDK_CONFIG_CRYPTO 00:03:14.041 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:14.041 #undef SPDK_CONFIG_CUSTOMOCF 00:03:14.041 #undef SPDK_CONFIG_DAOS 00:03:14.041 #define SPDK_CONFIG_DAOS_DIR 00:03:14.041 #define SPDK_CONFIG_DEBUG 1 00:03:14.041 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:14.041 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:14.041 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:14.041 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:14.041 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:14.041 #undef SPDK_CONFIG_DPDK_UADK 00:03:14.041 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.041 #define SPDK_CONFIG_EXAMPLES 1 00:03:14.041 #undef SPDK_CONFIG_FC 00:03:14.041 #define SPDK_CONFIG_FC_PATH 00:03:14.041 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:14.041 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:14.041 #undef SPDK_CONFIG_FUSE 00:03:14.041 #undef SPDK_CONFIG_FUZZER 00:03:14.041 #define SPDK_CONFIG_FUZZER_LIB 00:03:14.041 #undef SPDK_CONFIG_GOLANG 00:03:14.041 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:14.041 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:14.041 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:14.041 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:14.041 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:14.041 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:14.041 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:14.041 #define SPDK_CONFIG_IDXD 1 00:03:14.041 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:14.041 #undef SPDK_CONFIG_IPSEC_MB 00:03:14.041 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:14.041 #define SPDK_CONFIG_ISAL 1 00:03:14.041 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:14.041 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:14.041 #define SPDK_CONFIG_LIBDIR 00:03:14.041 #undef SPDK_CONFIG_LTO 00:03:14.041 #define SPDK_CONFIG_MAX_LCORES 128 00:03:14.041 #undef SPDK_CONFIG_NVME_CUSE 00:03:14.041 #undef SPDK_CONFIG_OCF 00:03:14.041 #define SPDK_CONFIG_OCF_PATH 00:03:14.041 #define SPDK_CONFIG_OPENSSL_PATH 00:03:14.041 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:14.041 #define SPDK_CONFIG_PGO_DIR 00:03:14.041 #undef SPDK_CONFIG_PGO_USE 00:03:14.041 #define SPDK_CONFIG_PREFIX /usr/local 00:03:14.041 #undef SPDK_CONFIG_RAID5F 00:03:14.041 #undef SPDK_CONFIG_RBD 
00:03:14.041 #define SPDK_CONFIG_RDMA 1 00:03:14.041 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:14.041 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:14.041 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:14.041 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:14.041 #undef SPDK_CONFIG_SHARED 00:03:14.041 #undef SPDK_CONFIG_SMA 00:03:14.041 #define SPDK_CONFIG_TESTS 1 00:03:14.041 #undef SPDK_CONFIG_TSAN 00:03:14.041 #undef SPDK_CONFIG_UBLK 00:03:14.041 #undef SPDK_CONFIG_UBSAN 00:03:14.041 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:14.041 #undef SPDK_CONFIG_URING 00:03:14.041 #define SPDK_CONFIG_URING_PATH 00:03:14.041 #undef SPDK_CONFIG_URING_ZNS 00:03:14.041 #undef SPDK_CONFIG_USDT 00:03:14.041 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:14.041 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:14.041 #undef SPDK_CONFIG_VFIO_USER 00:03:14.041 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:14.041 #undef SPDK_CONFIG_VHOST 00:03:14.041 #undef SPDK_CONFIG_VIRTIO 00:03:14.041 #undef SPDK_CONFIG_VTUNE 00:03:14.041 #define SPDK_CONFIG_VTUNE_DIR 00:03:14.041 #define SPDK_CONFIG_WERROR 1 00:03:14.041 #define SPDK_CONFIG_WPDK_DIR 00:03:14.041 #undef SPDK_CONFIG_XNVME 00:03:14.041 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:14.041 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:14.041 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:14.041 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:14.041 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:14.041 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:14.041 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:14.041 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:14.041 ++++ export PATH 00:03:14.041 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:14.041 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:14.041 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:14.041 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:14.041 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:14.041 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:14.041 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:14.041 +++ TEST_TAG=N/A 00:03:14.041 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:14.041 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:14.041 ++++ uname -s 00:03:14.041 +++ PM_OS=FreeBSD 00:03:14.041 +++ MONITOR_RESOURCES_SUDO=() 00:03:14.041 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:14.041 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:14.041 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:14.041 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:14.041 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:14.041 +++ SUDO[0]= 00:03:14.041 +++ SUDO[1]='sudo -E' 00:03:14.041 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:14.041 +++ [[ FreeBSD == FreeBSD ]] 00:03:14.041 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:14.041 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:14.041 ++ : 0 00:03:14.041 ++ export RUN_NIGHTLY 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_RUN_VALGRIND 00:03:14.041 ++ : 1 00:03:14.041 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:14.041 ++ : 1 00:03:14.041 ++ export SPDK_TEST_UNITTEST 00:03:14.041 ++ : 00:03:14.041 ++ export SPDK_TEST_AUTOBUILD 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_RELEASE_BUILD 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_ISAL 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_ISCSI 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:14.041 ++ : 1 00:03:14.041 ++ export SPDK_TEST_NVME 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVME_PMR 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVME_BP 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVME_CLI 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVME_CUSE 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVME_FDP 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVMF 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VFIOUSER 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_FUZZER 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_FUZZER_SHORT 00:03:14.041 ++ : rdma 00:03:14.041 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_RBD 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VHOST 00:03:14.041 ++ : 1 00:03:14.041 ++ export SPDK_TEST_BLOCKDEV 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_IOAT 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_BLOBFS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VHOST_INIT 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_LVOL 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_RUN_ASAN 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_RUN_UBSAN 00:03:14.041 ++ : 00:03:14.041 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_RUN_NON_ROOT 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_CRYPTO 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_FTL 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_OCF 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_VMD 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_OPAL 00:03:14.041 ++ : 00:03:14.041 ++ export SPDK_TEST_NATIVE_DPDK 00:03:14.041 ++ : true 00:03:14.041 ++ export SPDK_AUTOTEST_X 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_RAID5 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_URING 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_USDT 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_USE_IGB_UIO 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_SCHEDULER 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_SCANBUILD 00:03:14.041 ++ : 00:03:14.041 ++ export SPDK_TEST_NVMF_NICS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_SMA 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_DAOS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_XNVME 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_ACCEL_DSA 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_ACCEL_IAA 00:03:14.041 ++ : 00:03:14.041 ++ export SPDK_TEST_FUZZER_TARGET 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_TEST_NVMF_MDNS 00:03:14.041 ++ : 0 00:03:14.041 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:14.041 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:14.041 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:14.041 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:14.041 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:14.041 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:14.041 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:14.041 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:14.041 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:14.041 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:14.041 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:14.042 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:14.042 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:14.042 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:14.042 ++ PYTHONDONTWRITEBYTECODE=1 00:03:14.042 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:14.042 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:14.042 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:14.042 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:14.042 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:14.042 ++ rm -rf /var/tmp/asan_suppression_file 00:03:14.042 ++ cat 00:03:14.042 ++ echo leak:libfuse3.so 00:03:14.042 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:14.042 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:14.042 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:14.042 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:14.042 ++ '[' -z /var/spdk/dependencies ']' 00:03:14.042 ++ export DEPENDENCY_DIR 00:03:14.042 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:14.042 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:14.042 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:14.042 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:14.042 ++ export QEMU_BIN= 00:03:14.042 ++ QEMU_BIN= 00:03:14.042 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:14.042 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:14.042 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:14.042 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:14.042 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:14.042 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:14.042 ++ '[' 0 -eq 0 ']' 00:03:14.042 ++ export valgrind= 00:03:14.042 ++ valgrind= 00:03:14.042 +++ uname -s 00:03:14.042 ++ '[' FreeBSD = Linux ']' 
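Two bash idioms in the trace above are worth decoding. The long escaped pattern that ends the config.h dump is just xtrace quoting of a plain substring match: applications.sh reads include/spdk/config.h and only considers SPDK_AUTOTEST_DEBUG_APPS when the header defines SPDK_CONFIG_DEBUG. Likewise, each "++ : 0" followed by "++ export SPDK_TEST_*" is the expansion of the default-setting idiom ': "${VAR:=0}"; export VAR'. A sketch of both, with the path as shown in this run:

    # Sketch: substring-match the generated config header; the
    # [[ ... == *pattern* ]] form needs no external grep.
    config=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config && $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : "${SPDK_AUTOTEST_DEBUG_APPS:=0}"   # keep caller's value, default to 0
        export SPDK_AUTOTEST_DEBUG_APPS
    fi
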
00:03:14.042 +++ uname -s 00:03:14.042 ++ '[' FreeBSD = FreeBSD ']' 00:03:14.042 ++ MAKE=gmake 00:03:14.042 +++ sysctl -a 00:03:14.042 +++ grep -E -i hw.ncpu 00:03:14.042 +++ awk '{print $2}' 00:03:14.042 ++ MAKEFLAGS=-j10 00:03:14.042 ++ HUGEMEM=2048 00:03:14.042 ++ export HUGEMEM=2048 00:03:14.042 ++ HUGEMEM=2048 00:03:14.042 ++ NO_HUGE=() 00:03:14.042 ++ TEST_MODE= 00:03:14.042 ++ [[ -z '' ]] 00:03:14.042 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:14.042 ++ exec 00:03:14.042 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:14.042 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:14.042 ++ set_test_storage 2147483648 00:03:14.042 ++ [[ -v testdir ]] 00:03:14.042 ++ local requested_size=2147483648 00:03:14.042 ++ local mount target_dir 00:03:14.042 ++ local -A mounts fss sizes avails uses 00:03:14.042 ++ local source fs size avail mount use 00:03:14.042 ++ local storage_fallback storage_candidates 00:03:14.042 +++ mktemp -udt spdk.XXXXXX 00:03:14.042 ++ storage_fallback=/tmp/spdk.XXXXXX.ziEYMNfeWU 00:03:14.042 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:14.042 ++ [[ -n '' ]] 00:03:14.042 ++ [[ -n '' ]] 00:03:14.042 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.ziEYMNfeWU/tests/unit /tmp/spdk.XXXXXX.ziEYMNfeWU 00:03:14.042 ++ requested_size=2214592512 00:03:14.042 ++ read -r source fs size use avail _ mount 00:03:14.042 +++ df -T 00:03:14.042 +++ grep -v Filesystem 00:03:14.042 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:14.042 ++ fss["$mount"]=ufs 00:03:14.042 ++ avails["$mount"]=17237143552 00:03:14.042 ++ sizes["$mount"]=31182712832 00:03:14.042 ++ uses["$mount"]=11450953728 00:03:14.042 ++ read -r source fs size use avail _ mount 00:03:14.042 ++ mounts["$mount"]=devfs 00:03:14.042 ++ fss["$mount"]=devfs 00:03:14.042 ++ avails["$mount"]=1024 00:03:14.042 ++ sizes["$mount"]=1024 00:03:14.042 ++ uses["$mount"]=0 00:03:14.042 ++ read -r source fs size use avail _ mount 00:03:14.042 ++ mounts["$mount"]=tmpfs 00:03:14.042 ++ fss["$mount"]=tmpfs 00:03:14.042 ++ avails["$mount"]=2147438592 00:03:14.042 ++ sizes["$mount"]=2147483648 00:03:14.042 ++ uses["$mount"]=45056 00:03:14.042 ++ read -r source fs size use avail _ mount 00:03:14.042 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output 00:03:14.042 ++ fss["$mount"]=fusefs.sshfs 00:03:14.042 ++ avails["$mount"]=93550850048 00:03:14.042 ++ sizes["$mount"]=105088212992 00:03:14.042 ++ uses["$mount"]=6151929856 00:03:14.042 ++ read -r source fs size use avail _ mount 00:03:14.042 ++ printf '* Looking for test storage...\n' 00:03:14.042 * Looking for test storage... 
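set_test_storage above records, for every mount point, its filesystem type, total size, and free space by parsing df -T; the candidate-selection loop that follows in the log then picks the first directory whose mount has enough room. A condensed sketch, assuming $testdir and $storage_fallback are set as earlier in the trace, and with the field order matching this FreeBSD df -T output (source, type, size, used, avail, capacity, mount):

    # Sketch: index df -T output by mount point, as set_test_storage does.
    declare -A fss sizes avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$size     # raw df units; not converted here
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    # Selection (next in the log): the first candidate with enough space wins.
    requested_size=2214592512
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done
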
00:03:14.042 ++ local target_space new_size 00:03:14.042 ++ for target_dir in "${storage_candidates[@]}" 00:03:14.042 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:14.042 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:14.042 ++ mount=/ 00:03:14.042 ++ target_space=17237143552 00:03:14.042 ++ (( target_space == 0 || target_space < requested_size )) 00:03:14.042 ++ (( target_space >= requested_size )) 00:03:14.042 ++ [[ ufs == tmpfs ]] 00:03:14.042 ++ [[ ufs == ramfs ]] 00:03:14.042 ++ [[ / == / ]] 00:03:14.042 ++ new_size=13665546240 00:03:14.042 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:14.042 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:14.042 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:14.042 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:14.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:14.042 ++ return 0 00:03:14.042 ++ set -o errtrace 00:03:14.042 ++ shopt -s extdebug 00:03:14.042 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:14.042 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1681 -- # true 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1683 -- # xtrace_fd 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@29 -- # exec 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:14.042 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:14.042 ************************************ 00:03:14.042 START TEST unittest_pci_event 00:03:14.042 ************************************ 00:03:14.042 21:39:29 unittest.unittest_pci_event -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:14.042 00:03:14.042 
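The "21:39:29 unittest -- common/autotest_common.sh@NNNN -- $" prefixes on every traced command in this log come from the errtrace/PS4 scaffold set up just above. A sketch of that scaffold (print_backtrace is a helper defined in autotest_common.sh and assumed to be sourced):

    # Sketch: the tracing scaffold behind this log's command prefixes.
    set -o errtrace                 # make the ERR trap fire inside functions too
    shopt -s extdebug               # expose BASH_ARGV/BASH_ARGC for backtraces
    trap 'trap - ERR; print_backtrace >&2' ERR
    # \t expands to the current time in newer bash; then test domain,
    # then source file@line of the traced command:
    PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
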
00:03:14.042 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.042 http://cunit.sourceforge.net/ 00:03:14.042 00:03:14.042 00:03:14.042 Suite: pci_event 00:03:14.042 Test: test_pci_parse_event ...passed 00:03:14.042 00:03:14.042 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.042 suites 1 1 n/a 0 0 00:03:14.042 tests 1 1 1 0 0 00:03:14.042 asserts 1 1 1 0 n/a 00:03:14.042 00:03:14.042 Elapsed time = 0.000 seconds 00:03:14.042 00:03:14.042 real 0m0.030s 00:03:14.042 user 0m0.004s 00:03:14.042 sys 0m0.011s 00:03:14.042 21:39:29 unittest.unittest_pci_event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:14.042 21:39:29 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:14.042 ************************************ 00:03:14.042 END TEST unittest_pci_event 00:03:14.042 ************************************ 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:14.042 21:39:29 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:14.042 21:39:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:14.042 ************************************ 00:03:14.042 START TEST unittest_include 00:03:14.042 ************************************ 00:03:14.042 21:39:29 unittest.unittest_include -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:14.042 00:03:14.042 00:03:14.042 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.042 http://cunit.sourceforge.net/ 00:03:14.042 00:03:14.042 00:03:14.042 Suite: histogram 00:03:14.042 Test: histogram_test ...passed 00:03:14.042 Test: histogram_merge ...passed 00:03:14.042 00:03:14.042 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.042 suites 1 1 n/a 0 0 00:03:14.042 tests 2 2 2 0 0 00:03:14.042 asserts 50 50 50 0 n/a 00:03:14.042 00:03:14.042 Elapsed time = 0.000 seconds 00:03:14.042 00:03:14.042 real 0m0.006s 00:03:14.042 user 0m0.000s 00:03:14.042 sys 0m0.007s 00:03:14.042 21:39:29 unittest.unittest_include -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:14.042 21:39:29 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:14.043 ************************************ 00:03:14.043 END TEST unittest_include 00:03:14.043 ************************************ 00:03:14.377 21:39:29 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:14.377 21:39:29 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:14.377 21:39:29 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:14.377 21:39:29 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:14.377 21:39:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:14.377 ************************************ 00:03:14.377 START TEST unittest_bdev 00:03:14.377 ************************************ 00:03:14.377 21:39:29 unittest.unittest_bdev -- common/autotest_common.sh@1117 -- # unittest_bdev 00:03:14.377 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:14.377 00:03:14.377 00:03:14.377 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.377 http://cunit.sourceforge.net/ 
00:03:14.377 00:03:14.377 00:03:14.377 Suite: bdev 00:03:14.377 Test: bytes_to_blocks_test ...passed 00:03:14.377 Test: num_blocks_test ...passed 00:03:14.377 Test: io_valid_test ...passed 00:03:14.377 Test: open_write_test ...[2024-07-15 21:39:29.264910] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.265162] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.265183] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:14.377 passed 00:03:14.377 Test: claim_test ...passed 00:03:14.377 Test: alias_add_del_test ...[2024-07-15 21:39:29.268373] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:14.377 [2024-07-15 21:39:29.268421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:14.377 [2024-07-15 21:39:29.268442] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:14.377 passed 00:03:14.377 Test: get_device_stat_test ...passed 00:03:14.377 Test: bdev_io_types_test ...passed 00:03:14.377 Test: bdev_io_wait_test ...passed 00:03:14.377 Test: bdev_io_spans_split_test ...passed 00:03:14.377 Test: bdev_io_boundary_split_test ...passed 00:03:14.377 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 21:39:29.274632] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:14.377 passed 00:03:14.377 Test: bdev_io_mix_split_test ...passed 00:03:14.377 Test: bdev_io_split_with_io_wait ...passed 00:03:14.377 Test: bdev_io_write_unit_split_test ...[2024-07-15 21:39:29.279026] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:14.377 [2024-07-15 21:39:29.279081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:14.377 [2024-07-15 21:39:29.279103] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:14.377 [2024-07-15 21:39:29.279122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:14.377 passed 00:03:14.377 Test: bdev_io_alignment_with_boundary ...passed 00:03:14.377 Test: bdev_io_alignment ...passed 00:03:14.377 Test: bdev_histograms ...passed 00:03:14.377 Test: bdev_write_zeroes ...passed 00:03:14.377 Test: bdev_compare_and_write ...passed 00:03:14.377 Test: bdev_compare ...passed 00:03:14.377 Test: bdev_compare_emulated ...passed 00:03:14.377 Test: bdev_zcopy_write ...passed 00:03:14.377 Test: bdev_zcopy_read ...passed 00:03:14.377 Test: bdev_open_while_hotremove ...passed 00:03:14.377 Test: bdev_close_while_hotremove ...passed 00:03:14.377 Test: bdev_open_ext_test ...passed 00:03:14.377 Test: bdev_open_ext_unregister ...passed 00:03:14.377 Test: bdev_set_io_timeout ...[2024-07-15 21:39:29.296383] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:14.377 [2024-07-15 21:39:29.296436] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:14.377 passed 00:03:14.377 Test: bdev_set_qd_sampling ...passed 00:03:14.377 Test: lba_range_overlap ...passed 00:03:14.377 Test: lock_lba_range_check_ranges ...passed 00:03:14.377 Test: lock_lba_range_with_io_outstanding ...passed 00:03:14.377 Test: lock_lba_range_overlapped ...passed 00:03:14.377 Test: bdev_quiesce ...[2024-07-15 21:39:29.304619] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:14.377 passed 00:03:14.377 Test: bdev_io_abort ...passed 00:03:14.377 Test: bdev_unmap ...passed 00:03:14.377 Test: bdev_write_zeroes_split_test ...passed 00:03:14.377 Test: bdev_set_options_test ...passed 00:03:14.377 Test: bdev_get_memory_domains ...passed 00:03:14.377 Test: bdev_io_ext ...[2024-07-15 21:39:29.309641] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:14.377 passed 00:03:14.377 Test: bdev_io_ext_no_opts ...passed 00:03:14.377 Test: bdev_io_ext_invalid_opts ...passed 00:03:14.377 Test: bdev_io_ext_split ...passed 00:03:14.377 Test: bdev_io_ext_bounce_buffer ...passed 00:03:14.377 Test: bdev_register_uuid_alias ...[2024-07-15 21:39:29.317773] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b7193b39-42f2-11ef-9f7f-e9a656123a8b already exists 00:03:14.377 [2024-07-15 21:39:29.317808] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b7193b39-42f2-11ef-9f7f-e9a656123a8b alias for bdev bdev0 00:03:14.377 passed 00:03:14.377 Test: bdev_unregister_by_name ...passed 00:03:14.377 Test: for_each_bdev_test ...[2024-07-15 21:39:29.318136] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:14.377 [2024-07-15 21:39:29.318149] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7983:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:14.377 passed 00:03:14.377 Test: bdev_seek_test ...passed 00:03:14.377 Test: bdev_copy ...passed 00:03:14.377 Test: bdev_copy_split_test ...passed 00:03:14.377 Test: examine_locks ...passed 00:03:14.377 Test: claim_v2_rwo ...[2024-07-15 21:39:29.322690] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.322722] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.322736] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.322753] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.322777] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.377 [2024-07-15 21:39:29.322789] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8704:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:14.377 passed 00:03:14.378 Test: claim_v2_rom ...[2024-07-15 21:39:29.322843] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.322857] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.322867] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.322880] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.322900] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:14.378 [2024-07-15 21:39:29.322930] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:14.378 passed 00:03:14.378 Test: claim_v2_rwm ...[2024-07-15 21:39:29.322957] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:14.378 [2024-07-15 21:39:29.322977] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.322995] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323007] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323027] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:14.378 passed 00:03:14.378 Test: claim_v2_existing_writer ...[2024-07-15 21:39:29.323045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:14.378 [2024-07-15 21:39:29.323086] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:14.378 [2024-07-15 21:39:29.323105] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:14.378 passed 00:03:14.378 Test: claim_v2_existing_v1 ...[2024-07-15 21:39:29.323133] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323142] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:14.378 passed 00:03:14.378 Test: claim_v1_existing_v2 ...[2024-07-15 21:39:29.323151] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323171] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:14.378 [2024-07-15 21:39:29.323193] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:14.378 passed 00:03:14.378 Test: examine_claimed ...[2024-07-15 21:39:29.323250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:14.378 passed 00:03:14.378 00:03:14.378 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.378 suites 1 1 n/a 0 0 00:03:14.378 tests 59 59 59 0 0 00:03:14.378 asserts 4599 4599 4599 0 n/a 00:03:14.378 00:03:14.378 Elapsed time = 0.062 seconds 00:03:14.378 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:14.378 00:03:14.378 00:03:14.378 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.378 http://cunit.sourceforge.net/ 00:03:14.378 00:03:14.378 00:03:14.378 Suite: nvme 00:03:14.378 Test: test_create_ctrlr ...passed 00:03:14.378 Test: test_reset_ctrlr ...[2024-07-15 21:39:29.333273] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:14.378 passed 00:03:14.378 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:14.378 Test: test_failover_ctrlr ...passed 00:03:14.378 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:03:14.378 Test: test_pending_reset ...[2024-07-15 21:39:29.333671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.333710] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.333731] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 passed 00:03:14.378 Test: test_attach_ctrlr ...passed 00:03:14.378 Test: test_aer_cb ...[2024-07-15 21:39:29.333874] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.333904] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.333969] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:14.378 passed 00:03:14.378 Test: test_submit_nvme_cmd ...passed 00:03:14.378 Test: test_add_remove_trid ...passed 00:03:14.378 Test: test_abort ...passed 00:03:14.378 Test: test_get_io_qpair ...passed 00:03:14.378 Test: test_bdev_unregister ...[2024-07-15 21:39:29.334210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:14.378 passed 00:03:14.378 Test: test_compare_ns ...passed 00:03:14.378 Test: test_init_ana_log_page ...passed 00:03:14.378 Test: test_get_memory_domains ...passed 00:03:14.378 Test: test_reconnect_qpair ...passed 00:03:14.378 Test: test_create_bdev_ctrlr ...passed 00:03:14.378 Test: test_add_multi_ns_to_bdev ...[2024-07-15 21:39:29.334444] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.334495] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:14.378 passed 00:03:14.378 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:14.378 Test: test_admin_path ...passed 00:03:14.378 Test: test_reset_bdev_ctrlr ...[2024-07-15 21:39:29.334612] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:14.378 passed 00:03:14.378 Test: test_find_io_path ...passed 00:03:14.378 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:14.378 Test: test_retry_io_for_io_path_error ...passed 00:03:14.378 Test: test_retry_io_count ...passed 00:03:14.378 Test: test_concurrent_read_ana_log_page ...passed 00:03:14.378 Test: test_retry_io_for_ana_error ...passed 00:03:14.378 Test: test_check_io_error_resiliency_params ...[2024-07-15 21:39:29.335118] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:14.378 [2024-07-15 21:39:29.335135] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:14.378 [2024-07-15 21:39:29.335145] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:14.378 [2024-07-15 21:39:29.335155] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:14.378 [2024-07-15 21:39:29.335164] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:14.378 [2024-07-15 21:39:29.335174] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:14.378 passed 00:03:14.378 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:14.378 Test: test_reconnect_ctrlr ...[2024-07-15 21:39:29.335184] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:14.378 [2024-07-15 21:39:29.335193] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:14.378 [2024-07-15 21:39:29.335203] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:14.378 [2024-07-15 21:39:29.335282] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335301] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335335] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 passed 00:03:14.378 Test: test_retry_failover_ctrlr ...passed 00:03:14.378 Test: test_fail_path ...passed 00:03:14.378 Test: test_nvme_ns_cmp ...passed 00:03:14.378 Test: test_ana_transition ...passed 00:03:14.378 Test: test_set_preferred_path ...[2024-07-15 21:39:29.335351] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335367] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335406] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335468] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
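The bdev_nvme_check_io_error_resiliency_params failures above spell out the invariants between the three reconnect knobs: ctrlr_loss_timeout_sec must be >= -1, reconnect_delay_sec must be nonzero (and no larger than the loss timeout) whenever the loss timeout is nonzero, and fast_io_fail_timeout_sec must fall between the two. A sketch of one combination those checks accept, using option names assumed from SPDK's rpc.py bdev_nvme_set_options (the flag spellings are not taken from this log):

    # Sketch: one self-consistent reconnect policy.
    # A loss timeout of -1 would mean "retry forever"; 0 disables reconnect.
    ./scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 60 \
        --reconnect-delay-sec 5 \
        --fast-io-fail-timeout-sec 30
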
00:03:14.378 [2024-07-15 21:39:29.335486] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335502] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335517] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 [2024-07-15 21:39:29.335532] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 passed 00:03:14.378 Test: test_find_next_io_path ...passed 00:03:14.378 Test: test_find_io_path_min_qd ...passed 00:03:14.378 Test: test_disable_auto_failback ...passed 00:03:14.378 Test: test_set_multipath_policy ...[2024-07-15 21:39:29.335680] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.378 passed 00:03:14.378 Test: test_uuid_generation ...passed 00:03:14.378 Test: test_retry_io_to_same_path ...passed 00:03:14.378 Test: test_race_between_reset_and_disconnected ...passed 00:03:14.378 Test: test_ctrlr_op_rpc ...passed 00:03:14.378 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:14.378 Test: test_disable_enable_ctrlr ...passed 00:03:14.379 Test: test_delete_ctrlr_done ...[2024-07-15 21:39:29.374129] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.379 [2024-07-15 21:39:29.374200] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:14.379 passed 00:03:14.379 Test: test_ns_remove_during_reset ...passed 00:03:14.379 Test: test_io_path_is_current ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 1 1 n/a 0 0 00:03:14.379 tests 49 49 49 0 0 00:03:14.379 asserts 3577 3577 3577 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.008 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 Test Options 00:03:14.379 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:14.379 00:03:14.379 Suite: raid 00:03:14.379 Test: test_create_raid ...passed 00:03:14.379 Test: test_create_raid_superblock ...passed 00:03:14.379 Test: test_delete_raid ...passed 00:03:14.379 Test: test_create_raid_invalid_args ...[2024-07-15 21:39:29.387290] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:14.379 [2024-07-15 21:39:29.387521] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:14.379 [2024-07-15 21:39:29.387646] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:14.379 [2024-07-15 21:39:29.387697] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:14.379 [2024-07-15 21:39:29.387716] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:14.379 [2024-07-15 21:39:29.387891] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:14.379 [2024-07-15 21:39:29.387925] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:14.379 passed 00:03:14.379 Test: test_delete_raid_invalid_args ...passed 00:03:14.379 Test: test_io_channel ...passed 00:03:14.379 Test: test_reset_io ...passed 00:03:14.379 Test: test_multi_raid ...passed 00:03:14.379 Test: test_io_type_supported ...passed 00:03:14.379 Test: test_raid_json_dump_info ...passed 00:03:14.379 Test: test_context_size ...passed 00:03:14.379 Test: test_raid_level_conversions ...passed 00:03:14.379 Test: test_raid_io_split ...passed 00:03:14.379 Test: test_raid_process ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 1 1 n/a 0 0 00:03:14.379 tests 14 14 14 0 0 00:03:14.379 asserts 6183 6183 6183 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.008 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: raid_sb 00:03:14.379 Test: test_raid_bdev_write_superblock ...passed 00:03:14.379 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.379 Test: 
test_raid_bdev_parse_superblock ...passed 00:03:14.379 Suite: raid_sb_md 00:03:14.379 Test: test_raid_bdev_write_superblock ...passed 00:03:14.379 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.379 Test: test_raid_bdev_parse_superblock ...passed 00:03:14.379 Suite: raid_sb_md_interleaved 00:03:14.379 Test: test_raid_bdev_write_superblock ...[2024-07-15 21:39:29.394842] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.379 [2024-07-15 21:39:29.395037] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.379 passed 00:03:14.379 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.379 Test: test_raid_bdev_parse_superblock ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 3 3 n/a 0 0 00:03:14.379 tests 9 9 9 0 0 00:03:14.379 asserts 139 139 139 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.000 seconds 00:03:14.379 [2024-07-15 21:39:29.395118] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: concat 00:03:14.379 Test: test_concat_start ...passed 00:03:14.379 Test: test_concat_rw ...passed 00:03:14.379 Test: test_concat_null_payload ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 1 1 n/a 0 0 00:03:14.379 tests 3 3 3 0 0 00:03:14.379 asserts 8460 8460 8460 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.000 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: raid0 00:03:14.379 Test: test_write_io ...passed 00:03:14.379 Test: test_read_io ...passed 00:03:14.379 Test: test_unmap_io ...passed 00:03:14.379 Test: test_io_failure ...passed 00:03:14.379 Suite: raid0_dif 00:03:14.379 Test: test_write_io ...passed 00:03:14.379 Test: test_read_io ...passed 00:03:14.379 Test: test_unmap_io ...passed 00:03:14.379 Test: test_io_failure ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 2 2 n/a 0 0 00:03:14.379 tests 8 8 8 0 0 00:03:14.379 asserts 368291 368291 368291 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.016 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: raid1 00:03:14.379 Test: test_raid1_start ...passed 00:03:14.379 Test: test_raid1_read_balancing ...passed 00:03:14.379 Test: test_raid1_write_error ...passed 00:03:14.379 Test: test_raid1_read_error ...passed 
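The raid suites above exercise the same validation a live target applies when a RAID bdev is created: raid level '-1' and an invalid strip size such as 1231 are rejected, duplicate bdev names are refused, and already-claimed base bdevs fail to attach. For reference, a hedged sketch of the corresponding RPC, with flag spellings assumed from SPDK's rpc.py bdev_raid_create and example bdev names:

    # Sketch: create a raid0 bdev over two NVMe namespaces.
    # -z is the strip size in KiB; 64 matches the test options above.
    ./scripts/rpc.py bdev_raid_create \
        -n myraid \
        -z 64 \
        -r raid0 \
        -b "Nvme0n1 Nvme1n1"
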
00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 1 1 n/a 0 0 00:03:14.379 tests 4 4 4 0 0 00:03:14.379 asserts 4374 4374 4374 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.000 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: zone 00:03:14.379 Test: test_zone_get_operation ...passed 00:03:14.379 Test: test_bdev_zone_get_info ...passed 00:03:14.379 Test: test_bdev_zone_management ...passed 00:03:14.379 Test: test_bdev_zone_append ...passed 00:03:14.379 Test: test_bdev_zone_append_with_md ...passed 00:03:14.379 Test: test_bdev_zone_appendv ...passed 00:03:14.379 Test: test_bdev_zone_appendv_with_md ...passed 00:03:14.379 Test: test_bdev_io_get_append_location ...passed 00:03:14.379 00:03:14.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.379 suites 1 1 n/a 0 0 00:03:14.379 tests 8 8 8 0 0 00:03:14.379 asserts 94 94 94 0 n/a 00:03:14.379 00:03:14.379 Elapsed time = 0.000 seconds 00:03:14.379 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:14.379 00:03:14.379 00:03:14.379 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.379 http://cunit.sourceforge.net/ 00:03:14.379 00:03:14.379 00:03:14.379 Suite: gpt_parse 00:03:14.379 Test: test_parse_mbr_and_primary ...[2024-07-15 21:39:29.438266] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.379 [2024-07-15 21:39:29.438549] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.379 [2024-07-15 21:39:29.438596] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:14.379 [2024-07-15 21:39:29.438612] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:14.379 [2024-07-15 21:39:29.438631] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:14.379 [2024-07-15 21:39:29.438646] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:14.379 passed 00:03:14.379 Test: test_parse_secondary ...[2024-07-15 21:39:29.438878] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:14.379 [2024-07-15 21:39:29.438893] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:14.379 [2024-07-15 21:39:29.438930] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:14.379 [2024-07-15 21:39:29.438946] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:14.379 passed 00:03:14.379 Test: test_check_mbr ...[2024-07-15 21:39:29.439182] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.379 [2024-07-15 21:39:29.439198] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.379 passed 00:03:14.380 Test: test_read_header ...[2024-07-15 21:39:29.439223] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:14.380 [2024-07-15 21:39:29.439240] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:14.380 [2024-07-15 21:39:29.439256] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:14.380 [2024-07-15 21:39:29.439273] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:14.380 [2024-07-15 21:39:29.439289] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:14.380 passed 00:03:14.380 Test: test_read_partitions ...[2024-07-15 21:39:29.439304] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:14.380 [2024-07-15 21:39:29.439327] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:14.380 [2024-07-15 21:39:29.439344] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:14.380 [2024-07-15 21:39:29.439364] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:14.380 [2024-07-15 21:39:29.439378] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:14.380 [2024-07-15 21:39:29.439495] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:14.380 passed 00:03:14.380 00:03:14.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.380 suites 1 1 n/a 0 0 00:03:14.380 tests 5 5 5 0 0 00:03:14.380 asserts 33 33 33 0 n/a 00:03:14.380 00:03:14.380 Elapsed time = 0.000 seconds 00:03:14.380 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:14.380 00:03:14.380 00:03:14.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.380 http://cunit.sourceforge.net/ 00:03:14.380 00:03:14.380 00:03:14.380 Suite: bdev_part 00:03:14.380 Test: part_test ...[2024-07-15 21:39:29.447983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 73b39e7d-365c-c45a-bad6-e80e18e6b325 already exists 00:03:14.380 passed 00:03:14.380 Test: part_free_test ...passed 00:03:14.380 Test: part_get_io_channel_test ...[2024-07-15 21:39:29.448192] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:73b39e7d-365c-c45a-bad6-e80e18e6b325 alias for bdev test1 00:03:14.380 passed 00:03:14.380 Test: part_construct_ext ...passed 00:03:14.380 00:03:14.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.380 suites 1 1 n/a 0 0 00:03:14.380 tests 4 4 4 0 0 00:03:14.380 asserts 48 48 48 0 n/a 00:03:14.380 00:03:14.380 Elapsed time = 0.000 seconds 00:03:14.380 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 
00:03:14.380 00:03:14.380 00:03:14.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.380 http://cunit.sourceforge.net/ 00:03:14.380 00:03:14.380 00:03:14.380 Suite: scsi_nvme_suite 00:03:14.380 Test: scsi_nvme_translate_test ...passed 00:03:14.380 00:03:14.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.380 suites 1 1 n/a 0 0 00:03:14.380 tests 1 1 1 0 0 00:03:14.380 asserts 104 104 104 0 n/a 00:03:14.380 00:03:14.380 Elapsed time = 0.000 seconds 00:03:14.380 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:14.380 00:03:14.380 00:03:14.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.380 http://cunit.sourceforge.net/ 00:03:14.380 00:03:14.380 00:03:14.380 Suite: lvol 00:03:14.380 Test: ut_lvs_init ...passed 00:03:14.380 Test: ut_lvol_init ...passed 00:03:14.380 Test: ut_lvol_snapshot ...passed 00:03:14.380 Test: ut_lvol_clone ...passed 00:03:14.380 Test: ut_lvs_destroy ...passed 00:03:14.380 Test: ut_lvs_unload ...passed 00:03:14.380 Test: ut_lvol_resize ...passed 00:03:14.380 Test: ut_lvol_set_read_only ...passed 00:03:14.380 Test: ut_lvol_hotremove ...passed 00:03:14.380 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:14.380 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:14.380 Test: ut_lvol_read_write ...passed 00:03:14.380 Test: ut_vbdev_lvol_submit_request ...passed 00:03:14.380 Test: ut_lvol_examine_config ...passed 00:03:14.380 Test: ut_lvol_examine_disk ...passed 00:03:14.380 Test: ut_lvol_rename ...passed 00:03:14.380 Test: ut_bdev_finish ...passed 00:03:14.380 Test: ut_lvs_rename ...passed 00:03:14.380 Test: ut_lvol_seek ...passed 00:03:14.380 Test: ut_esnap_dev_create ...passed 00:03:14.380 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:14.380 Test: ut_lvol_shallow_copy ...passed 00:03:14.380 Test: ut_lvol_set_external_parent ...passed 00:03:14.380 00:03:14.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.380 suites 1 1 n/a 0 0 00:03:14.380 tests 23 23 23 0 0 00:03:14.380 asserts 770 770 770 0 n/a 00:03:14.380 00:03:14.380 Elapsed time = 0.000 seconds 00:03:14.380 [2024-07-15 21:39:29.463838] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:14.380 [2024-07-15 21:39:29.464006] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:14.380 [2024-07-15 21:39:29.464102] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:14.380 [2024-07-15 21:39:29.464180] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:14.380 [2024-07-15 21:39:29.464217] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:14.380 [2024-07-15 21:39:29.464227] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:14.380 [2024-07-15 21:39:29.464262] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:14.380 [2024-07-15 21:39:29.464272] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 
00:03:14.380 [2024-07-15 21:39:29.464281] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:14.380 [2024-07-15 21:39:29.464307] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:14.380 [2024-07-15 21:39:29.464317] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:14.380 [2024-07-15 21:39:29.464339] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:14.380 [2024-07-15 21:39:29.464348] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:14.380 [2024-07-15 21:39:29.464363] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:14.380 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:14.380 00:03:14.380 00:03:14.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.380 http://cunit.sourceforge.net/ 00:03:14.380 00:03:14.380 00:03:14.380 Suite: zone_block 00:03:14.380 Test: test_zone_block_create ...passed 00:03:14.380 Test: test_zone_block_create_invalid ...[2024-07-15 21:39:29.478228] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:14.380 passed 00:03:14.380 Test: test_get_zone_info ...passed 00:03:14.380 Test: test_supported_io_types ...passed 00:03:14.380 Test: test_reset_zone ...passed 00:03:14.380 Test: test_open_zone ...[2024-07-15 21:39:29.478554] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 21:39:29.478581] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:14.380 [2024-07-15 21:39:29.478596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 21:39:29.478612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:14.380 [2024-07-15 21:39:29.478625] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 21:39:29.478637] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:14.380 [2024-07-15 21:39:29.478649] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 21:39:29.478734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.380 [2024-07-15 21:39:29.478757] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.478772] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.478859] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.478876] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.478940] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 passed 00:03:14.380 Test: test_zone_write ...[2024-07-15 21:39:29.479195] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.479206] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.479242] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:14.380 [2024-07-15 21:39:29.479252] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.479264] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:14.380 [2024-07-15 21:39:29.479273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.479894] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:14.380 [2024-07-15 21:39:29.479908] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.380 [2024-07-15 21:39:29.479921] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:14.381 [2024-07-15 21:39:29.479935] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480585] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:14.381 [2024-07-15 21:39:29.480612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.381 passed 00:03:14.381 Test: test_zone_read ...passed 00:03:14.381 Test: test_close_zone ...[2024-07-15 21:39:29.480651] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:14.381 [2024-07-15 21:39:29.480661] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480673] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:14.381 [2024-07-15 21:39:29.480681] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480728] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:14.381 [2024-07-15 21:39:29.480736] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480778] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 passed 00:03:14.381 Test: test_finish_zone ...passed 00:03:14.381 Test: test_append_zone ...[2024-07-15 21:39:29.480815] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480827] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480882] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480893] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480921] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:14.381 [2024-07-15 21:39:29.480930] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.480941] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:14.381 [2024-07-15 21:39:29.480949] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.381 [2024-07-15 21:39:29.482181] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:14.381 [2024-07-15 21:39:29.482207] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.381 passed 00:03:14.381 00:03:14.381 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.381 suites 1 1 n/a 0 0 00:03:14.381 tests 11 11 11 0 0 00:03:14.381 asserts 3437 3437 3437 0 n/a 00:03:14.381 00:03:14.381 Elapsed time = 0.008 seconds 00:03:14.381 21:39:29 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:14.381 00:03:14.381 00:03:14.381 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.381 http://cunit.sourceforge.net/ 00:03:14.381 00:03:14.381 00:03:14.381 Suite: bdev 00:03:14.381 Test: basic ...[2024-07-15 21:39:29.491135] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:14.381 [2024-07-15 21:39:29.491338] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x10046dc6a480 (0x24b260): Operation not permitted (rc=-1) 00:03:14.381 [2024-07-15 21:39:29.491355] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:14.381 passed 00:03:14.381 Test: unregister_and_close ...passed 00:03:14.381 Test: unregister_and_close_different_threads ...passed 00:03:14.381 Test: basic_qos ...passed 00:03:14.381 Test: put_channel_during_reset ...passed 00:03:14.381 Test: aborted_reset ...passed 00:03:14.381 Test: aborted_reset_no_outstanding_io ...passed 00:03:14.381 Test: io_during_reset ...passed 00:03:14.381 Test: reset_completions ...passed 00:03:14.381 Test: io_during_qos_queue ...passed 00:03:14.381 Test: io_during_qos_reset ...passed 00:03:14.381 Test: enomem ...passed 00:03:14.381 Test: enomem_multi_bdev ...passed 00:03:14.381 Test: enomem_multi_bdev_unregister ...passed 00:03:14.381 Test: enomem_multi_io_target ...passed 00:03:14.381 Test: qos_dynamic_enable ...passed 00:03:14.381 Test: bdev_histograms_mt ...passed 00:03:14.381 Test: bdev_set_io_timeout_mt ...[2024-07-15 21:39:29.525456] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x10046dc6a600 not unregistered 00:03:14.381 passed 00:03:14.381 Test: lock_lba_range_then_submit_io ...[2024-07-15 21:39:29.526505] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b248 already registered (old:0x10046dc6a600 new:0x10046dc6a780) 00:03:14.381 passed 00:03:14.381 Test: unregister_during_reset ...passed 00:03:14.381 Test: event_notify_and_close ...passed 00:03:14.381 Test: unregister_and_qos_poller ...passed 00:03:14.381 Suite: bdev_wrong_thread 00:03:14.381 Test: spdk_bdev_register_wt ...passed 00:03:14.381 Test: spdk_bdev_examine_wt ...passed[2024-07-15 21:39:29.532663] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8503:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x10046dc33380 (0x10046dc33380) 00:03:14.381 [2024-07-15 21:39:29.532715] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x10046dc33380 (0x10046dc33380) 00:03:14.381 00:03:14.381 00:03:14.381 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.381 suites 2 2 n/a 0 0 00:03:14.381 tests 24 24 24 0 0 00:03:14.381 asserts 621 621 621 0 n/a 00:03:14.381 00:03:14.381 Elapsed time = 0.047 seconds 00:03:14.381 00:03:14.381 real 0m0.280s 00:03:14.381 user 0m0.159s 00:03:14.381 sys 0m0.094s 00:03:14.381 21:39:29 unittest.unittest_bdev -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:14.381 21:39:29 unittest.unittest_bdev -- 
common/autotest_common.sh@10 -- # set +x 00:03:14.381 ************************************ 00:03:14.381 END TEST unittest_bdev 00:03:14.381 ************************************ 00:03:14.638 21:39:29 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:14.638 21:39:29 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.638 21:39:29 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.638 21:39:29 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.638 21:39:29 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.638 21:39:29 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:14.638 21:39:29 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:14.638 21:39:29 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:14.638 21:39:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:14.638 ************************************ 00:03:14.638 START TEST unittest_blob_blobfs 00:03:14.639 ************************************ 00:03:14.639 21:39:29 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1117 -- # unittest_blob 00:03:14.639 21:39:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:14.639 21:39:29 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:14.639 00:03:14.639 00:03:14.639 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.639 http://cunit.sourceforge.net/ 00:03:14.639 00:03:14.639 00:03:14.639 Suite: blob_nocopy_noextent 00:03:14.639 Test: blob_init ...[2024-07-15 21:39:29.597932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:14.639 passed 00:03:14.639 Test: blob_thin_provision ...passed 00:03:14.639 Test: blob_read_only ...passed 00:03:14.639 Test: bs_load ...[2024-07-15 21:39:29.699684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:14.639 passed 00:03:14.639 Test: bs_load_custom_cluster_size ...passed 00:03:14.639 Test: bs_load_after_failed_grow ...passed 00:03:14.639 Test: bs_cluster_sz ...[2024-07-15 21:39:29.734949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:14.639 [2024-07-15 21:39:29.735050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:14.639 [2024-07-15 21:39:29.735070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:14.639 passed 00:03:14.639 Test: bs_resize_md ...passed 00:03:14.639 Test: bs_destroy ...passed 00:03:14.639 Test: bs_type ...passed 00:03:14.898 Test: bs_super_block ...passed 00:03:14.898 Test: bs_test_recover_cluster_count ...passed 00:03:14.898 Test: bs_grow_live ...passed 00:03:14.898 Test: bs_grow_live_no_space ...passed 00:03:14.898 Test: bs_test_grow ...passed 00:03:14.898 Test: blob_serialize_test ...passed 00:03:14.898 Test: super_block_crc ...passed 00:03:14.898 Test: blob_thin_prov_write_count_io ...passed 00:03:14.898 Test: blob_thin_prov_unmap_cluster ...passed 00:03:14.898 Test: bs_load_iter_test ...passed 00:03:14.898 Test: blob_relations ...[2024-07-15 21:39:29.984584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:29.984673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 [2024-07-15 21:39:29.984801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:29.984813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 passed 00:03:14.898 Test: blob_relations2 ...[2024-07-15 21:39:30.002431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:30.002491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 [2024-07-15 21:39:30.002514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:30.002522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 [2024-07-15 21:39:30.002675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:30.002686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 [2024-07-15 21:39:30.002721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.898 [2024-07-15 21:39:30.002729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.898 passed 00:03:14.898 Test: blob_relations3 ...passed 00:03:15.157 Test: blobstore_clean_power_failure ...passed 00:03:15.157 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:39:30.238805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:15.157 [2024-07-15 21:39:30.255653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:15.157 [2024-07-15 21:39:30.255713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:15.157 [2024-07-15 21:39:30.255723] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.157 [2024-07-15 21:39:30.272446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:15.157 [2024-07-15 21:39:30.272487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:15.157 [2024-07-15 21:39:30.272496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:15.157 [2024-07-15 21:39:30.272504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.157 [2024-07-15 21:39:30.289213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:15.157 [2024-07-15 21:39:30.289261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.157 [2024-07-15 21:39:30.305928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:15.157 [2024-07-15 21:39:30.305985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.157 [2024-07-15 21:39:30.323307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:15.157 [2024-07-15 21:39:30.323362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.416 passed 00:03:15.416 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:39:30.373958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:15.416 [2024-07-15 21:39:30.407493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:15.416 [2024-07-15 21:39:30.424424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:15.416 passed 00:03:15.416 Test: blob_io_unit ...passed 00:03:15.416 Test: blob_io_unit_compatibility ...passed 00:03:15.416 Test: blob_ext_md_pages ...passed 00:03:15.416 Test: blob_esnap_io_4096_4096 ...passed 00:03:15.416 Test: blob_esnap_io_512_512 ...passed 00:03:15.674 Test: blob_esnap_io_4096_512 ...passed 00:03:15.674 Test: blob_esnap_io_512_4096 ...passed 00:03:15.674 Test: blob_esnap_clone_resize ...passed 00:03:15.674 Suite: blob_bs_nocopy_noextent 00:03:15.674 Test: blob_open ...passed 00:03:15.674 Test: blob_create ...[2024-07-15 21:39:30.790052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:15.674 passed 00:03:15.933 Test: blob_create_loop ...passed 00:03:15.933 Test: blob_create_fail ...[2024-07-15 21:39:30.903811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:15.933 passed 00:03:15.933 Test: blob_create_internal ...passed 00:03:15.933 Test: blob_create_zero_extent ...passed 00:03:15.933 Test: blob_snapshot ...passed 00:03:16.191 Test: blob_clone ...passed 00:03:16.191 Test: blob_inflate 
...[2024-07-15 21:39:31.167287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:16.191 passed 00:03:16.191 Test: blob_delete ...passed 00:03:16.191 Test: blob_resize_test ...[2024-07-15 21:39:31.267645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:16.191 passed 00:03:16.191 Test: blob_resize_thin_test ...passed 00:03:16.450 Test: channel_ops ...passed 00:03:16.450 Test: blob_super ...passed 00:03:16.450 Test: blob_rw_verify_iov ...passed 00:03:16.450 Test: blob_unmap ...passed 00:03:16.450 Test: blob_iter ...passed 00:03:16.708 Test: blob_parse_md ...passed 00:03:16.708 Test: bs_load_pending_removal ...passed 00:03:16.708 Test: bs_unload ...[2024-07-15 21:39:31.770541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:16.708 passed 00:03:16.708 Test: bs_usable_clusters ...passed 00:03:16.708 Test: blob_crc ...[2024-07-15 21:39:31.881156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:16.708 [2024-07-15 21:39:31.881217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:16.967 passed 00:03:16.967 Test: blob_flags ...passed 00:03:16.967 Test: bs_version ...passed 00:03:16.967 Test: blob_set_xattrs_test ...[2024-07-15 21:39:32.044391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:16.967 [2024-07-15 21:39:32.044468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:16.967 passed 00:03:16.967 Test: blob_thin_prov_alloc ...passed 00:03:17.224 Test: blob_insert_cluster_msg_test ...passed 00:03:17.224 Test: blob_thin_prov_rw ...passed 00:03:17.224 Test: blob_thin_prov_rle ...passed 00:03:17.224 Test: blob_thin_prov_rw_iov ...passed 00:03:17.224 Test: blob_snapshot_rw ...passed 00:03:17.483 Test: blob_snapshot_rw_iov ...passed 00:03:17.483 Test: blob_inflate_rw ...passed 00:03:17.483 Test: blob_snapshot_freeze_io ...passed 00:03:17.483 Test: blob_operation_split_rw ...passed 00:03:17.747 Test: blob_operation_split_rw_iov ...passed 00:03:17.747 Test: blob_simultaneous_operations ...[2024-07-15 21:39:32.776902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:17.747 [2024-07-15 21:39:32.776986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.747 [2024-07-15 21:39:32.777572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:17.747 [2024-07-15 21:39:32.777582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.747 [2024-07-15 21:39:32.782234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:17.747 [2024-07-15 21:39:32.782259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.747 [2024-07-15 21:39:32.782287] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:17.747 [2024-07-15 21:39:32.782295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.747 passed 00:03:17.747 Test: blob_persist_test ...passed 00:03:17.747 Test: blob_decouple_snapshot ...passed 00:03:18.005 Test: blob_seek_io_unit ...passed 00:03:18.005 Test: blob_nested_freezes ...passed 00:03:18.005 Test: blob_clone_resize ...passed 00:03:18.005 Test: blob_shallow_copy ...[2024-07-15 21:39:33.117797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:18.005 [2024-07-15 21:39:33.117877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:18.005 [2024-07-15 21:39:33.117889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:18.005 passed 00:03:18.005 Suite: blob_blob_nocopy_noextent 00:03:18.005 Test: blob_write ...passed 00:03:18.262 Test: blob_read ...passed 00:03:18.262 Test: blob_rw_verify ...passed 00:03:18.262 Test: blob_rw_verify_iov_nomem ...passed 00:03:18.262 Test: blob_rw_iov_read_only ...passed 00:03:18.520 Test: blob_xattr ...passed 00:03:18.520 Test: blob_dirty_shutdown ...passed 00:03:18.520 Test: blob_is_degraded ...passed 00:03:18.520 Suite: blob_esnap_bs_nocopy_noextent 00:03:18.520 Test: blob_esnap_create ...passed 00:03:18.520 Test: blob_esnap_thread_add_remove ...passed 00:03:18.778 Test: blob_esnap_clone_snapshot ...passed 00:03:18.778 Test: blob_esnap_clone_inflate ...passed 00:03:18.778 Test: blob_esnap_clone_decouple ...passed 00:03:18.778 Test: blob_esnap_clone_reload ...passed 00:03:18.778 Test: blob_esnap_hotplug ...passed 00:03:18.778 Test: blob_set_parent ...[2024-07-15 21:39:33.960245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:18.778 [2024-07-15 21:39:33.960325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:18.778 [2024-07-15 21:39:33.960351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:18.778 [2024-07-15 21:39:33.960361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:18.778 [2024-07-15 21:39:33.960421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:19.036 passed 00:03:19.036 Test: blob_set_external_parent ...[2024-07-15 21:39:34.009868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:19.036 [2024-07-15 21:39:34.009947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:19.036 [2024-07-15 21:39:34.009957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:19.036 [2024-07-15 21:39:34.010013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:19.036 passed 00:03:19.036 Suite: blob_nocopy_extent 00:03:19.036 Test: blob_init ...[2024-07-15 21:39:34.026172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:19.036 passed 00:03:19.036 Test: blob_thin_provision ...passed 00:03:19.036 Test: blob_read_only ...passed 00:03:19.036 Test: bs_load ...[2024-07-15 21:39:34.094908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:19.036 passed 00:03:19.036 Test: bs_load_custom_cluster_size ...passed 00:03:19.036 Test: bs_load_after_failed_grow ...passed 00:03:19.036 Test: bs_cluster_sz ...[2024-07-15 21:39:34.128056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:19.036 [2024-07-15 21:39:34.128131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:19.036 [2024-07-15 21:39:34.128170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:19.036 passed 00:03:19.036 Test: bs_resize_md ...passed 00:03:19.036 Test: bs_destroy ...passed 00:03:19.036 Test: bs_type ...passed 00:03:19.295 Test: bs_super_block ...passed 00:03:19.295 Test: bs_test_recover_cluster_count ...passed 00:03:19.295 Test: bs_grow_live ...passed 00:03:19.295 Test: bs_grow_live_no_space ...passed 00:03:19.295 Test: bs_test_grow ...passed 00:03:19.295 Test: blob_serialize_test ...passed 00:03:19.295 Test: super_block_crc ...passed 00:03:19.295 Test: blob_thin_prov_write_count_io ...passed 00:03:19.295 Test: blob_thin_prov_unmap_cluster ...passed 00:03:19.295 Test: bs_load_iter_test ...passed 00:03:19.295 Test: blob_relations ...[2024-07-15 21:39:34.364049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.364121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 [2024-07-15 21:39:34.364261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.364272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 passed 00:03:19.295 Test: blob_relations2 ...[2024-07-15 21:39:34.381595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.381633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 [2024-07-15 21:39:34.381643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.381650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 [2024-07-15 
21:39:34.381804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.381815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 [2024-07-15 21:39:34.381852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:19.295 [2024-07-15 21:39:34.381860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.295 passed 00:03:19.295 Test: blob_relations3 ...passed 00:03:19.554 Test: blobstore_clean_power_failure ...passed 00:03:19.554 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:39:34.623065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:19.554 [2024-07-15 21:39:34.640316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:19.554 [2024-07-15 21:39:34.658085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:19.554 [2024-07-15 21:39:34.658178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:19.554 [2024-07-15 21:39:34.658195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.554 [2024-07-15 21:39:34.676054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:19.554 [2024-07-15 21:39:34.676110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:19.554 [2024-07-15 21:39:34.676119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:19.554 [2024-07-15 21:39:34.676127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.554 [2024-07-15 21:39:34.693491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:19.554 [2024-07-15 21:39:34.693538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:19.554 [2024-07-15 21:39:34.693548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:19.554 [2024-07-15 21:39:34.693555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.554 [2024-07-15 21:39:34.711935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:19.554 [2024-07-15 21:39:34.712005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.554 [2024-07-15 21:39:34.729370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:19.554 [2024-07-15 21:39:34.729435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.813 [2024-07-15 21:39:34.747198] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:19.813 [2024-07-15 21:39:34.747276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.813 passed 00:03:19.813 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:39:34.800583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:19.813 [2024-07-15 21:39:34.818194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:19.813 [2024-07-15 21:39:34.854476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:19.813 [2024-07-15 21:39:34.873927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:19.813 passed 00:03:19.813 Test: blob_io_unit ...passed 00:03:19.813 Test: blob_io_unit_compatibility ...passed 00:03:19.813 Test: blob_ext_md_pages ...passed 00:03:20.071 Test: blob_esnap_io_4096_4096 ...passed 00:03:20.071 Test: blob_esnap_io_512_512 ...passed 00:03:20.071 Test: blob_esnap_io_4096_512 ...passed 00:03:20.071 Test: blob_esnap_io_512_4096 ...passed 00:03:20.071 Test: blob_esnap_clone_resize ...passed 00:03:20.071 Suite: blob_bs_nocopy_extent 00:03:20.071 Test: blob_open ...passed 00:03:20.071 Test: blob_create ...[2024-07-15 21:39:35.243795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:20.329 passed 00:03:20.329 Test: blob_create_loop ...passed 00:03:20.329 Test: blob_create_fail ...[2024-07-15 21:39:35.360141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:20.329 passed 00:03:20.329 Test: blob_create_internal ...passed 00:03:20.329 Test: blob_create_zero_extent ...passed 00:03:20.588 Test: blob_snapshot ...passed 00:03:20.588 Test: blob_clone ...passed 00:03:20.588 Test: blob_inflate ...[2024-07-15 21:39:35.624585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:03:20.588 passed 00:03:20.588 Test: blob_delete ...passed 00:03:20.588 Test: blob_resize_test ...[2024-07-15 21:39:35.724126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:20.588 passed 00:03:20.847 Test: blob_resize_thin_test ...passed 00:03:20.847 Test: channel_ops ...passed 00:03:20.847 Test: blob_super ...passed 00:03:20.847 Test: blob_rw_verify_iov ...passed 00:03:20.847 Test: blob_unmap ...passed 00:03:21.106 Test: blob_iter ...passed 00:03:21.106 Test: blob_parse_md ...passed 00:03:21.106 Test: bs_load_pending_removal ...passed 00:03:21.106 Test: bs_unload ...[2024-07-15 21:39:36.178419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:21.106 passed 00:03:21.106 Test: bs_usable_clusters ...passed 00:03:21.106 Test: blob_crc ...[2024-07-15 21:39:36.280243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:21.106 [2024-07-15 21:39:36.280329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:21.375 passed 00:03:21.375 Test: blob_flags ...passed 00:03:21.375 Test: bs_version ...passed 00:03:21.375 Test: blob_set_xattrs_test ...[2024-07-15 21:39:36.436179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:21.375 [2024-07-15 21:39:36.436259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:21.375 passed 00:03:21.375 Test: blob_thin_prov_alloc ...passed 00:03:21.664 Test: blob_insert_cluster_msg_test ...passed 00:03:21.664 Test: blob_thin_prov_rw ...passed 00:03:21.664 Test: blob_thin_prov_rle ...passed 00:03:21.664 Test: blob_thin_prov_rw_iov ...passed 00:03:21.664 Test: blob_snapshot_rw ...passed 00:03:21.664 Test: blob_snapshot_rw_iov ...passed 00:03:21.924 Test: blob_inflate_rw ...passed 00:03:21.924 Test: blob_snapshot_freeze_io ...passed 00:03:21.924 Test: blob_operation_split_rw ...passed 00:03:22.182 Test: blob_operation_split_rw_iov ...passed 00:03:22.182 Test: blob_simultaneous_operations ...[2024-07-15 21:39:37.175435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.182 [2024-07-15 21:39:37.175520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.182 [2024-07-15 21:39:37.176011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.182 [2024-07-15 21:39:37.176022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.182 [2024-07-15 21:39:37.180743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.182 [2024-07-15 21:39:37.180769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.182 [2024-07-15 21:39:37.180790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.182 [2024-07-15 21:39:37.180798] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.182 passed 00:03:22.182 Test: blob_persist_test ...passed 00:03:22.182 Test: blob_decouple_snapshot ...passed 00:03:22.182 Test: blob_seek_io_unit ...passed 00:03:22.441 Test: blob_nested_freezes ...passed 00:03:22.441 Test: blob_clone_resize ...passed 00:03:22.441 Test: blob_shallow_copy ...[2024-07-15 21:39:37.498777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:22.441 [2024-07-15 21:39:37.498862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:22.441 [2024-07-15 21:39:37.498874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:22.441 passed 00:03:22.441 Suite: blob_blob_nocopy_extent 00:03:22.441 Test: blob_write ...passed 00:03:22.441 Test: blob_read ...passed 00:03:22.700 Test: blob_rw_verify ...passed 00:03:22.700 Test: blob_rw_verify_iov_nomem ...passed 00:03:22.700 Test: blob_rw_iov_read_only ...passed 00:03:22.700 Test: blob_xattr ...passed 00:03:22.700 Test: blob_dirty_shutdown ...passed 00:03:22.959 Test: blob_is_degraded ...passed 00:03:22.959 Suite: blob_esnap_bs_nocopy_extent 00:03:22.959 Test: blob_esnap_create ...passed 00:03:22.959 Test: blob_esnap_thread_add_remove ...passed 00:03:22.959 Test: blob_esnap_clone_snapshot ...passed 00:03:22.959 Test: blob_esnap_clone_inflate ...passed 00:03:23.227 Test: blob_esnap_clone_decouple ...passed 00:03:23.227 Test: blob_esnap_clone_reload ...passed 00:03:23.227 Test: blob_esnap_hotplug ...passed 00:03:23.227 Test: blob_set_parent ...[2024-07-15 21:39:38.301742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:23.227 [2024-07-15 21:39:38.301823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:23.227 [2024-07-15 21:39:38.301861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:23.227 [2024-07-15 21:39:38.301872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:23.227 [2024-07-15 21:39:38.301941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:23.227 passed 00:03:23.227 Test: blob_set_external_parent ...[2024-07-15 21:39:38.352350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:23.227 [2024-07-15 21:39:38.352405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:23.227 [2024-07-15 21:39:38.352415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:23.227 [2024-07-15 21:39:38.352477] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:23.227 passed 00:03:23.227 Suite: blob_copy_noextent 00:03:23.227 Test: blob_init ...[2024-07-15 21:39:38.369179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:23.227 passed 00:03:23.227 Test: blob_thin_provision ...passed 00:03:23.488 Test: blob_read_only ...passed 00:03:23.488 Test: bs_load ...[2024-07-15 21:39:38.437296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:23.488 passed 00:03:23.488 Test: bs_load_custom_cluster_size ...passed 00:03:23.488 Test: bs_load_after_failed_grow ...passed 00:03:23.488 Test: bs_cluster_sz ...[2024-07-15 21:39:38.472489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:23.488 [2024-07-15 21:39:38.472611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:23.488 [2024-07-15 21:39:38.472630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:23.488 passed 00:03:23.488 Test: bs_resize_md ...passed 00:03:23.488 Test: bs_destroy ...passed 00:03:23.488 Test: bs_type ...passed 00:03:23.488 Test: bs_super_block ...passed 00:03:23.488 Test: bs_test_recover_cluster_count ...passed 00:03:23.488 Test: bs_grow_live ...passed 00:03:23.488 Test: bs_grow_live_no_space ...passed 00:03:23.488 Test: bs_test_grow ...passed 00:03:23.488 Test: blob_serialize_test ...passed 00:03:23.488 Test: super_block_crc ...passed 00:03:23.488 Test: blob_thin_prov_write_count_io ...passed 00:03:23.747 Test: blob_thin_prov_unmap_cluster ...passed 00:03:23.747 Test: bs_load_iter_test ...passed 00:03:23.747 Test: blob_relations ...[2024-07-15 21:39:38.716184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.716254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 [2024-07-15 21:39:38.716378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.716389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 passed 00:03:23.747 Test: blob_relations2 ...[2024-07-15 21:39:38.733336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.733382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 [2024-07-15 21:39:38.733393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.733400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 [2024-07-15 21:39:38.733546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.733557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 [2024-07-15 21:39:38.733591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.747 [2024-07-15 21:39:38.733599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.747 passed 00:03:23.747 Test: blob_relations3 ...passed 00:03:24.005 Test: blobstore_clean_power_failure ...passed 00:03:24.005 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:39:38.971045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:24.005 [2024-07-15 21:39:38.988462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:24.005 [2024-07-15 21:39:38.988539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:24.005 [2024-07-15 21:39:38.988550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.005 [2024-07-15 21:39:39.005361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:24.005 [2024-07-15 21:39:39.005430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:24.005 [2024-07-15 21:39:39.005439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:24.005 [2024-07-15 21:39:39.005447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.005 [2024-07-15 21:39:39.021964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:24.005 [2024-07-15 21:39:39.022010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.005 [2024-07-15 21:39:39.038868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:24.005 [2024-07-15 21:39:39.038940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.005 [2024-07-15 21:39:39.056019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:24.005 [2024-07-15 21:39:39.056095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.005 passed 00:03:24.005 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:39:39.107635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:24.005 [2024-07-15 21:39:39.140098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:24.005 [2024-07-15 21:39:39.156104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:24.263 passed 
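The *ERROR* lines in the power-failure and bs_cluster_sz output above are expected: these tests deliberately drive the blobstore's validation paths and assert that each call fails. As context for the "Cluster size 4095 is smaller than page size 4096" line, here is a minimal sketch of the kind of call that trips that check, assuming the public blobstore API from spdk/blob.h at this revision; the dev argument and callback wiring are illustrative, not taken from the test source.

#include "spdk/blob.h"

/* Completion callback matching spdk_bs_op_with_handle_complete. */
static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* Expected here: bserrno is negative (e.g. -EINVAL) and bs is NULL,
	 * because the cluster size below is smaller than the 4096-byte
	 * metadata page size checked in bs_alloc. */
}

static void
init_with_bad_cluster_size(struct spdk_bs_dev *dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4095;	/* deliberately below the page size */
	spdk_bs_init(dev, &opts, init_done, NULL);
}

A cluster size below the page size is rejected before any metadata is written, which is why the test still reports "passed": the failure is the behavior under test.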
00:03:24.263 Test: blob_io_unit ...passed 00:03:24.263 Test: blob_io_unit_compatibility ...passed 00:03:24.263 Test: blob_ext_md_pages ...passed 00:03:24.263 Test: blob_esnap_io_4096_4096 ...passed 00:03:24.263 Test: blob_esnap_io_512_512 ...passed 00:03:24.263 Test: blob_esnap_io_4096_512 ...passed 00:03:24.263 Test: blob_esnap_io_512_4096 ...passed 00:03:24.263 Test: blob_esnap_clone_resize ...passed 00:03:24.263 Suite: blob_bs_copy_noextent 00:03:24.522 Test: blob_open ...passed 00:03:24.522 Test: blob_create ...[2024-07-15 21:39:39.499170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:24.522 passed 00:03:24.522 Test: blob_create_loop ...passed 00:03:24.522 Test: blob_create_fail ...[2024-07-15 21:39:39.610968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:24.522 passed 00:03:24.522 Test: blob_create_internal ...passed 00:03:24.781 Test: blob_create_zero_extent ...passed 00:03:24.781 Test: blob_snapshot ...passed 00:03:24.781 Test: blob_clone ...passed 00:03:24.781 Test: blob_inflate ...[2024-07-15 21:39:39.860775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:24.781 passed 00:03:24.781 Test: blob_delete ...passed 00:03:24.781 Test: blob_resize_test ...[2024-07-15 21:39:39.950946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:24.781 passed 00:03:25.040 Test: blob_resize_thin_test ...passed 00:03:25.040 Test: channel_ops ...passed 00:03:25.040 Test: blob_super ...passed 00:03:25.040 Test: blob_rw_verify_iov ...passed 00:03:25.040 Test: blob_unmap ...passed 00:03:25.298 Test: blob_iter ...passed 00:03:25.298 Test: blob_parse_md ...passed 00:03:25.298 Test: bs_load_pending_removal ...passed 00:03:25.298 Test: bs_unload ...[2024-07-15 21:39:40.393551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:25.298 passed 00:03:25.298 Test: bs_usable_clusters ...passed 00:03:25.557 Test: blob_crc ...[2024-07-15 21:39:40.488032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:25.557 [2024-07-15 21:39:40.488100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:25.557 passed 00:03:25.557 Test: blob_flags ...passed 00:03:25.557 Test: bs_version ...passed 00:03:25.557 Test: blob_set_xattrs_test ...[2024-07-15 21:39:40.636078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:25.557 [2024-07-15 21:39:40.636139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:25.557 passed 00:03:25.557 Test: blob_thin_prov_alloc ...passed 00:03:25.816 Test: blob_insert_cluster_msg_test ...passed 00:03:25.816 Test: blob_thin_prov_rw ...passed 00:03:25.816 Test: blob_thin_prov_rle ...passed 00:03:25.816 Test: blob_thin_prov_rw_iov ...passed 00:03:25.816 Test: blob_snapshot_rw ...passed 00:03:26.074 Test: blob_snapshot_rw_iov ...passed 00:03:26.074 Test: 
blob_inflate_rw ...passed 00:03:26.074 Test: blob_snapshot_freeze_io ...passed 00:03:26.074 Test: blob_operation_split_rw ...passed 00:03:26.333 Test: blob_operation_split_rw_iov ...passed 00:03:26.333 Test: blob_simultaneous_operations ...[2024-07-15 21:39:41.340462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.333 [2024-07-15 21:39:41.340533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.333 [2024-07-15 21:39:41.340954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.333 [2024-07-15 21:39:41.340964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.333 [2024-07-15 21:39:41.344189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.333 [2024-07-15 21:39:41.344208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.333 [2024-07-15 21:39:41.344227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.333 [2024-07-15 21:39:41.344235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.333 passed 00:03:26.333 Test: blob_persist_test ...passed 00:03:26.333 Test: blob_decouple_snapshot ...passed 00:03:26.333 Test: blob_seek_io_unit ...passed 00:03:26.591 Test: blob_nested_freezes ...passed 00:03:26.591 Test: blob_clone_resize ...passed 00:03:26.591 Test: blob_shallow_copy ...[2024-07-15 21:39:41.656595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:26.591 [2024-07-15 21:39:41.656673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:26.591 [2024-07-15 21:39:41.656686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:26.591 passed 00:03:26.591 Suite: blob_blob_copy_noextent 00:03:26.591 Test: blob_write ...passed 00:03:26.591 Test: blob_read ...passed 00:03:26.850 Test: blob_rw_verify ...passed 00:03:26.850 Test: blob_rw_verify_iov_nomem ...passed 00:03:26.850 Test: blob_rw_iov_read_only ...passed 00:03:26.850 Test: blob_xattr ...passed 00:03:26.850 Test: blob_dirty_shutdown ...passed 00:03:27.108 Test: blob_is_degraded ...passed 00:03:27.108 Suite: blob_esnap_bs_copy_noextent 00:03:27.108 Test: blob_esnap_create ...passed 00:03:27.108 Test: blob_esnap_thread_add_remove ...passed 00:03:27.108 Test: blob_esnap_clone_snapshot ...passed 00:03:27.108 Test: blob_esnap_clone_inflate ...passed 00:03:27.369 Test: blob_esnap_clone_decouple ...passed 00:03:27.369 Test: blob_esnap_clone_reload ...passed 00:03:27.370 Test: blob_esnap_hotplug ...passed 00:03:27.370 Test: blob_set_parent ...[2024-07-15 21:39:42.442741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:27.370 [2024-07-15 21:39:42.442813] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:27.370 [2024-07-15 21:39:42.442838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:27.370 [2024-07-15 21:39:42.442849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:27.370 [2024-07-15 21:39:42.442914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:27.370 passed 00:03:27.370 Test: blob_set_external_parent ...[2024-07-15 21:39:42.489307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:27.370 [2024-07-15 21:39:42.489363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:27.370 [2024-07-15 21:39:42.489388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:27.370 [2024-07-15 21:39:42.489445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:27.370 passed 00:03:27.370 Suite: blob_copy_extent 00:03:27.370 Test: blob_init ...[2024-07-15 21:39:42.505681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:27.370 passed 00:03:27.370 Test: blob_thin_provision ...passed 00:03:27.370 Test: blob_read_only ...passed 00:03:27.628 Test: bs_load ...[2024-07-15 21:39:42.571244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:27.628 passed 00:03:27.628 Test: bs_load_custom_cluster_size ...passed 00:03:27.628 Test: bs_load_after_failed_grow ...passed 00:03:27.628 Test: bs_cluster_sz ...[2024-07-15 21:39:42.604411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:27.628 [2024-07-15 21:39:42.604498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:27.628 [2024-07-15 21:39:42.604513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:27.628 passed 00:03:27.628 Test: bs_resize_md ...passed 00:03:27.628 Test: bs_destroy ...passed 00:03:27.628 Test: bs_type ...passed 00:03:27.628 Test: bs_super_block ...passed 00:03:27.628 Test: bs_test_recover_cluster_count ...passed 00:03:27.628 Test: bs_grow_live ...passed 00:03:27.628 Test: bs_grow_live_no_space ...passed 00:03:27.628 Test: bs_test_grow ...passed 00:03:27.628 Test: blob_serialize_test ...passed 00:03:27.628 Test: super_block_crc ...passed 00:03:27.628 Test: blob_thin_prov_write_count_io ...passed 00:03:27.628 Test: blob_thin_prov_unmap_cluster ...passed 00:03:27.628 Test: bs_load_iter_test ...passed 00:03:27.910 Test: blob_relations ...[2024-07-15 21:39:42.830970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.831045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 [2024-07-15 21:39:42.831180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.831191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 passed 00:03:27.910 Test: blob_relations2 ...[2024-07-15 21:39:42.848112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.848153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 [2024-07-15 21:39:42.848163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.848170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 [2024-07-15 21:39:42.848317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.848328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 [2024-07-15 21:39:42.848365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:27.910 [2024-07-15 21:39:42.848373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:27.910 passed 00:03:27.910 Test: blob_relations3 ...passed 00:03:27.910 Test: blobstore_clean_power_failure ...passed 00:03:27.910 Test: blob_delete_snapshot_power_failure ...[2024-07-15 21:39:43.070164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:28.174 [2024-07-15 21:39:43.086049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:28.174 [2024-07-15 21:39:43.101901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:28.174 [2024-07-15 21:39:43.101950] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:28.174 [2024-07-15 21:39:43.101959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 [2024-07-15 21:39:43.118134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:28.174 [2024-07-15 21:39:43.118177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:28.174 [2024-07-15 21:39:43.118186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:28.174 [2024-07-15 21:39:43.118194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 [2024-07-15 21:39:43.134673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:28.174 [2024-07-15 21:39:43.134714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:28.174 [2024-07-15 21:39:43.134723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:28.174 [2024-07-15 21:39:43.134730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 [2024-07-15 21:39:43.150910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:28.174 [2024-07-15 21:39:43.150959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 [2024-07-15 21:39:43.166751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:28.174 [2024-07-15 21:39:43.166805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 [2024-07-15 21:39:43.183435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:28.174 [2024-07-15 21:39:43.183480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.174 passed 00:03:28.174 Test: blob_create_snapshot_power_failure ...[2024-07-15 21:39:43.234438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:28.174 [2024-07-15 21:39:43.252337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:28.174 [2024-07-15 21:39:43.286768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:28.174 [2024-07-15 21:39:43.303681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:28.174 passed 00:03:28.433 Test: blob_io_unit ...passed 00:03:28.433 Test: blob_io_unit_compatibility ...passed 00:03:28.433 Test: blob_ext_md_pages ...passed 00:03:28.433 Test: blob_esnap_io_4096_4096 ...passed 00:03:28.433 Test: blob_esnap_io_512_512 ...passed 00:03:28.433 Test: blob_esnap_io_4096_512 ...passed 00:03:28.433 Test: 
blob_esnap_io_512_4096 ...passed 00:03:28.433 Test: blob_esnap_clone_resize ...passed 00:03:28.433 Suite: blob_bs_copy_extent 00:03:28.433 Test: blob_open ...passed 00:03:28.691 Test: blob_create ...[2024-07-15 21:39:43.641712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:28.691 passed 00:03:28.691 Test: blob_create_loop ...passed 00:03:28.691 Test: blob_create_fail ...[2024-07-15 21:39:43.739943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:28.691 passed 00:03:28.691 Test: blob_create_internal ...passed 00:03:28.691 Test: blob_create_zero_extent ...passed 00:03:28.691 Test: blob_snapshot ...passed 00:03:28.949 Test: blob_clone ...passed 00:03:28.949 Test: blob_inflate ...[2024-07-15 21:39:43.915824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:28.949 passed 00:03:28.949 Test: blob_delete ...passed 00:03:28.949 Test: blob_resize_test ...[2024-07-15 21:39:43.982612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:28.949 passed 00:03:28.949 Test: blob_resize_thin_test ...passed 00:03:28.949 Test: channel_ops ...passed 00:03:28.949 Test: blob_super ...passed 00:03:29.208 Test: blob_rw_verify_iov ...passed 00:03:29.208 Test: blob_unmap ...passed 00:03:29.208 Test: blob_iter ...passed 00:03:29.208 Test: blob_parse_md ...passed 00:03:29.208 Test: bs_load_pending_removal ...passed 00:03:29.208 Test: bs_unload ...[2024-07-15 21:39:44.388687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:29.467 passed 00:03:29.467 Test: bs_usable_clusters ...passed 00:03:29.467 Test: blob_crc ...[2024-07-15 21:39:44.487626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:29.467 [2024-07-15 21:39:44.487686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:29.467 passed 00:03:29.467 Test: blob_flags ...passed 00:03:29.467 Test: bs_version ...passed 00:03:29.467 Test: blob_set_xattrs_test ...[2024-07-15 21:39:44.638349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:29.467 [2024-07-15 21:39:44.638412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:29.467 passed 00:03:29.726 Test: blob_thin_prov_alloc ...passed 00:03:29.726 Test: blob_insert_cluster_msg_test ...passed 00:03:29.726 Test: blob_thin_prov_rw ...passed 00:03:29.726 Test: blob_thin_prov_rle ...passed 00:03:29.985 Test: blob_thin_prov_rw_iov ...passed 00:03:29.985 Test: blob_snapshot_rw ...passed 00:03:29.985 Test: blob_snapshot_rw_iov ...passed 00:03:29.985 Test: blob_inflate_rw ...passed 00:03:29.985 Test: blob_snapshot_freeze_io ...passed 00:03:30.243 Test: blob_operation_split_rw ...passed 00:03:30.243 Test: blob_operation_split_rw_iov ...passed 00:03:30.243 Test: blob_simultaneous_operations ...[2024-07-15 21:39:45.347858] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.243 [2024-07-15 21:39:45.347936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.243 [2024-07-15 21:39:45.348399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.243 [2024-07-15 21:39:45.348410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.243 [2024-07-15 21:39:45.351807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.243 [2024-07-15 21:39:45.351824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.243 [2024-07-15 21:39:45.351844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.243 [2024-07-15 21:39:45.351852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.243 passed 00:03:30.243 Test: blob_persist_test ...passed 00:03:30.500 Test: blob_decouple_snapshot ...passed 00:03:30.500 Test: blob_seek_io_unit ...passed 00:03:30.500 Test: blob_nested_freezes ...passed 00:03:30.500 Test: blob_clone_resize ...passed 00:03:30.500 Test: blob_shallow_copy ...[2024-07-15 21:39:45.666995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:30.500 [2024-07-15 21:39:45.667069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:30.500 [2024-07-15 21:39:45.667081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:30.500 passed 00:03:30.500 Suite: blob_blob_copy_extent 00:03:30.758 Test: blob_write ...passed 00:03:30.758 Test: blob_read ...passed 00:03:30.758 Test: blob_rw_verify ...passed 00:03:30.758 Test: blob_rw_verify_iov_nomem ...passed 00:03:30.758 Test: blob_rw_iov_read_only ...passed 00:03:31.016 Test: blob_xattr ...passed 00:03:31.016 Test: blob_dirty_shutdown ...passed 00:03:31.016 Test: blob_is_degraded ...passed 00:03:31.016 Suite: blob_esnap_bs_copy_extent 00:03:31.016 Test: blob_esnap_create ...passed 00:03:31.016 Test: blob_esnap_thread_add_remove ...passed 00:03:31.016 Test: blob_esnap_clone_snapshot ...passed 00:03:31.313 Test: blob_esnap_clone_inflate ...passed 00:03:31.313 Test: blob_esnap_clone_decouple ...passed 00:03:31.313 Test: blob_esnap_clone_reload ...passed 00:03:31.313 Test: blob_esnap_hotplug ...passed 00:03:31.313 Test: blob_set_parent ...[2024-07-15 21:39:46.332855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:31.313 [2024-07-15 21:39:46.332926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:31.313 [2024-07-15 21:39:46.333102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:31.313 
[2024-07-15 21:39:46.333116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:31.313 [2024-07-15 21:39:46.333172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:31.313 passed 00:03:31.313 Test: blob_set_external_parent ...[2024-07-15 21:39:46.367137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:31.313 [2024-07-15 21:39:46.367191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:31.313 [2024-07-15 21:39:46.367200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:31.313 [2024-07-15 21:39:46.367249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:31.313 passed 00:03:31.313 00:03:31.313 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.313 suites 16 16 n/a 0 0 00:03:31.313 tests 376 376 376 0 0 00:03:31.313 asserts 143973 143973 143973 0 n/a 00:03:31.313 00:03:31.313 Elapsed time = 16.773 seconds 00:03:31.313 21:39:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:31.313 00:03:31.313 00:03:31.313 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.313 http://cunit.sourceforge.net/ 00:03:31.313 00:03:31.313 00:03:31.313 Suite: blob_bdev 00:03:31.313 Test: create_bs_dev ...passed 00:03:31.313 Test: create_bs_dev_ro ...[2024-07-15 21:39:46.390227] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:31.313 passed 00:03:31.313 Test: create_bs_dev_rw ...passed 00:03:31.313 Test: claim_bs_dev ...passed 00:03:31.313 Test: claim_bs_dev_ro ...[2024-07-15 21:39:46.390960] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:31.313 passed 00:03:31.313 Test: deferred_destroy_refs ...passed 00:03:31.313 Test: deferred_destroy_channels ...passed 00:03:31.313 Test: deferred_destroy_threads ...passed 00:03:31.313 00:03:31.313 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.313 suites 1 1 n/a 0 0 00:03:31.313 tests 8 8 8 0 0 00:03:31.313 asserts 119 119 119 0 n/a 00:03:31.313 00:03:31.313 Elapsed time = 0.000 seconds 00:03:31.313 21:39:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:31.313 00:03:31.313 00:03:31.313 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.313 http://cunit.sourceforge.net/ 00:03:31.313 00:03:31.313 00:03:31.313 Suite: tree 00:03:31.313 Test: blobfs_tree_op_test ...passed 00:03:31.313 00:03:31.313 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.313 suites 1 1 n/a 0 0 00:03:31.313 tests 1 1 1 0 0 00:03:31.313 asserts 27 27 27 0 n/a 00:03:31.313 00:03:31.313 Elapsed time = 0.000 seconds 00:03:31.313 21:39:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:31.313 00:03:31.313 00:03:31.313 
CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.313 http://cunit.sourceforge.net/ 00:03:31.313 00:03:31.313 00:03:31.313 Suite: blobfs_async_ut 00:03:31.313 Test: fs_init ...passed 00:03:31.313 Test: fs_open ...passed 00:03:31.593 Test: fs_create ...passed 00:03:31.593 Test: fs_truncate ...passed 00:03:31.593 Test: fs_rename ...[2024-07-15 21:39:46.504697] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:31.593 passed 00:03:31.593 Test: fs_rw_async ...passed 00:03:31.593 Test: fs_writev_readv_async ...passed 00:03:31.593 Test: tree_find_buffer_ut ...passed 00:03:31.593 Test: channel_ops ...passed 00:03:31.593 Test: channel_ops_sync ...passed 00:03:31.593 00:03:31.593 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.593 suites 1 1 n/a 0 0 00:03:31.593 tests 10 10 10 0 0 00:03:31.593 asserts 292 292 292 0 n/a 00:03:31.594 00:03:31.594 Elapsed time = 0.156 seconds 00:03:31.594 21:39:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:31.594 00:03:31.594 00:03:31.594 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.594 http://cunit.sourceforge.net/ 00:03:31.594 00:03:31.594 00:03:31.594 Suite: blobfs_sync_ut 00:03:31.594 Test: cache_read_after_write ...[2024-07-15 21:39:46.612805] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:31.594 passed 00:03:31.594 Test: file_length ...passed 00:03:31.594 Test: append_write_to_extend_blob ...passed 00:03:31.594 Test: partial_buffer ...passed 00:03:31.594 Test: cache_write_null_buffer ...passed 00:03:31.594 Test: fs_create_sync ...passed 00:03:31.594 Test: fs_rename_sync ...passed 00:03:31.594 Test: cache_append_no_cache ...passed 00:03:31.594 Test: fs_delete_file_without_close ...passed 00:03:31.594 00:03:31.594 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.594 suites 1 1 n/a 0 0 00:03:31.594 tests 9 9 9 0 0 00:03:31.594 asserts 345 345 345 0 n/a 00:03:31.594 00:03:31.594 Elapsed time = 0.281 seconds 00:03:31.594 21:39:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:31.594 00:03:31.594 00:03:31.594 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.594 http://cunit.sourceforge.net/ 00:03:31.594 00:03:31.594 00:03:31.594 Suite: blobfs_bdev_ut 00:03:31.594 Test: spdk_blobfs_bdev_detect_test ...passed 00:03:31.594 Test: spdk_blobfs_bdev_create_test ...passed 00:03:31.594 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:31.594 00:03:31.594 [2024-07-15 21:39:46.723620] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:31.594 [2024-07-15 21:39:46.723867] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:31.594 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.594 suites 1 1 n/a 0 0 00:03:31.594 tests 3 3 3 0 0 00:03:31.594 asserts 9 9 9 0 n/a 00:03:31.594 00:03:31.594 Elapsed time = 0.000 seconds 00:03:31.594 00:03:31.594 real 0m17.136s 00:03:31.594 user 0m17.151s 00:03:31.594 sys 0m0.136s 00:03:31.594 21:39:46 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.594 
************************************ 00:03:31.594 END TEST unittest_blob_blobfs 00:03:31.594 ************************************ 00:03:31.594 21:39:46 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:31.594 21:39:46 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:31.594 21:39:46 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:31.594 21:39:46 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.594 21:39:46 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.594 21:39:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:31.594 ************************************ 00:03:31.594 START TEST unittest_event 00:03:31.594 ************************************ 00:03:31.594 21:39:46 unittest.unittest_event -- common/autotest_common.sh@1117 -- # unittest_event 00:03:31.594 21:39:46 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:31.594 00:03:31.594 00:03:31.594 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.594 http://cunit.sourceforge.net/ 00:03:31.594 00:03:31.594 00:03:31.594 Suite: app_suite 00:03:31.594 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:31.594 00:03:31.594 CPU options: 00:03:31.594 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:31.594 (like [0,1,10]) 00:03:31.594 --lcores lcore to CPU mapping list. The list is in the format: 00:03:31.594 [<,lcores[@CPUs]>...] 00:03:31.594 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:31.594 Within the group, '-' is used for range separator, 00:03:31.594 ',' is used for single number separator. 00:03:31.594 '( )' can be omitted for single element group, 00:03:31.594 '@' can be omitted if cpus and lcores have the same value 00:03:31.594 --disable-cpumask-locks Disable CPU core lock files. 00:03:31.594 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:31.594 pollers in the app support interrupt mode) 00:03:31.594 -p, --main-core main (primary) core for DPDK 00:03:31.594 00:03:31.594 Configuration options: 00:03:31.594 -c, --config, --json JSON config file 00:03:31.594 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:31.594 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:31.594 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:31.594 --rpcs-allowed comma-separated list of permitted RPCS 00:03:31.594 --json-ignore-init-errors don't exit on invalid config entry 00:03:31.594 00:03:31.594 Memory options: 00:03:31.594 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:31.594 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:31.594 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:31.594 -R, --huge-unlink unlink huge files after initialization 00:03:31.594 -n, --mem-channels number of memory channels used for DPDK 00:03:31.594 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:31.594 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:31.594 --no-huge run without using hugepages 00:03:31.594 -i, --shm-id shared memory ID (optional) 00:03:31.594 -g, --single-file-segments force creating just one hugetlbfs file 00:03:31.594 00:03:31.594 PCI options: 00:03:31.594 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:31.594 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:31.594 -u, --no-pci disable PCI access 00:03:31.594 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:31.594 00:03:31.594 Log options: 00:03:31.594 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:31.594 --silence-noticelog disable notice level logging to stderr 00:03:31.594 00:03:31.594 Trace options: 00:03:31.594 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:31.594 setting 0 to disable trace (default 32768) 00:03:31.594 Tracepoints vary in size and can use more than one trace entry. 00:03:31.594 -e, --tpoint-group [:] 00:03:31.594 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:31.594 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:31.594 a tracepoint group. First tpoint inside a group can be enabled by 00:03:31.594 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:31.594 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:31.594 in /include/spdk_internal/trace_defs.h 00:03:31.594 00:03:31.594 Other options: 00:03:31.594 -h, --help show this usage 00:03:31.594 -v, --version print SPDK version 00:03:31.594 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:31.594 --env-context Opaque context for use of the env implementation 00:03:31.594 app_ut [options] 00:03:31.594 00:03:31.594 CPU options: 00:03:31.594 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:31.594 (like [0,1,10]) 00:03:31.594 --lcores lcore to CPU mapping list. The list is in the format: 00:03:31.594 [<,lcores[@CPUs]>...] 00:03:31.594 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:31.594 Within the group, '-' is used for range separator, 00:03:31.594 ',' is used for single number separator. 00:03:31.594 '( )' can be omitted for single element group, 00:03:31.594 '@' can be omitted if cpus and lcores have the same value 00:03:31.594 --disable-cpumask-locks Disable CPU core lock files. 
00:03:31.594 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:31.594 pollers in the app support interrupt mode) 00:03:31.594 -p, --main-core main (primary) core for DPDK 00:03:31.594 00:03:31.594 Configuration options: 00:03:31.594 -c, --config, --json JSON config file 00:03:31.594 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:31.594 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:31.594 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:31.594 --rpcs-allowed comma-separated list of permitted RPCS 00:03:31.594 --json-ignore-init-errors don't exit on invalid config entry 00:03:31.594 00:03:31.594 Memory options: 00:03:31.594 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:31.594 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:31.594 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:31.594 -R, --huge-unlink unlink huge files after initialization 00:03:31.594 -n, --mem-channels number of memory channels used for DPDK 00:03:31.594 -s, --mem-size memory size in MB for DPDK (default: app_ut: invalid option -- z 00:03:31.594 app_ut: unrecognized option `--test-long-opt' 00:03:31.594 all hugepage memory) 00:03:31.594 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:31.594 --no-huge run without using hugepages 00:03:31.594 -i, --shm-id shared memory ID (optional) 00:03:31.594 -g, --single-file-segments force creating just one hugetlbfs file 00:03:31.594 00:03:31.595 PCI options: 00:03:31.595 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:31.595 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:31.595 -u, --no-pci disable PCI access 00:03:31.595 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:31.595 00:03:31.595 Log options: 00:03:31.595 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:31.595 --silence-noticelog disable notice level logging to stderr 00:03:31.595 00:03:31.595 Trace options: 00:03:31.595 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:31.595 setting 0 to disable trace (default 32768) 00:03:31.595 Tracepoints vary in size and can use more than one trace entry. 00:03:31.595 -e, --tpoint-group [:] 00:03:31.595 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:31.595 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:31.595 a tracepoint group. First tpoint inside a group can be enabled by 00:03:31.595 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:31.595 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:31.595 in /include/spdk_internal/trace_defs.h 00:03:31.595 00:03:31.595 Other options: 00:03:31.595 -h, --help show this usage 00:03:31.595 -v, --version print SPDK version 00:03:31.595 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:31.595 --env-context Opaque context for use of the env implementation 00:03:31.595 [2024-07-15 21:39:46.771190] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
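The repeated usage dumps in this test are the expected output of test_spdk_app_parse_args feeding deliberately bad argument vectors into the parser: a duplicated 'c' option, an unknown -z, an unrecognized --test-long-opt, -B combined with -W, and an invalid main core. For context, a hedged sketch of the normal calling pattern being exercised follows; the "z" flag and the two app callbacks are illustrative placeholders, not part of the test itself.

#include "spdk/stdinc.h"
#include "spdk/event.h"

/* Application-specific option handler for the flags named in
 * app_getopt_str below; return 0 to accept the option. */
static int
app_parse(int ch, char *arg)
{
	return 0;
}

static void
app_usage(void)
{
	printf(" -z    illustrative app-specific flag\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "app_ut_example";

	/* Reusing a character that SPDK already claims for its generic
	 * options (such as "c") in app_getopt_str is what produces the
	 * "Duplicated option 'c'" error logged above. */
	if (spdk_app_parse_args(argc, argv, &opts, "z", NULL,
				app_parse, app_usage) !=
	    SPDK_APP_PARSE_ARGS_SUCCESS) {
		return 1;
	}
	return 0;
}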
00:03:31.595 app_ut [options] 00:03:31.595 00:03:31.595 CPU options: 00:03:31.595 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:31.595 (like [0,1,10]) 00:03:31.595 --lcores lcore to CPU mapping list. The list is in the format: 00:03:31.595 [<,lcores[@CPUs]>...] 00:03:31.595 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:31.595 Within the group, '-' is used for range separator, 00:03:31.595 ',' is used for single number separator. 00:03:31.595 '( )' can be omitted for single element group, 00:03:31.595 '@' can be omitted if cpus and lcores have the same value 00:03:31.595 --disable-cpumask-locks Disable CPU core lock files. 00:03:31.595 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:31.595 pollers in the app support interrupt mode) 00:03:31.595 -p, --main-core main (primary) core for DPDK 00:03:31.595 00:03:31.595 Configuration options: 00:03:31.595 -c, --config, --json JSON config file 00:03:31.595 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:31.595 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:31.595 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:31.595 --rpcs-allowed comma-separated list of permitted RPCS 00:03:31.595 --json-ignore-init-errors don't exit on invalid config entry 00:03:31.595 00:03:31.595 Memory options: 00:03:31.595 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:31.595 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:31.595 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:31.595 -R, --huge-unlink unlink huge files after initialization 00:03:31.595 -n, --mem-channels number of memory channels used for DPDK 00:03:31.595 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:31.595 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:31.595 --no-huge run without using hugepages 00:03:31.595 -i, --shm-id shared memory ID (optional) 00:03:31.595 -g, --single-file-segments force creating just one hugetlbfs file 00:03:31.595 00:03:31.595 PCI options: 00:03:31.595 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:31.595 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:31.595 -u, --no-pci disable PCI access 00:03:31.595 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:31.595 00:03:31.595 Log options: 00:03:31.595 -L, --logflag enable log flag (all, app_rpc, json_util[2024-07-15 21:39:46.771499] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:31.595 , rpc, thread, trace) 00:03:31.595 --silence-noticelog disable notice level logging to stderr 00:03:31.595 00:03:31.595 Trace options: 00:03:31.595 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:31.595 setting 0 to disable trace (default 32768) 00:03:31.595 Tracepoints vary in size and can use more than one trace entry. 00:03:31.595 -e, --tpoint-group [:] 00:03:31.595 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:31.595 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:31.595 a tracepoint group. First tpoint inside a group can be enabled by 00:03:31.595 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:03:31.595 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:31.595 in /include/spdk_internal/trace_defs.h 00:03:31.595 00:03:31.595 Other options: 00:03:31.595 -h, --help show this usage 00:03:31.595 -v, --version print SPDK version 00:03:31.595 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:31.595 --env-context Opaque context for use of the env implementation 00:03:31.595 passed 00:03:31.595 00:03:31.595 [2024-07-15 21:39:46.771681] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:31.595 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.595 suites 1 1 n/a 0 0 00:03:31.595 tests 1 1 1 0 0 00:03:31.595 asserts 8 8 8 0 n/a 00:03:31.595 00:03:31.595 Elapsed time = 0.000 seconds 00:03:31.595 21:39:46 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:31.595 00:03:31.595 00:03:31.595 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.595 http://cunit.sourceforge.net/ 00:03:31.595 00:03:31.595 00:03:31.595 Suite: app_suite 00:03:31.595 Test: test_create_reactor ...passed 00:03:31.595 Test: test_init_reactors ...passed 00:03:31.595 Test: test_event_call ...passed 00:03:31.595 Test: test_schedule_thread ...passed 00:03:31.595 Test: test_reschedule_thread ...passed 00:03:31.595 Test: test_bind_thread ...passed 00:03:31.595 Test: test_for_each_reactor ...passed 00:03:31.595 Test: test_reactor_stats ...passed 00:03:31.595 Test: test_scheduler ...passed 00:03:31.855 Test: test_governor ...passed 00:03:31.855 00:03:31.855 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.855 suites 1 1 n/a 0 0 00:03:31.855 tests 10 10 10 0 0 00:03:31.855 asserts 336 336 336 0 n/a 00:03:31.855 00:03:31.855 Elapsed time = 0.000 seconds 00:03:31.855 00:03:31.855 real 0m0.017s 00:03:31.855 user 0m0.001s 00:03:31.855 sys 0m0.024s 00:03:31.855 21:39:46 unittest.unittest_event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.855 21:39:46 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:31.855 ************************************ 00:03:31.855 END TEST unittest_event 00:03:31.855 ************************************ 00:03:31.855 21:39:46 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:31.855 21:39:46 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:31.855 21:39:46 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:31.855 21:39:46 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:31.855 21:39:46 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.855 21:39:46 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.855 21:39:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:31.855 ************************************ 00:03:31.855 START TEST unittest_accel 00:03:31.855 ************************************ 00:03:31.855 21:39:46 unittest.unittest_accel -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:31.855 00:03:31.855 00:03:31.855 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.855 http://cunit.sourceforge.net/ 00:03:31.855 00:03:31.855 00:03:31.855 Suite: accel_sequence 00:03:31.855 Test: test_sequence_fill_copy ...passed 00:03:31.855 Test: test_sequence_abort ...passed 00:03:31.855 Test: 
test_sequence_append_error ...passed 00:03:31.855 Test: test_sequence_completion_error ...[2024-07-15 21:39:46.831149] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x28c573ace100 00:03:31.855 [2024-07-15 21:39:46.831394] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x28c573ace100 00:03:31.856 [2024-07-15 21:39:46.831415] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x28c573ace100 00:03:31.856 [2024-07-15 21:39:46.831429] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x28c573ace100 00:03:31.856 passed 00:03:31.856 Test: test_sequence_decompress ...passed 00:03:31.856 Test: test_sequence_reverse ...passed 00:03:31.856 Test: test_sequence_copy_elision ...passed 00:03:31.856 Test: test_sequence_accel_buffers ...passed 00:03:31.856 Test: test_sequence_memory_domain ...[2024-07-15 21:39:46.832938] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1762:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:31.856 passed 00:03:31.856 Test: test_sequence_module_memory_domain ...[2024-07-15 21:39:46.832986] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1801:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:31.856 passed 00:03:31.856 Test: test_sequence_crypto ...passed 00:03:31.856 Test: test_sequence_driver ...[2024-07-15 21:39:46.833822] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1909:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x28c573acef00 using driver: ut 00:03:31.856 passed 00:03:31.856 Test: test_sequence_same_iovs ...[2024-07-15 21:39:46.833869] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1974:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x28c573acef00 through driver: ut 00:03:31.856 passed 00:03:31.856 Test: test_sequence_crc32 ...passed 00:03:31.856 Suite: accel 00:03:31.856 Test: test_spdk_accel_task_complete ...passed 00:03:31.856 Test: test_get_task ...passed 00:03:31.856 Test: test_spdk_accel_submit_copy ...passed 00:03:31.856 Test: test_spdk_accel_submit_dualcast ...passed 00:03:31.856 Test: test_spdk_accel_submit_compare ...passed 00:03:31.856 Test: test_spdk_accel_submit_fill ...passed 00:03:31.856 Test: test_spdk_accel_submit_crc32c ...passed 00:03:31.856 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:31.856 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:31.856 Test: test_spdk_accel_submit_xor ...passed 00:03:31.856 Test: test_spdk_accel_module_find_by_name ...passed 00:03:31.856 Test: test_spdk_accel_module_register ...[2024-07-15 21:39:46.834535] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:31.856 [2024-07-15 21:39:46.834557] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:31.856 passed 00:03:31.856 00:03:31.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.856 suites 2 2 n/a 0 0 00:03:31.856 tests 26 26 26 0 0 00:03:31.856 asserts 830 830 830 0 n/a 00:03:31.856 00:03:31.856 Elapsed time = 0.008 seconds 00:03:31.856 00:03:31.856 real 0m0.012s 00:03:31.856 user 0m0.004s 00:03:31.856 sys 
0m0.010s 00:03:31.856 21:39:46 unittest.unittest_accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.856 21:39:46 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 END TEST unittest_accel 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:31.856 21:39:46 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 START TEST unittest_ioat 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest.unittest_ioat -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:31.856 00:03:31.856 00:03:31.856 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.856 http://cunit.sourceforge.net/ 00:03:31.856 00:03:31.856 00:03:31.856 Suite: ioat 00:03:31.856 Test: ioat_state_check ...passed 00:03:31.856 00:03:31.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.856 suites 1 1 n/a 0 0 00:03:31.856 tests 1 1 1 0 0 00:03:31.856 asserts 32 32 32 0 n/a 00:03:31.856 00:03:31.856 Elapsed time = 0.000 seconds 00:03:31.856 00:03:31.856 real 0m0.005s 00:03:31.856 user 0m0.004s 00:03:31.856 sys 0m0.004s 00:03:31.856 21:39:46 unittest.unittest_ioat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.856 21:39:46 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 END TEST unittest_ioat 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:31.856 21:39:46 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:31.856 21:39:46 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 START TEST unittest_idxd_user 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest.unittest_idxd_user -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:31.856 00:03:31.856 00:03:31.856 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.856 http://cunit.sourceforge.net/ 00:03:31.856 00:03:31.856 00:03:31.856 Suite: idxd_user 00:03:31.856 Test: test_idxd_wait_cmd ...[2024-07-15 21:39:46.933822] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:31.856 passed 00:03:31.856 Test: test_idxd_reset_dev ...passed 00:03:31.856 Test: test_idxd_group_config ...passed 00:03:31.856 Test: test_idxd_wq_config ...passed 00:03:31.856 [2024-07-15 21:39:46.934078] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 
46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:31.856 [2024-07-15 21:39:46.934112] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:31.856 [2024-07-15 21:39:46.934129] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:31.856 00:03:31.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.856 suites 1 1 n/a 0 0 00:03:31.856 tests 4 4 4 0 0 00:03:31.856 asserts 20 20 20 0 n/a 00:03:31.856 00:03:31.856 Elapsed time = 0.000 seconds 00:03:31.856 00:03:31.856 real 0m0.006s 00:03:31.856 user 0m0.000s 00:03:31.856 sys 0m0.008s 00:03:31.856 21:39:46 unittest.unittest_idxd_user -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.856 21:39:46 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 END TEST unittest_idxd_user 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:31.856 21:39:46 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.856 21:39:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:31.856 ************************************ 00:03:31.856 START TEST unittest_iscsi 00:03:31.856 ************************************ 00:03:31.856 21:39:46 unittest.unittest_iscsi -- common/autotest_common.sh@1117 -- # unittest_iscsi 00:03:31.856 21:39:46 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:31.856 00:03:31.856 00:03:31.856 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.856 http://cunit.sourceforge.net/ 00:03:31.856 00:03:31.856 00:03:31.856 Suite: conn_suite 00:03:31.856 Test: read_task_split_in_order_case ...passed 00:03:31.856 Test: read_task_split_reverse_order_case ...passed 00:03:31.856 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:31.856 Test: process_non_read_task_completion_test ...passed 00:03:31.856 Test: free_tasks_on_connection ...passed 00:03:31.856 Test: free_tasks_with_queued_datain ...passed 00:03:31.856 Test: abort_queued_datain_task_test ...passed 00:03:31.856 Test: abort_queued_datain_tasks_test ...passed 00:03:31.856 00:03:31.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.856 suites 1 1 n/a 0 0 00:03:31.856 tests 8 8 8 0 0 00:03:31.856 asserts 230 230 230 0 n/a 00:03:31.856 00:03:31.856 Elapsed time = 0.000 seconds 00:03:31.856 21:39:46 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:31.856 00:03:31.856 00:03:31.856 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.856 http://cunit.sourceforge.net/ 00:03:31.856 00:03:31.856 00:03:31.856 Suite: iscsi_suite 00:03:31.856 Test: param_negotiation_test ...passed 00:03:31.856 Test: list_negotiation_test ...passed 00:03:31.856 Test: parse_valid_test ...passed 00:03:31.856 Test: parse_invalid_test ...[2024-07-15 21:39:46.986365] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:31.856 [2024-07-15 21:39:46.986626] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:31.856 
[2024-07-15 21:39:46.986649] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:31.856 passed 00:03:31.856 00:03:31.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.856 suites 1 1 n/a 0 0 00:03:31.856 tests 4 4 4 0 0 00:03:31.856 asserts 161 161 161 0 n/a 00:03:31.856 00:03:31.856 Elapsed time = 0.000 seconds 00:03:31.856 [2024-07-15 21:39:46.986682] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:31.856 [2024-07-15 21:39:46.986701] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:31.856 [2024-07-15 21:39:46.986717] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:31.856 [2024-07-15 21:39:46.986733] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:31.856 21:39:46 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:31.856 00:03:31.856 00:03:31.856 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.856 http://cunit.sourceforge.net/ 00:03:31.856 00:03:31.856 00:03:31.856 Suite: iscsi_target_node_suite 00:03:31.857 Test: add_lun_test_cases ...[2024-07-15 21:39:46.991712] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:31.857 [2024-07-15 21:39:46.991923] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:31.857 [2024-07-15 21:39:46.991943] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:31.857 passed 00:03:31.857 Test: allow_any_allowed ...passed 00:03:31.857 Test: allow_ipv6_allowed ...passed 00:03:31.857 Test: allow_ipv6_denied ...passed 00:03:31.857 Test: allow_ipv6_invalid ...passed 00:03:31.857 Test: allow_ipv4_allowed ...passed 00:03:31.857 Test: allow_ipv4_denied ...passed 00:03:31.857 Test: allow_ipv4_invalid ...passed 00:03:31.857 Test: node_access_allowed ...passed 00:03:31.857 Test: node_access_denied_by_empty_netmask ...passed 00:03:31.857 Test: node_access_multi_initiator_groups_cases ...passed 00:03:31.857 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:31.857 Test: chap_param_test_cases ...[2024-07-15 21:39:46.991957] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:31.857 [2024-07-15 21:39:46.991969] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:31.857 [2024-07-15 21:39:46.992076] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:31.857 [2024-07-15 21:39:46.992098] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:31.857 [2024-07-15 21:39:46.992112] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:31.857 [2024-07-15 21:39:46.992124] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:31.857 [2024-07-15 21:39:46.992135] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:31.857 passed 00:03:31.857 00:03:31.857 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.857 suites 1 1 n/a 0 0 00:03:31.857 tests 13 13 13 0 0 00:03:31.857 asserts 50 50 50 0 n/a 00:03:31.857 00:03:31.857 Elapsed time = 0.000 seconds 00:03:31.857 21:39:46 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:31.857 00:03:31.857 00:03:31.857 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.857 http://cunit.sourceforge.net/ 00:03:31.857 00:03:31.857 00:03:31.857 Suite: iscsi_suite 00:03:31.857 Test: op_login_check_target_test ...passed 00:03:31.857 Test: op_login_session_normal_test ...passed 00:03:31.857 Test: maxburstlength_test ...[2024-07-15 21:39:46.998757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:03:31.857 [2024-07-15 21:39:46.999054] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:31.857 [2024-07-15 21:39:46.999091] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:31.857 [2024-07-15 21:39:46.999120] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:31.857 [2024-07-15 21:39:46.999179] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:31.857 [2024-07-15 21:39:46.999199] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:31.857 [2024-07-15 21:39:46.999240] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:31.857 [2024-07-15 21:39:46.999257] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:31.857 [2024-07-15 21:39:46.999323] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:31.857 [2024-07-15 21:39:46.999342] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:31.857 passed 00:03:31.857 Test: underflow_for_read_transfer_test ...passed 00:03:31.857 Test: underflow_for_zero_read_transfer_test ...passed 00:03:31.857 Test: underflow_for_request_sense_test ...passed 00:03:31.857 Test: underflow_for_check_condition_test ...passed 00:03:31.857 Test: add_transfer_task_test ...passed 00:03:31.857 Test: get_transfer_task_test ...passed 00:03:31.857 Test: del_transfer_task_test ...passed 00:03:31.857 Test: clear_all_transfer_tasks_test ...passed 00:03:31.857 Test: build_iovs_test ...passed 00:03:31.857 Test: build_iovs_with_md_test ...passed 00:03:31.857 Test: pdu_hdr_op_login_test ...passed 00:03:31.857 Test: pdu_hdr_op_text_test ...passed 00:03:31.857 Test: pdu_hdr_op_logout_test ...passed 00:03:31.857 Test: pdu_hdr_op_scsi_test ...passed 00:03:31.857 Test: pdu_hdr_op_task_mgmt_test ...passed 00:03:31.857 Test: pdu_hdr_op_nopout_test ...passed 00:03:31.857 Test: pdu_hdr_op_data_test ...passed 00:03:31.857 Test: empty_text_with_cbit_test 
...passed 00:03:31.857 Test: pdu_payload_read_test ...[2024-07-15 21:39:46.999513] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:31.857 [2024-07-15 21:39:46.999526] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:31.857 [2024-07-15 21:39:46.999533] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:31.857 [2024-07-15 21:39:46.999545] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:31.857 [2024-07-15 21:39:46.999553] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:31.857 [2024-07-15 21:39:46.999560] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:31.857 [2024-07-15 21:39:46.999569] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:31.857 [2024-07-15 21:39:46.999579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:31.857 [2024-07-15 21:39:46.999586] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:31.857 [2024-07-15 21:39:46.999593] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:31.857 [2024-07-15 21:39:46.999600] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:31.857 [2024-07-15 21:39:46.999607] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:31.857 [2024-07-15 21:39:46.999615] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:31.857 [2024-07-15 21:39:46.999623] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:31.857 [2024-07-15 21:39:46.999631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:31.857 [2024-07-15 21:39:46.999641] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:31.857 [2024-07-15 21:39:46.999648] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:31.857 [2024-07-15 21:39:46.999655] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:31.857 [2024-07-15 21:39:46.999661] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:31.857 [2024-07-15 21:39:46.999669] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:31.857 [2024-07-15 21:39:46.999676] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:31.857 [2024-07-15 21:39:46.999683] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:31.857 [2024-07-15 21:39:46.999689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:31.857 [2024-07-15 21:39:46.999696] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:31.857 [2024-07-15 21:39:46.999703] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:31.857 [2024-07-15 21:39:46.999709] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:31.857 [2024-07-15 21:39:46.999970] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:31.857 passed 00:03:31.857 Test: data_out_pdu_sequence_test ...passed 00:03:31.857 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:31.857 00:03:31.857 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.857 suites 1 1 n/a 0 0 00:03:31.857 tests 24 24 24 0 0 00:03:31.857 asserts 150253 150253 150253 0 n/a 00:03:31.857 00:03:31.857 Elapsed time = 0.000 seconds 00:03:31.857 21:39:47 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:31.857 00:03:31.857 00:03:31.857 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.857 http://cunit.sourceforge.net/ 00:03:31.857 00:03:31.857 00:03:31.857 Suite: init_grp_suite 00:03:31.857 Test: create_initiator_group_success_case ...passed 00:03:31.857 Test: find_initiator_group_success_case ...passed 00:03:31.857 Test: register_initiator_group_twice_case ...passed 00:03:31.857 Test: add_initiator_name_success_case ...passed 00:03:31.857 Test: add_initiator_name_fail_case ...passed 00:03:31.857 Test: delete_all_initiator_names_success_case ...passed 00:03:31.857 Test: add_netmask_success_case ...passed 00:03:31.857 Test: add_netmask_fail_case ...passed 00:03:31.857 Test: delete_all_netmasks_success_case ...passed 00:03:31.857 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:31.857 Test: netmask_overwrite_all_to_any_case ...passed 00:03:31.857 Test: add_delete_initiator_names_case ...passed 00:03:31.857 Test: add_duplicated_initiator_names_case ...passed 00:03:31.857 Test: delete_nonexisting_initiator_names_case ...passed 00:03:31.857 Test: add_delete_netmasks_case ...passed 00:03:31.857 Test: add_duplicated_netmasks_case ...passed 00:03:31.857 Test: delete_nonexisting_netmasks_case ...passed 00:03:31.857 00:03:31.857 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.857 suites 1 1 n/a 0 0 00:03:31.857 tests 17 17 17 0 0 00:03:31.857 asserts 108 108 108 0 n/a 00:03:31.857 00:03:31.857 Elapsed time = 0.000 seconds 00:03:31.857 [2024-07-15 21:39:47.006507] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:31.857 [2024-07-15 21:39:47.006696] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:31.857 21:39:47 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:31.857 00:03:31.857 00:03:31.858 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.858 http://cunit.sourceforge.net/ 00:03:31.858 00:03:31.858 00:03:31.858 Suite: portal_grp_suite 00:03:31.858 Test: portal_create_ipv4_normal_case ...passed 00:03:31.858 Test: portal_create_ipv6_normal_case ...passed 00:03:31.858 Test: portal_create_ipv4_wildcard_case ...passed 00:03:31.858 Test: portal_create_ipv6_wildcard_case ...passed 00:03:31.858 Test: portal_create_twice_case ...passed 00:03:31.858 Test: portal_grp_register_unregister_case ...passed 00:03:31.858 Test: portal_grp_register_twice_case ...passed 00:03:31.858 Test: portal_grp_add_delete_case ...passed 00:03:31.858 Test: portal_grp_add_delete_twice_case ...[2024-07-15 21:39:47.011939] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:31.858 passed 00:03:31.858 00:03:31.858 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.858 suites 1 1 n/a 0 0 00:03:31.858 tests 9 9 9 0 0 00:03:31.858 asserts 44 44 44 0 n/a 00:03:31.858 00:03:31.858 Elapsed time = 0.000 seconds 00:03:31.858 00:03:31.858 real 0m0.038s 00:03:31.858 user 0m0.015s 00:03:31.858 sys 0m0.024s 00:03:31.858 21:39:47 unittest.unittest_iscsi -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.858 21:39:47 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:31.858 ************************************ 00:03:31.858 END TEST unittest_iscsi 00:03:31.858 ************************************ 00:03:32.117 21:39:47 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:32.117 21:39:47 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:32.117 21:39:47 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:32.117 21:39:47 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:32.117 21:39:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:32.117 ************************************ 00:03:32.117 START TEST unittest_json 00:03:32.117 ************************************ 00:03:32.117 21:39:47 unittest.unittest_json -- common/autotest_common.sh@1117 -- # unittest_json 00:03:32.117 21:39:47 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:32.117 00:03:32.117 00:03:32.117 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.117 http://cunit.sourceforge.net/ 00:03:32.117 00:03:32.117 00:03:32.117 Suite: json 00:03:32.117 Test: test_parse_literal ...passed 00:03:32.117 Test: test_parse_string_simple ...passed 00:03:32.117 Test: test_parse_string_control_chars ...passed 00:03:32.117 Test: test_parse_string_utf8 ...passed 00:03:32.117 Test: test_parse_string_escapes_twochar ...passed 00:03:32.117 Test: test_parse_string_escapes_unicode ...passed 00:03:32.117 Test: test_parse_number ...passed 00:03:32.117 Test: test_parse_array ...passed 00:03:32.117 Test: test_parse_object ...passed 00:03:32.117 Test: test_parse_nesting ...passed 00:03:32.117 Test: test_parse_comment ...passed 00:03:32.117 00:03:32.117 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.117 suites 1 1 n/a 0 0 00:03:32.117 tests 11 11 11 0 0 00:03:32.117 asserts 1516 1516 1516 0 n/a 00:03:32.117 00:03:32.117 Elapsed time = 0.000 seconds 00:03:32.117 21:39:47 unittest.unittest_json -- unit/unittest.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:32.117 00:03:32.117 00:03:32.117 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.117 http://cunit.sourceforge.net/ 00:03:32.117 00:03:32.117 00:03:32.117 Suite: json 00:03:32.117 Test: test_strequal ...passed 00:03:32.117 Test: test_num_to_uint16 ...passed 00:03:32.117 Test: test_num_to_int32 ...passed 00:03:32.117 Test: test_num_to_uint64 ...passed 00:03:32.117 Test: test_decode_object ...passed 00:03:32.117 Test: test_decode_array ...passed 00:03:32.117 Test: test_decode_bool ...passed 00:03:32.117 Test: test_decode_uint16 ...passed 00:03:32.117 Test: test_decode_int32 ...passed 00:03:32.118 Test: test_decode_uint32 ...passed 00:03:32.118 Test: test_decode_uint64 ...passed 00:03:32.118 Test: test_decode_string ...passed 00:03:32.118 Test: test_decode_uuid ...passed 00:03:32.118 Test: test_find ...passed 00:03:32.118 Test: test_find_array ...passed 00:03:32.118 Test: test_iterating ...passed 00:03:32.118 Test: test_free_object ...passed 00:03:32.118 00:03:32.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.118 suites 1 1 n/a 0 0 00:03:32.118 tests 17 17 17 0 0 00:03:32.118 asserts 236 236 236 0 n/a 00:03:32.118 00:03:32.118 Elapsed time = 0.000 seconds 00:03:32.118 21:39:47 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:32.118 00:03:32.118 00:03:32.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.118 http://cunit.sourceforge.net/ 00:03:32.118 00:03:32.118 00:03:32.118 Suite: json 00:03:32.118 Test: test_write_literal ...passed 00:03:32.118 Test: test_write_string_simple ...passed 00:03:32.118 Test: test_write_string_escapes ...passed 00:03:32.118 Test: test_write_string_utf16le ...passed 00:03:32.118 Test: test_write_number_int32 ...passed 00:03:32.118 Test: test_write_number_uint32 ...passed 00:03:32.118 Test: test_write_number_uint128 ...passed 00:03:32.118 Test: test_write_string_number_uint128 ...passed 00:03:32.118 Test: test_write_number_int64 ...passed 00:03:32.118 Test: test_write_number_uint64 ...passed 00:03:32.118 Test: test_write_number_double ...passed 00:03:32.118 Test: test_write_uuid ...passed 00:03:32.118 Test: test_write_array ...passed 00:03:32.118 Test: test_write_object ...passed 00:03:32.118 Test: test_write_nesting ...passed 00:03:32.118 Test: test_write_val ...passed 00:03:32.118 00:03:32.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.118 suites 1 1 n/a 0 0 00:03:32.118 tests 16 16 16 0 0 00:03:32.118 asserts 918 918 918 0 n/a 00:03:32.118 00:03:32.118 Elapsed time = 0.000 seconds 00:03:32.118 21:39:47 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:32.118 00:03:32.118 00:03:32.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.118 http://cunit.sourceforge.net/ 00:03:32.118 00:03:32.118 00:03:32.118 Suite: jsonrpc 00:03:32.118 Test: test_parse_request ...passed 00:03:32.118 Test: test_parse_request_streaming ...passed 00:03:32.118 00:03:32.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.118 suites 1 1 n/a 0 0 00:03:32.118 tests 2 2 2 0 0 00:03:32.118 asserts 289 289 289 0 n/a 00:03:32.118 00:03:32.118 Elapsed time = 0.000 seconds 00:03:32.118 00:03:32.118 real 0m0.029s 00:03:32.118 user 0m0.030s 00:03:32.118 sys 0m0.004s 00:03:32.118 21:39:47 unittest.unittest_json -- common/autotest_common.sh@1118 -- # 
xtrace_disable 00:03:32.118 21:39:47 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 ************************************ 00:03:32.118 END TEST unittest_json 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:32.118 21:39:47 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 ************************************ 00:03:32.118 START TEST unittest_rpc 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest.unittest_rpc -- common/autotest_common.sh@1117 -- # unittest_rpc 00:03:32.118 21:39:47 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:32.118 00:03:32.118 00:03:32.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.118 http://cunit.sourceforge.net/ 00:03:32.118 00:03:32.118 00:03:32.118 Suite: rpc 00:03:32.118 Test: test_jsonrpc_handler ...passed 00:03:32.118 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:32.118 Test: test_rpc_get_methods ...[2024-07-15 21:39:47.125165] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:32.118 passed 00:03:32.118 Test: test_rpc_spdk_get_version ...passed 00:03:32.118 Test: test_spdk_rpc_listen_close ...passed 00:03:32.118 Test: test_rpc_run_multiple_servers ...passed 00:03:32.118 00:03:32.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.118 suites 1 1 n/a 0 0 00:03:32.118 tests 6 6 6 0 0 00:03:32.118 asserts 23 23 23 0 n/a 00:03:32.118 00:03:32.118 Elapsed time = 0.000 seconds 00:03:32.118 00:03:32.118 real 0m0.006s 00:03:32.118 user 0m0.000s 00:03:32.118 sys 0m0.011s 00:03:32.118 ************************************ 00:03:32.118 END TEST unittest_rpc 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest.unittest_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:32.118 21:39:47 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:32.118 21:39:47 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 ************************************ 00:03:32.118 START TEST unittest_notify 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest.unittest_notify -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:32.118 00:03:32.118 00:03:32.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.118 http://cunit.sourceforge.net/ 00:03:32.118 00:03:32.118 00:03:32.118 Suite: app_suite 00:03:32.118 Test: notify ...passed 00:03:32.118 00:03:32.118 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.118 suites 1 1 n/a 0 0 00:03:32.118 tests 1 1 1 0 0 00:03:32.118 asserts 13 13 13 0 n/a 00:03:32.118 00:03:32.118 Elapsed time = 
0.000 seconds 00:03:32.118 00:03:32.118 real 0m0.005s 00:03:32.118 user 0m0.000s 00:03:32.118 sys 0m0.007s 00:03:32.118 21:39:47 unittest.unittest_notify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:32.118 ************************************ 00:03:32.118 END TEST unittest_notify 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:32.118 21:39:47 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:32.118 21:39:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:32.118 ************************************ 00:03:32.118 START TEST unittest_nvme 00:03:32.118 ************************************ 00:03:32.118 21:39:47 unittest.unittest_nvme -- common/autotest_common.sh@1117 -- # unittest_nvme 00:03:32.118 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:32.118 00:03:32.118 00:03:32.118 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.118 http://cunit.sourceforge.net/ 00:03:32.118 00:03:32.118 00:03:32.118 Suite: nvme 00:03:32.118 Test: test_opc_data_transfer ...passed 00:03:32.118 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:32.118 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:32.118 Test: test_trid_parse_and_compare ...[2024-07-15 21:39:47.223371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:32.118 [2024-07-15 21:39:47.223639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:32.118 [2024-07-15 21:39:47.223665] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:32.118 [2024-07-15 21:39:47.223681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:32.118 [2024-07-15 21:39:47.223697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:03:32.118 [2024-07-15 21:39:47.223727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:32.118 passed 00:03:32.118 Test: test_trid_trtype_str ...passed 00:03:32.118 Test: test_trid_adrfam_str ...passed 00:03:32.118 Test: test_nvme_ctrlr_probe ...passed 00:03:32.118 Test: test_spdk_nvme_probe ...passed 00:03:32.118 Test: test_spdk_nvme_connect ...passed 00:03:32.118 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:32.118 Test: test_nvme_init_controllers ...passed 00:03:32.118 Test: test_nvme_driver_init ...[2024-07-15 21:39:47.223895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:32.118 [2024-07-15 21:39:47.223936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:32.118 [2024-07-15 21:39:47.223952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:32.118 [2024-07-15 21:39:47.223971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:32.118 [2024-07-15 21:39:47.223985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:32.118 [2024-07-15 21:39:47.224017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:32.118 [2024-07-15 21:39:47.224142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:32.118 [2024-07-15 21:39:47.224179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:32.118 [2024-07-15 21:39:47.224195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:32.118 [2024-07-15 21:39:47.224216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:32.118 [2024-07-15 21:39:47.224247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:32.119 [2024-07-15 21:39:47.224264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:32.378 passed 00:03:32.378 Test: test_spdk_nvme_detach ...passed 00:03:32.378 Test: test_nvme_completion_poll_cb ...passed 00:03:32.378 Test: test_nvme_user_copy_cmd_complete ...[2024-07-15 21:39:47.338135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:32.378 passed 00:03:32.378 Test: test_nvme_allocate_request_null ...passed 00:03:32.378 Test: test_nvme_allocate_request ...passed 00:03:32.378 Test: test_nvme_free_request ...passed 00:03:32.378 Test: test_nvme_allocate_request_user_copy ...passed 00:03:32.378 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:32.378 Test: test_nvme_request_check_timeout ...passed 00:03:32.378 Test: test_nvme_wait_for_completion ...passed 00:03:32.378 Test: test_spdk_nvme_parse_func ...passed 00:03:32.378 Test: test_spdk_nvme_detach_async ...passed 00:03:32.378 Test: test_nvme_parse_addr ...passed 00:03:32.378 00:03:32.378 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.378 suites 1 1 n/a 0 0 00:03:32.378 tests 25 25 25 0 0 00:03:32.378 asserts 326 326 326 0 n/a 00:03:32.378 00:03:32.378 Elapsed time = 0.000 seconds 00:03:32.378 [2024-07-15 21:39:47.338426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:32.378 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:32.378 00:03:32.378 00:03:32.378 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.378 http://cunit.sourceforge.net/ 00:03:32.378 00:03:32.378 00:03:32.378 Suite: nvme_ctrlr 00:03:32.378 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 21:39:47.345000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.378 passed 00:03:32.378 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 21:39:47.346423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.378 passed 00:03:32.379 Test: test_nvme_ctrlr_init_en_0_rdy_0 
...[2024-07-15 21:39:47.347632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 21:39:47.348826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 21:39:47.350067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.351220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:39:47.352420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:39:47.353608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:32.379 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 21:39:47.356010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.358370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:39:47.359566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:32.379 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 21:39:47.361949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.363121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 21:39:47.365510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:32.379 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 21:39:47.367884] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 21:39:47.369162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.369222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:32.379 [2024-07-15 21:39:47.369248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:32.379 [2024-07-15 21:39:47.369264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:32.379 [2024-07-15 21:39:47.369289] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:32.379 passed 00:03:32.379 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:32.379 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:32.379 Test: test_alloc_io_qpair_wrr_1 ...passed 00:03:32.379 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:03:32.379 Test: test_nvme_ctrlr_fail ...[2024-07-15 21:39:47.369394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.369433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.369454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:32.379 [2024-07-15 21:39:47.369522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:32.379 [2024-07-15 21:39:47.369555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:32.379 [2024-07-15 21:39:47.369571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:32.379 [2024-07-15 21:39:47.369586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:32.379 [2024-07-15 21:39:47.369604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 21:39:47.369641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:32.379 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 21:39:47.370899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:32.379 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 21:39:47.406943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 21:39:47.413812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 21:39:47.415031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 [2024-07-15 21:39:47.415076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:32.379 passed 00:03:32.379 Test: test_alloc_io_qpair_fail ...[2024-07-15 21:39:47.416279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_state ...passed 00:03:32.379 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 21:39:47.416334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:32.379 [2024-07-15 21:39:47.416393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:32.379 [2024-07-15 21:39:47.416411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 21:39:47.420246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 21:39:47.427217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_reset ...[2024-07-15 21:39:47.428453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 21:39:47.428541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 21:39:47.429748] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:32.379 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 21:39:47.431093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:32.379 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 21:39:47.432306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:32.379 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 21:39:47.433524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:32.379 passed 00:03:32.379 Test: test_nvme_ctrlr_disable ...[2024-07-15 21:39:47.433558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:03:32.379 [2024-07-15 21:39:47.433574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:32.379 passed 00:03:32.379 00:03:32.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.379 suites 1 1 n/a 0 0 00:03:32.379 tests 44 44 44 0 0 00:03:32.379 asserts 10434 10434 10434 0 n/a 00:03:32.379 00:03:32.379 Elapsed time = 0.047 seconds 00:03:32.379 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:32.379 00:03:32.379 00:03:32.379 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:32.379 http://cunit.sourceforge.net/ 00:03:32.379 00:03:32.379 00:03:32.379 Suite: nvme_ctrlr_cmd 00:03:32.379 Test: test_get_log_pages ...passed 00:03:32.379 Test: test_set_feature_cmd ...passed 00:03:32.379 Test: test_set_feature_ns_cmd ...passed 00:03:32.379 Test: test_get_feature_cmd ...passed 00:03:32.379 Test: test_get_feature_ns_cmd ...passed 00:03:32.379 Test: test_abort_cmd ...passed 00:03:32.379 Test: test_set_host_id_cmds ...passed 00:03:32.379 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:32.379 Test: test_io_raw_cmd ...passed 00:03:32.379 Test: test_io_raw_cmd_with_md ...passed 00:03:32.379 Test: test_namespace_attach ...passed 00:03:32.379 Test: test_namespace_detach ...passed 00:03:32.379 Test: test_namespace_create ...passed 00:03:32.379 Test: test_namespace_delete ...passed 00:03:32.379 Test: test_doorbell_buffer_config ...passed 00:03:32.379 Test: test_format_nvme ...passed 00:03:32.379 Test: test_fw_commit ...passed 00:03:32.379 Test: test_fw_image_download ...passed 00:03:32.379 Test: test_sanitize ...passed 00:03:32.379 Test: test_directive ...passed 00:03:32.379 Test: test_nvme_request_add_abort ...passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:32.379 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:32.379 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:32.379 00:03:32.379 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.379 suites 1 1 n/a 0 0 00:03:32.380 tests 24 24 24 0 0 00:03:32.380 asserts 198 198 198 0 n/a 00:03:32.380 00:03:32.380 Elapsed time = 0.000 seconds 00:03:32.380 [2024-07-15 21:39:47.441966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:32.380 00:03:32.380 00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.380 http://cunit.sourceforge.net/ 00:03:32.380 00:03:32.380 00:03:32.380 Suite: nvme_ctrlr_cmd 00:03:32.380 Test: test_geometry_cmd ...passed 00:03:32.380 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:32.380 00:03:32.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.380 suites 1 1 n/a 0 0 00:03:32.380 tests 2 2 2 0 0 00:03:32.380 asserts 7 7 7 0 n/a 00:03:32.380 00:03:32.380 Elapsed time = 0.000 seconds 00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:32.380 00:03:32.380 00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.380 http://cunit.sourceforge.net/ 00:03:32.380 00:03:32.380 00:03:32.380 Suite: nvme 00:03:32.380 Test: test_nvme_ns_construct ...passed 00:03:32.380 Test: test_nvme_ns_uuid ...passed 00:03:32.380 Test: test_nvme_ns_csi ...passed 00:03:32.380 Test: test_nvme_ns_data ...passed 00:03:32.380 Test: test_nvme_ns_set_identify_data ...passed 00:03:32.380 Test: test_spdk_nvme_ns_get_values ...passed 00:03:32.380 Test: test_spdk_nvme_ns_is_active ...passed 00:03:32.380 Test: spdk_nvme_ns_supports ...passed 00:03:32.380 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:32.380 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:32.380 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:32.380 Test: test_nvme_ns_find_id_desc ...passed 00:03:32.380 00:03:32.380 Run Summary: Type Total Ran 
Passed Failed Inactive 00:03:32.380 suites 1 1 n/a 0 0 00:03:32.380 tests 12 12 12 0 0 00:03:32.380 asserts 95 95 95 0 n/a 00:03:32.380 00:03:32.380 Elapsed time = 0.000 seconds 00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:32.380 00:03:32.380 00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.380 http://cunit.sourceforge.net/ 00:03:32.380 00:03:32.380 00:03:32.380 Suite: nvme_ns_cmd 00:03:32.380 Test: split_test ...passed 00:03:32.380 Test: split_test2 ...passed 00:03:32.380 Test: split_test3 ...passed 00:03:32.380 Test: split_test4 ...passed 00:03:32.380 Test: test_nvme_ns_cmd_flush ...passed 00:03:32.380 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:32.380 Test: test_nvme_ns_cmd_copy ...passed 00:03:32.380 Test: test_io_flags ...passed 00:03:32.380 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:32.380 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:32.380 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:32.380 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:32.380 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:32.380 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:32.380 Test: test_cmd_child_request ...passed 00:03:32.380 Test: test_nvme_ns_cmd_readv ...passed 00:03:32.380 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_writev ...passed 00:03:32.380 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_comparev ...passed 00:03:32.380 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:32.380 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:32.380 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:32.380 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:32.380 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:32.380 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:32.380 Test: test_nvme_ns_cmd_verify ...passed 00:03:32.380 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:32.380 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:32.380 00:03:32.380 Run Summary: Type Total Ran Passed Failed Inactive 00:03:32.380 suites 1 1 n/a 0 0 00:03:32.380 tests 32 32 32 0 0 00:03:32.380 asserts 550 550 550 0 n/a 00:03:32.380 00:03:32.380 Elapsed time = 0.000 seconds 00:03:32.380 [2024-07-15 21:39:47.453091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:32.380 [2024-07-15 21:39:47.453272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:32.380 [2024-07-15 21:39:47.453352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:32.380 [2024-07-15 21:39:47.453364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:32.380 00:03:32.380 00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:32.380 http://cunit.sourceforge.net/ 
00:03:32.380
00:03:32.380
00:03:32.380 Suite: nvme_ns_cmd
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:03:32.380 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:03:32.380
00:03:32.380 Run Summary: Type Total Ran Passed Failed Inactive
00:03:32.380 suites 1 1 n/a 0 0
00:03:32.380 tests 12 12 12 0 0
00:03:32.380 asserts 123 123 123 0 n/a
00:03:32.380
00:03:32.380 Elapsed time = 0.000 seconds
00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:03:32.380
00:03:32.380
00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3
00:03:32.380 http://cunit.sourceforge.net/
00:03:32.380
00:03:32.380
00:03:32.380 Suite: nvme_qpair
00:03:32.380 Test: test3 ...passed
00:03:32.380 Test: test_ctrlr_failed ...passed
00:03:32.380 Test: struct_packing ...passed
00:03:32.380 Test: test_nvme_qpair_process_completions ...passed
00:03:32.380 Test: test_nvme_completion_is_retry ...passed
00:03:32.380 Test: test_get_status_string ...passed
00:03:32.380 Test: test_nvme_qpair_add_cmd_error_injection ...passed
00:03:32.380 Test: test_nvme_qpair_submit_request ...passed
00:03:32.380 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:03:32.380 Test: test_nvme_qpair_manual_complete_request ...passed
00:03:32.380 Test: test_nvme_qpair_init_deinit ...passed
00:03:32.380 Test: test_nvme_get_sgl_print_info ...passed
00:03:32.380
00:03:32.380 Run Summary: Type Total Ran Passed Failed Inactive
00:03:32.380 suites 1 1 n/a 0 0
00:03:32.380 tests 12 12 12 0 0
00:03:32.380 asserts 154 154 154 0 n/a
00:03:32.380
00:03:32.380 Elapsed time = 0.000 seconds
00:03:32.380 [2024-07-15 21:39:47.464089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:32.380 [2024-07-15 21:39:47.464283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:32.380 [2024-07-15 21:39:47.464344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0
00:03:32.380 [2024-07-15 21:39:47.464358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1
00:03:32.380 [2024-07-15 21:39:47.464408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:32.380 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:03:32.380
00:03:32.380
00:03:32.380 CUnit - A unit testing framework for C - Version 2.1-3
00:03:32.380 http://cunit.sourceforge.net/
00:03:32.380
00:03:32.380
00:03:32.380 Suite: nvme_pcie
00:03:32.380 Test: test_prp_list_append ...passed
00:03:32.380 Test: test_nvme_pcie_hotplug_monitor ...passed
00:03:32.380 Test: test_shadow_doorbell_update ...passed
00:03:32.380 Test: test_build_contig_hw_sgl_request ...passed
00:03:32.380 Test: test_nvme_pcie_qpair_build_metadata ...passed
00:03:32.380 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:03:32.380 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:03:32.380 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-15 21:39:47.469513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:03:32.380 [2024-07-15 21:39:47.469728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:03:32.380 [2024-07-15 21:39:47.469750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:03:32.380 [2024-07-15 21:39:47.469802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:03:32.380 [2024-07-15 21:39:47.469826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:03:32.380 passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-07-15 21:39:47.469936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:03:32.380 passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_config_pmr ...passed
00:03:32.380 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed
00:03:32.380
00:03:32.381 Run Summary: Type Total Ran Passed Failed Inactive
00:03:32.381 suites 1 1 n/a 0 0
00:03:32.381 tests 14 14 14 0 0
00:03:32.381 asserts 235 235 235 0 n/a
00:03:32.381
00:03:32.381 Elapsed time = 0.000 seconds
00:03:32.381 [2024-07-15 21:39:47.469976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:03:32.381 [2024-07-15 21:39:47.469995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:03:32.381 [2024-07-15 21:39:47.470011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:03:32.381 [2024-07-15 21:39:47.470025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:03:32.381 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:03:32.381
00:03:32.381
00:03:32.381 CUnit - A unit testing framework for C - Version 2.1-3
00:03:32.381 http://cunit.sourceforge.net/
00:03:32.381
00:03:32.381
00:03:32.381 Suite: nvme_ns_cmd
00:03:32.381 Test: nvme_poll_group_create_test ...passed
00:03:32.381 Test: nvme_poll_group_add_remove_test ...passed
00:03:32.381 Test: nvme_poll_group_process_completions ...passed
00:03:32.381 Test: nvme_poll_group_destroy_test ...passed
00:03:32.381 Test: nvme_poll_group_get_free_stats ...passed
00:03:32.381
00:03:32.381 Run Summary: Type Total Ran Passed Failed Inactive
00:03:32.381 suites 1 1 n/a 0 0
00:03:32.381 tests 5 5 5 0 0
00:03:32.381 asserts 75 75 75 0 n/a
00:03:32.381
00:03:32.381 Elapsed time = 0.000 seconds
00:03:32.381 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:03:32.381
00:03:32.381
00:03:32.381 CUnit - A unit testing framework for C - Version 2.1-3
00:03:32.381 http://cunit.sourceforge.net/
00:03:32.381
00:03:32.381
00:03:32.381 Suite: nvme_quirks
00:03:32.381 Test: test_nvme_quirks_striping ...passed
00:03:32.381
00:03:32.381 Run Summary: Type Total Ran Passed Failed Inactive
00:03:32.381 suites 1 1 n/a 0 0
00:03:32.381 tests 1 1 1 0 0
00:03:32.381 asserts 5 5 5 0 n/a
00:03:32.381
00:03:32.381 Elapsed time = 0.000 seconds
00:03:32.381 21:39:47 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:03:32.381
00:03:32.381
00:03:32.381 CUnit - A unit testing framework for C - Version 2.1-3
00:03:32.381 http://cunit.sourceforge.net/
00:03:32.381
00:03:32.381
00:03:32.381 Suite: nvme_tcp
00:03:32.381 Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:03:32.381 Test: test_nvme_tcp_build_iovs ...passed
00:03:32.381 Test: test_nvme_tcp_build_sgl_request ...passed
00:03:32.381 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:03:32.381 Test: test_nvme_tcp_build_iovs_with_md ...passed
00:03:32.381 Test: test_nvme_tcp_req_complete_safe ...passed
00:03:32.381 Test: test_nvme_tcp_req_get ...passed
00:03:32.381 Test: test_nvme_tcp_req_init ...passed
00:03:32.381 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:03:32.381 Test: test_nvme_tcp_qpair_write_pdu ...passed
00:03:32.381 Test: test_nvme_tcp_qpair_set_recv_state ...passed
00:03:32.381 Test: test_nvme_tcp_alloc_reqs ...passed
00:03:32.381 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed
00:03:32.381 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 21:39:47.487873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820d52e08, and the iovcnt=16, remaining_size=28672
00:03:32.381 [2024-07-15 21:39:47.488153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(6) to be set
00:03:32.381 [2024-07-15 21:39:47.488198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820d54148
00:03:32.381 [2024-07-15 21:39:47.488224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:03:32.381 [2024-07-15 21:39:47.488234] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488244] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:03:32.381 [2024-07-15 21:39:47.488254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:03:32.381 [2024-07-15 21:39:47.488273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 passed
00:03:32.381 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 21:39:47.488309] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:32.381 [2024-07-15 21:39:47.488381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:03:32.381 [2024-07-15 21:39:47.488392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:03:50.492 [2024-07-15 21:40:02.666365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:03:50.492 passed
00:03:50.492 Test: test_nvme_tcp_qpair_icreq_send ...passed
00:03:50.492 Test: test_nvme_tcp_c2h_payload_handle ...passed
00:03:50.492 Test: test_nvme_tcp_icresp_handle ...[2024-07-15 21:40:02.666505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820d54580): PDU Sequence Error
00:03:50.492 [2024-07-15 21:40:02.666532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:03:50.492 [2024-07-15 21:40:02.666549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:03:50.492 passed
00:03:50.492 Test: test_nvme_tcp_pdu_payload_handle ...passed
00:03:50.492 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed
00:03:50.492 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed
00:03:50.492 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed
00:03:50.492 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-15 21:40:02.666563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:50.492 [2024-07-15 21:40:02.666576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:03:50.492 [2024-07-15 21:40:02.666592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(5) to be set
00:03:50.492 [2024-07-15 21:40:02.666607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d549b8 is same with the state(0) to be set
00:03:50.492 [2024-07-15 21:40:02.666628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820d54580): PDU Sequence Error
00:03:50.492 [2024-07-15 21:40:02.666668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820d549b8
00:03:50.492 [2024-07-15 21:40:02.666724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820d52718, errno=0, rc=0
00:03:50.492 [2024-07-15 21:40:02.666741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d52718 is same with the state(5) to be set
00:03:50.492 [2024-07-15 21:40:02.666771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820d52718 is same with the state(5) to be set
00:03:50.492 [2024-07-15 21:40:02.666878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820d52718 (0): No error: 0
00:03:50.492 [2024-07-15 21:40:02.666900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820d52718 (0): No error: 0
00:03:50.492 [2024-07-15 21:40:02.768365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:03:50.492 [2024-07-15 21:40:02.768523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:50.492 passed
00:03:50.492 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed
00:03:50.492 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-15 21:40:02.768602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.492 passed
00:03:50.492 Test: test_nvme_tcp_ctrlr_construct ...passed
00:03:50.492 Test: test_nvme_tcp_qpair_submit_request ...passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 27 27 27 0 0
00:03:50.492 asserts 624 624 624 0 n/a
00:03:50.492
00:03:50.492 Elapsed time = 0.102 seconds
00:03:50.492 [2024-07-15 21:40:02.768622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.492 [2024-07-15 21:40:02.768691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:50.492 [2024-07-15 21:40:02.768738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.492 [2024-07-15 21:40:02.768761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:03:50.492 [2024-07-15 21:40:02.768808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.492 [2024-07-15 21:40:02.768834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2049e826b000 with addr=192.168.1.78, port=23
00:03:50.492 [2024-07-15 21:40:02.768850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.492 [2024-07-15 21:40:02.768890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x2049e8239180, and the iovcnt=1, remaining_size=1024
00:03:50.492 [2024-07-15 21:40:02.768908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:03:50.492 21:40:02 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:03:50.492
00:03:50.492
00:03:50.492 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.492 http://cunit.sourceforge.net/
00:03:50.492
00:03:50.492
00:03:50.492 Suite: nvme_transport
00:03:50.492 Test: test_nvme_get_transport ...passed
00:03:50.492 Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:03:50.492 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:03:50.492 Test: test_nvme_transport_poll_group_add_remove ...passed
00:03:50.492 Test: test_ctrlr_get_memory_domains ...passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 5 5 5 0 0
00:03:50.492 asserts 28 28 28 0 n/a
00:03:50.492
00:03:50.492 Elapsed time = 0.000 seconds
00:03:50.492 21:40:02 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:03:50.492
00:03:50.492
00:03:50.492 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.492 http://cunit.sourceforge.net/
00:03:50.492
00:03:50.492
00:03:50.492 Suite: nvme_io_msg
00:03:50.492 Test: test_nvme_io_msg_send ...passed
00:03:50.492 Test: test_nvme_io_msg_process ...passed
00:03:50.492 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 3 3 3 0 0
00:03:50.492 asserts 56 56 56 0 n/a
00:03:50.492
00:03:50.492 Elapsed time = 0.000 seconds
00:03:50.492 21:40:02 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:03:50.492
00:03:50.492
00:03:50.492 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.492 http://cunit.sourceforge.net/
00:03:50.492
00:03:50.492
00:03:50.492 Suite: nvme_pcie_common
00:03:50.492 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-15 21:40:02.793658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:03:50.492 passed
00:03:50.492 Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:03:50.492 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:03:50.492 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed
00:03:50.492 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-15 21:40:02.793903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:03:50.492 [2024-07-15 21:40:02.793920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:03:50.492 [2024-07-15 21:40:02.793930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:03:50.492 passed
00:03:50.492 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-15 21:40:02.794037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.492 passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 6 6 6 0 0
00:03:50.492 asserts 148 148 148 0 n/a
00:03:50.492
00:03:50.492 Elapsed time = 0.000 seconds
00:03:50.492 [2024-07-15 21:40:02.794055] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.492 21:40:02 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:03:50.492
00:03:50.492
00:03:50.492 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.492 http://cunit.sourceforge.net/
00:03:50.492
00:03:50.492
00:03:50.492 Suite: nvme_fabric
00:03:50.492 Test: test_nvme_fabric_prop_set_cmd ...passed
00:03:50.492 Test: test_nvme_fabric_prop_get_cmd ...passed
00:03:50.492 Test: test_nvme_fabric_get_discovery_log_page ...passed
00:03:50.492 Test: test_nvme_fabric_discover_probe ...passed
00:03:50.492 Test: test_nvme_fabric_qpair_connect ...passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 5 5 5 0 0
00:03:50.492 asserts 60 60 60 0 n/a
00:03:50.492
00:03:50.492 Elapsed time = 0.000 seconds
00:03:50.492 [2024-07-15 21:40:02.800180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:03:50.492 21:40:02 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:03:50.492
00:03:50.492
00:03:50.492 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.492 http://cunit.sourceforge.net/
00:03:50.492
00:03:50.492
00:03:50.492 Suite: nvme_opal
00:03:50.492 Test: test_opal_nvme_security_recv_send_done ...passed
00:03:50.492 Test: test_opal_add_short_atom_header ...passed
00:03:50.492
00:03:50.492 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.492 suites 1 1 n/a 0 0
00:03:50.492 tests 2 2 2 0 0
00:03:50.493 asserts 22 22 22 0 n/a
00:03:50.493
00:03:50.493 Elapsed time = 0.000 seconds
00:03:50.493 [2024-07-15 21:40:02.804922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:03:50.493
00:03:50.493 real 0m15.588s
00:03:50.493 user 0m0.085s
00:03:50.493 sys 0m0.166s
00:03:50.493 21:40:02 unittest.unittest_nvme -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:50.493 21:40:02 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x
00:03:50.493 ************************************
00:03:50.493 END TEST unittest_nvme
00:03:50.493 ************************************
00:03:50.493 21:40:02 unittest -- common/autotest_common.sh@1136 -- # return 0
00:03:50.493 21:40:02 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:03:50.493 21:40:02 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:03:50.493 21:40:02 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable
00:03:50.493 21:40:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.493 ************************************
00:03:50.493 START TEST unittest_log
00:03:50.493 ************************************
00:03:50.493 21:40:02 unittest.unittest_log -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:03:50.493
00:03:50.493
00:03:50.493 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.493 http://cunit.sourceforge.net/
00:03:50.493
00:03:50.493
00:03:50.493 Suite: log
00:03:50.493 Test: log_test ...[2024-07-15 21:40:02.860140] log_ut.c: 56:log_test: *WARNING*: log warning unit test
00:03:50.493 passed
00:03:50.493 Test: deprecation ...[2024-07-15 21:40:02.860394] log_ut.c: 57:log_test: *DEBUG*: log test
00:03:50.493 log dump test:
00:03:50.493 00000000 6c 6f 67 20 64 75 6d 70 log dump
00:03:50.493 spdk dump test:
00:03:50.493 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump
00:03:50.493 spdk dump test:
00:03:50.493 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor
00:03:50.493 00000010 65 20 63 68 61 72 73 e chars
00:03:50.493 passed
00:03:50.493
00:03:50.493 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.493 suites 1 1 n/a 0 0
00:03:50.493 tests 2 2 2 0 0
00:03:50.493 asserts 73 73 73 0 n/a
00:03:50.493
00:03:50.493 Elapsed time = 0.000 seconds
00:03:50.493
00:03:50.493 real 0m1.073s
00:03:50.493 user 0m0.007s
00:03:50.493 sys 0m0.006s
00:03:50.493 21:40:03 unittest.unittest_log -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:50.493 21:40:03 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x
00:03:50.493 ************************************
00:03:50.493 END TEST unittest_log
00:03:50.493 ************************************
00:03:50.493 21:40:03 unittest -- common/autotest_common.sh@1136 -- # return 0
00:03:50.493 21:40:03 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:03:50.493 21:40:03 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:03:50.493 21:40:03 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable
00:03:50.493 21:40:03 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.493 ************************************
00:03:50.493 START TEST unittest_lvol
00:03:50.493 ************************************
00:03:50.493 21:40:03 unittest.unittest_lvol -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:03:50.493
00:03:50.493
00:03:50.493 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.493 http://cunit.sourceforge.net/
00:03:50.493
00:03:50.493
00:03:50.493 Suite: lvol
00:03:50.493 Test: lvs_init_unload_success ...[2024-07-15 21:40:03.979620] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:03:50.493 passed
00:03:50.493 Test: lvs_init_destroy_success ...passed
00:03:50.493 Test: lvs_init_opts_success ...passed
00:03:50.493 Test: lvs_unload_lvs_is_null_fail ...passed
00:03:50.493 Test: lvs_names ...[2024-07-15 21:40:03.979843] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:03:50.493 [2024-07-15 21:40:03.979877] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:03:50.493 [2024-07-15 21:40:03.979892] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified.
00:03:50.493 [2024-07-15 21:40:03.979903] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator.
00:03:50.493 [2024-07-15 21:40:03.979930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:03:50.493 passed
00:03:50.493 Test: lvol_create_destroy_success ...passed
00:03:50.493 Test: lvol_create_fail ...passed
00:03:50.493 Test: lvol_destroy_fail ...passed
00:03:50.493 Test: lvol_close ...passed
00:03:50.493 Test: lvol_resize ...passed
00:03:50.493 Test: lvol_set_read_only ...passed
00:03:50.493 Test: test_lvs_load ...passed
00:03:50.493 Test: lvols_load ...[2024-07-15 21:40:03.979976] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:03:50.493 [2024-07-15 21:40:03.979995] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist
00:03:50.493 [2024-07-15 21:40:03.980021] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:03:50.493 [2024-07-15 21:40:03.980040] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist
00:03:50.493 [2024-07-15 21:40:03.980050] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:03:50.493 [2024-07-15 21:40:03.980105] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:03:50.493 [2024-07-15 21:40:03.980116] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:03:50.493 [2024-07-15 21:40:03.980136] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:03:50.493 passed
00:03:50.493 Test: lvol_open ...passed
00:03:50.493 Test: lvol_snapshot ...passed
00:03:50.493 Test: lvol_snapshot_fail ...passed
00:03:50.493 Test: lvol_clone ...passed
00:03:50.493 Test: lvol_clone_fail ...passed
00:03:50.493 Test: lvol_iter_clones ...passed
00:03:50.493 Test: lvol_refcnt ...[2024-07-15 21:40:03.980161] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:03:50.493 [2024-07-15 21:40:03.980232] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:03:50.493 [2024-07-15 21:40:03.980277] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:03:50.493 [2024-07-15 21:40:03.980319] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol cbc25117-42f2-11ef-9f7f-e9a656123a8b because it is still open
00:03:50.493 passed
00:03:50.493 Test: lvol_names ...passed
00:03:50.493 Test: lvol_create_thin_provisioned ...passed
00:03:50.493 Test: lvol_rename ...passed
00:03:50.493 Test: lvs_rename ...passed
00:03:50.493 Test: lvol_inflate ...passed
00:03:50.493 Test: lvol_decouple_parent ...[2024-07-15 21:40:03.980337] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:03:50.493 [2024-07-15 21:40:03.980350] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:50.493 [2024-07-15 21:40:03.980368] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:03:50.493 [2024-07-15 21:40:03.980406] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:50.493 [2024-07-15 21:40:03.980421] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:03:50.493 [2024-07-15 21:40:03.980447] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:03:50.493 [2024-07-15 21:40:03.980470] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:03:50.493 [2024-07-15 21:40:03.980489] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:03:50.493 passed
00:03:50.493 Test: lvol_get_xattr ...passed
00:03:50.493 Test: lvol_esnap_reload ...passed
00:03:50.493 Test: lvol_esnap_create_bad_args ...passed
00:03:50.493 Test: lvol_esnap_create_delete ...passed
00:03:50.493 Test: lvol_esnap_load_esnaps ...passed
00:03:50.493 Test: lvol_esnap_missing ...passed
00:03:50.493 Test: lvol_esnap_hotplug ...
00:03:50.493 [2024-07-15 21:40:03.980529] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:03:50.493 [2024-07-15 21:40:03.980540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:03:50.493 [2024-07-15 21:40:03.980551] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:03:50.493 [2024-07-15 21:40:03.980564] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:50.493 [2024-07-15 21:40:03.980585] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:03:50.493 [2024-07-15 21:40:03.980619] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:03:50.493 [2024-07-15 21:40:03.980644] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:03:50.493 [2024-07-15 21:40:03.980655] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:03:50.493 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:03:50.493 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:03:50.493 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM
00:03:50.493 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:03:50.493 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:03:50.493 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:03:50.493 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:03:50.493 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:03:50.493 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:03:50.493 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:03:50.493 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:03:50.493 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:03:50.493 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:03:50.493 passed
00:03:50.493 Test: lvol_get_by ...[2024-07-15 21:40:03.980716] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol cbc26096-42f2-11ef-9f7f-e9a656123a8b: failed to create esnap bs_dev: error -12
00:03:50.493 [2024-07-15 21:40:03.980760] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol cbc2622b-42f2-11ef-9f7f-e9a656123a8b: failed to create esnap bs_dev: error -12
00:03:50.493 [2024-07-15 21:40:03.980784] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol cbc2633a-42f2-11ef-9f7f-e9a656123a8b: failed to create esnap bs_dev: error -12
00:03:50.494 passed
00:03:50.494 Test: lvol_shallow_copy ...passed
00:03:50.494 Test: lvol_set_parent ...passed
00:03:50.494 Test: lvol_set_external_parent ...passed
00:03:50.494
00:03:50.494 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.494 suites 1 1 n/a 0 0
00:03:50.494 tests 37 37 37 0 0
00:03:50.494 asserts 1505 1505 1505 0 n/a
00:03:50.494
00:03:50.494 Elapsed time = 0.000 seconds[2024-07-15 21:40:03.980965] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:03:50.494 [2024-07-15 21:40:03.980979] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol cbc26a4a-42f2-11ef-9f7f-e9a656123a8b shallow copy, ext_dev must not be NULL
00:03:50.494 [2024-07-15 21:40:03.981005] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL
00:03:50.494 [2024-07-15 21:40:03.981015] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL
00:03:50.494 [2024-07-15 21:40:03.981034] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL
00:03:50.494 [2024-07-15 21:40:03.981044] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL
00:03:50.494 [2024-07-15 21:40:03.981054] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID
00:03:50.494
00:03:50.494
00:03:50.494 real 0m0.008s
00:03:50.494 user 0m0.005s
00:03:50.494 sys 0m0.000s
00:03:50.494 21:40:03 unittest.unittest_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:50.494 21:40:03 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x
00:03:50.494 ************************************
00:03:50.494 END TEST unittest_lvol
00:03:50.494 ************************************
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0
00:03:50.494 21:40:04 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:03:50.494 21:40:04 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.494 ************************************
00:03:50.494 START TEST unittest_nvme_rdma
00:03:50.494 ************************************
00:03:50.494 21:40:04 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:03:50.494
00:03:50.494
00:03:50.494 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.494 http://cunit.sourceforge.net/
00:03:50.494
00:03:50.494
00:03:50.494 Suite: nvme_rdma
00:03:50.494 Test: test_nvme_rdma_build_sgl_request ...[2024-07-15 21:40:04.032891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:03:50.494 passed
00:03:50.494 Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:03:50.494 Test: test_nvme_rdma_build_contig_request ...[2024-07-15 21:40:04.033447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:03:50.494 [2024-07-15 21:40:04.033505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:03:50.494 passed
00:03:50.494 Test: test_nvme_rdma_build_contig_inline_request ...passed
00:03:50.494 Test: test_nvme_rdma_create_reqs ...[2024-07-15 21:40:04.033542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:03:50.494 [2024-07-15 21:40:04.033571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:03:50.494 passed
00:03:50.494 Test: test_nvme_rdma_create_rsps ...passed
00:03:50.494 Test: test_nvme_rdma_ctrlr_create_qpair ...passed
00:03:50.494 Test: test_nvme_rdma_poller_create ...passed
00:03:50.494 Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:03:50.494 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-15 21:40:04.033629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:03:50.494 [2024-07-15 21:40:04.033670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:03:50.494 [2024-07-15 21:40:04.033688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:50.494 [2024-07-15 21:40:04.033751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:03:50.494 passed
00:03:50.494 Test: test_nvme_rdma_req_put_and_get ...passed
00:03:50.494 Test: test_nvme_rdma_req_init ...passed
00:03:50.494 Test: test_nvme_rdma_validate_cm_event ...passed
00:03:50.494 Test: test_nvme_rdma_qpair_init ...passed
00:03:50.494 Test: test_nvme_rdma_qpair_submit_request ...passed
00:03:50.494 Test: test_rdma_ctrlr_get_memory_domains ...[2024-07-15 21:40:04.033835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:03:50.494 [2024-07-15 21:40:04.033855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:03:50.494 passed
00:03:50.494 Test: test_rdma_get_memory_translation ...passed
00:03:50.494 Test: test_get_rdma_qpair_from_wc ...passed
00:03:50.494 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:03:50.494 Test: test_nvme_rdma_poll_group_get_stats ...passed
00:03:50.494 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-15 21:40:04.033889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:03:50.494 [2024-07-15 21:40:04.033906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:03:50.494 [2024-07-15 21:40:04.033934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.494 [2024-07-15 21:40:04.033950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.494 [2024-07-15 21:40:04.033984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0.
00:03:50.494 [2024-07-15 21:40:04.034001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:03:50.494 [2024-07-15 21:40:04.034017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820de2be8 on poll group 0x29f729a72000
00:03:50.494 [2024-07-15 21:40:04.034033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0.
00:03:50.494 [2024-07-15 21:40:04.034048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0
00:03:50.494 passed
00:03:50.494
00:03:50.494 Run Summary: Type Total Ran Passed Failed Inactive
00:03:50.494 suites 1 1 n/a 0 0
00:03:50.494 tests 21 21 21 0 0
00:03:50.494 asserts 397 397 397 0 n/a
00:03:50.494
00:03:50.494 Elapsed time = 0.000 seconds
00:03:50.494 [2024-07-15 21:40:04.034063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820de2be8 on poll group 0x29f729a72000
00:03:50.494 [2024-07-15 21:40:04.034154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0
00:03:50.494
00:03:50.494 real 0m0.008s
00:03:50.494 user 0m0.000s
00:03:50.494 sys 0m0.008s
00:03:50.494 21:40:04 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:50.494 ************************************
00:03:50.494 END TEST unittest_nvme_rdma
00:03:50.494 ************************************
00:03:50.494 21:40:04 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0
00:03:50.494 21:40:04 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable
00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.494 ************************************
00:03:50.494 START TEST unittest_nvmf_transport
00:03:50.494 ************************************
00:03:50.494 21:40:04 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:03:50.494
00:03:50.494
00:03:50.494 CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.494 http://cunit.sourceforge.net/
00:03:50.494
00:03:50.494
00:03:50.494 Suite: nvmf
00:03:50.494 Test: test_spdk_nvmf_transport_create ...[2024-07-15 21:40:04.082897] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:03:50.494 passed
00:03:50.494 Test: test_nvmf_transport_poll_group_create ...passed
00:03:50.494 Test: test_spdk_nvmf_transport_opts_init ...passed
00:03:50.494 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-15 21:40:04.083135] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:03:50.494 [2024-07-15 21:40:04.083154] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:03:50.494 [2024-07-15 21:40:04.083188] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:03:50.494 [2024-07-15 21:40:04.083221] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:03:50.494 [2024-07-15 21:40:04.083234] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:50.494 [2024-07-15 21:40:04.083246] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:50.494 passed 00:03:50.494 00:03:50.494 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.494 suites 1 1 n/a 0 0 00:03:50.494 tests 4 4 4 0 0 00:03:50.494 asserts 49 49 49 0 n/a 00:03:50.494 00:03:50.494 Elapsed time = 0.000 seconds 00:03:50.494 00:03:50.494 real 0m0.005s 00:03:50.494 user 0m0.000s 00:03:50.494 sys 0m0.008s 00:03:50.494 21:40:04 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.494 21:40:04 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:03:50.494 ************************************ 00:03:50.494 END TEST unittest_nvmf_transport 00:03:50.494 ************************************ 00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.494 21:40:04 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:50.494 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.495 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.495 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.495 ************************************ 00:03:50.495 START TEST unittest_rdma 00:03:50.495 ************************************ 00:03:50.495 21:40:04 unittest.unittest_rdma -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:50.495 00:03:50.495 00:03:50.495 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.495 http://cunit.sourceforge.net/ 00:03:50.495 00:03:50.495 00:03:50.495 Suite: rdma_common 00:03:50.495 Test: test_spdk_rdma_pd ...[2024-07-15 21:40:04.129141] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:50.495 passed 00:03:50.495 00:03:50.495 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.495 suites 1 1 n/a 0 0 00:03:50.495 tests 1 1 1 0 0 00:03:50.495 asserts 31 31 31 0 n/a 00:03:50.495 00:03:50.495 Elapsed time = 0.000 seconds 00:03:50.495 [2024-07-15 21:40:04.129441] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:50.495 00:03:50.495 real 0m0.006s 00:03:50.495 user 0m0.000s 00:03:50.495 sys 0m0.008s 00:03:50.495 21:40:04 unittest.unittest_rdma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.495 ************************************ 00:03:50.495 END TEST unittest_rdma 00:03:50.495 ************************************ 00:03:50.495 21:40:04 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:50.495 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.495 21:40:04 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:50.495 21:40:04 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:03:50.495 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.495 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.495 21:40:04 unittest -- 
common/autotest_common.sh@10 -- # set +x 00:03:50.495 ************************************ 00:03:50.495 START TEST unittest_nvmf 00:03:50.495 ************************************ 00:03:50.495 21:40:04 unittest.unittest_nvmf -- common/autotest_common.sh@1117 -- # unittest_nvmf 00:03:50.495 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:50.495 00:03:50.495 00:03:50.495 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.495 http://cunit.sourceforge.net/ 00:03:50.495 00:03:50.495 00:03:50.495 Suite: nvmf 00:03:50.495 Test: test_get_log_page ...[2024-07-15 21:40:04.181266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:50.495 passed 00:03:50.495 Test: test_process_fabrics_cmd ...[2024-07-15 21:40:04.181797] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:03:50.495 passed 00:03:50.495 Test: test_connect ...[2024-07-15 21:40:04.182309] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:50.495 [2024-07-15 21:40:04.182350] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:50.495 [2024-07-15 21:40:04.182370] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:50.495 [2024-07-15 21:40:04.182387] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:50.495 [2024-07-15 21:40:04.182402] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:50.495 [2024-07-15 21:40:04.182418] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:50.495 [2024-07-15 21:40:04.182432] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:50.495 [2024-07-15 21:40:04.182447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:03:50.495 [2024-07-15 21:40:04.182468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:50.495 [2024-07-15 21:40:04.182486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:50.495 [2024-07-15 21:40:04.182514] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:50.495 [2024-07-15 21:40:04.182531] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:50.495 [2024-07-15 21:40:04.182547] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:50.495 [2024-07-15 21:40:04.182564] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:50.495 [2024-07-15 21:40:04.182585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:03:50.495 passed 00:03:50.495 Test: test_get_ns_id_desc_list ...passed 00:03:50.495 Test: test_identify_ns ...[2024-07-15 21:40:04.182607] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:03:50.495 [2024-07-15 21:40:04.182623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:50.495 [2024-07-15 21:40:04.182691] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:50.495 [2024-07-15 21:40:04.182766] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:50.495 passed 00:03:50.495 Test: test_identify_ns_iocs_specific ...passed 00:03:50.495 Test: test_reservation_write_exclusive ...[2024-07-15 21:40:04.182801] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:50.495 [2024-07-15 21:40:04.182839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:50.495 [2024-07-15 21:40:04.182912] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:50.495 passed 00:03:50.495 Test: test_reservation_exclusive_access ...passed 00:03:50.495 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:50.495 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:50.495 Test: test_reservation_notification_log_page ...passed 00:03:50.495 Test: test_get_dif_ctx ...passed 00:03:50.495 Test: test_set_get_features ...passed 00:03:50.495 Test: test_identify_ctrlr ...passed 00:03:50.495 Test: test_identify_ctrlr_iocs_specific ...[2024-07-15 21:40:04.183075] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:50.495 [2024-07-15 21:40:04.183095] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:50.495 [2024-07-15 21:40:04.183109] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:50.495 [2024-07-15 21:40:04.183122] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:50.495 passed 00:03:50.495 Test: test_custom_admin_cmd ...passed 00:03:50.495 Test: test_fused_compare_and_write ...passed 00:03:50.495 Test: test_multi_async_event_reqs ...passed 00:03:50.495 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:50.495 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:50.495 Test: test_multi_async_events ...passed 00:03:50.495 Test: test_rae ...passed 00:03:50.495 Test: test_nvmf_ctrlr_create_destruct ...passed 00:03:50.495 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:50.495 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-15 21:40:04.183241] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:50.495 [2024-07-15 21:40:04.183257] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:50.495 [2024-07-15 21:40:04.183271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:50.495 [2024-07-15 21:40:04.183362] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:03:50.495 passed 00:03:50.495 Test: test_zcopy_read ...passed 00:03:50.495 Test: test_zcopy_write ...passed 00:03:50.495 Test: test_nvmf_property_set ...passed 00:03:50.495 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:03:50.495 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-15 21:40:04.183381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:03:50.495 [2024-07-15 21:40:04.183434] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:50.495 [2024-07-15 21:40:04.183449] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:50.495 [2024-07-15 21:40:04.183467] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:50.495 [2024-07-15 21:40:04.183486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:50.495 passed 00:03:50.495 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:03:50.495 Test: test_nvmf_check_qpair_active ...passed 00:03:50.495 00:03:50.495 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.495 suites 1 1 n/a 0 0 00:03:50.495 tests 32 32 32 0 0 00:03:50.495 asserts 977 977 977 0 n/a 00:03:50.495 00:03:50.495 Elapsed time = 0.000 seconds 00:03:50.495 [2024-07-15 21:40:04.183501] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:50.495 [2024-07-15 21:40:04.183531] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:03:50.495 [2024-07-15 21:40:04.183546] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:03:50.495 
[2024-07-15 21:40:04.183560] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:03:50.495 [2024-07-15 21:40:04.183574] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:03:50.495 [2024-07-15 21:40:04.183590] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:03:50.495 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:50.495 00:03:50.495 00:03:50.495 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.495 http://cunit.sourceforge.net/ 00:03:50.495 00:03:50.495 00:03:50.495 Suite: nvmf 00:03:50.495 Test: test_get_rw_params ...passed 00:03:50.495 Test: test_get_rw_ext_params ...passed 00:03:50.496 Test: test_lba_in_range ...passed 00:03:50.496 Test: test_get_dif_ctx ...passed 00:03:50.496 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:50.496 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-15 21:40:04.191888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:50.496 [2024-07-15 21:40:04.192105] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:50.496 passed 00:03:50.496 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:50.496 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:03:50.496 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:50.496 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:50.496 00:03:50.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.496 suites 1 1 n/a 0 0 00:03:50.496 tests 10 10 10 0 0 00:03:50.496 asserts 159 159 159 0 n/a 00:03:50.496 00:03:50.496 Elapsed time = 0.000 seconds[2024-07-15 21:40:04.192124] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:50.496 [2024-07-15 21:40:04.192142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:50.496 [2024-07-15 21:40:04.192158] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:50.496 [2024-07-15 21:40:04.192173] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:50.496 [2024-07-15 21:40:04.192185] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:50.496 [2024-07-15 21:40:04.192206] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:50.496 [2024-07-15 21:40:04.192218] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:50.496 00:03:50.496 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:50.496 00:03:50.496 00:03:50.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.496 http://cunit.sourceforge.net/ 00:03:50.496 00:03:50.496 00:03:50.496 Suite: nvmf 00:03:50.496 
Test: test_discovery_log ...passed 00:03:50.496 Test: test_discovery_log_with_filters ...passed 00:03:50.496 00:03:50.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.496 suites 1 1 n/a 0 0 00:03:50.496 tests 2 2 2 0 0 00:03:50.496 asserts 238 238 238 0 n/a 00:03:50.496 00:03:50.496 Elapsed time = 0.000 seconds 00:03:50.496 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:50.496 00:03:50.496 00:03:50.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.496 http://cunit.sourceforge.net/ 00:03:50.496 00:03:50.496 00:03:50.496 Suite: nvmf 00:03:50.496 Test: nvmf_test_create_subsystem ...[2024-07-15 21:40:04.204089] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:50.496 [2024-07-15 21:40:04.204244] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:03:50.496 [2024-07-15 21:40:04.204263] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:50.496 [2024-07-15 21:40:04.204273] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:03:50.496 [2024-07-15 21:40:04.204283] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:50.496 [2024-07-15 21:40:04.204291] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:03:50.496 [2024-07-15 21:40:04.204300] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:50.496 [2024-07-15 21:40:04.204308] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:03:50.496 [2024-07-15 21:40:04.204316] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:50.496 [2024-07-15 21:40:04.204324] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:03:50.496 [2024-07-15 21:40:04.204332] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
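The nvmf_test_create_subsystem errors above enumerate SPDK's NQN validity rules one rejection at a time: an "nqn.yyyy-mm." prefix, a non-empty user string after ':', reverse-domain labels that start with a letter and end alphanumeric, a 223-character cap, valid UTF-8, and strict formatting for uuid-form NQNs. A minimal standalone sketch of the non-UUID checks (illustrative only; not SPDK's actual nvmf_nqn_is_valid() in lib/nvmf/subsystem.c, which also validates the date field and UTF-8):

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define NQN_MIN_LEN 11   /* shortest legal form, "nqn.yyyy-mm" */
    #define NQN_MAX_LEN 223  /* matches the "length 224 > max 223" case */

    static bool nqn_is_valid_sketch(const char *nqn)
    {
        size_t len = strlen(nqn);
        const char *colon, *p;

        if (len < NQN_MIN_LEN || len > NQN_MAX_LEN)
            return false;
        if (strncmp(nqn, "nqn.", 4) != 0 || nqn[11] != '.')
            return false;
        colon = strchr(nqn, ':');
        if (colon == NULL || colon[1] == '\0')
            return false;                 /* user string after ':' required */
        for (p = nqn + 12; p < colon;) {  /* reverse-domain labels */
            const char *end = p;
            while (end < colon && *end != '.')
                end++;
            if (end == p || end - p > 63) /* empty label or over DNS limit */
                return false;
            if (!isalpha((unsigned char)p[0]) ||
                !isalnum((unsigned char)end[-1]))
                return false;  /* must start letter, end alphanumeric */
            p = (end < colon) ? end + 1 : end;
        }
        return true;
    }

uuid-form NQNs ("nqn.2014-08.org.nvmexpress:uuid:...") take a separate exact-length, exact-format check, which is why "uuid is not the correct length" and "uuid is not formatted correctly" appear above as distinct rejections.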
00:03:50.496 [2024-07-15 21:40:04.204340] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:03:50.496 [2024-07-15 21:40:04.204354] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:50.496 [2024-07-15 21:40:04.204363] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:03:50.496 [2024-07-15 21:40:04.204390] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:50.496 [2024-07-15 21:40:04.204400] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:03:50.496 [2024-07-15 21:40:04.204411] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:50.496 [2024-07-15 21:40:04.204419] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:03:50.496 [2024-07-15 21:40:04.204428] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:50.496 passed 00:03:50.496 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:03:50.496 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:03:50.496 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:50.496 Test: test_spdk_nvmf_ns_visible ...passed 00:03:50.496 Test: test_reservation_register ...passed 00:03:50.496 Test: test_reservation_register_with_ptpl ...[2024-07-15 21:40:04.204437] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:50.496 [2024-07-15 21:40:04.204454] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:50.496 [2024-07-15 21:40:04.204462] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:50.496 [2024-07-15 21:40:04.204520] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:50.496 [2024-07-15 21:40:04.204532] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:50.496 [2024-07-15 21:40:04.204554] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2162:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:03:50.496 [2024-07-15 21:40:04.204583] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:03:50.496 [2024-07-15 21:40:04.204648] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.496 [2024-07-15 21:40:04.204663] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:50.496 passed 00:03:50.496 Test: test_reservation_acquire_preempt_1 ...passed 00:03:50.496 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-15 21:40:04.204817] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.496 passed 00:03:50.496 Test: test_reservation_release ...passed 00:03:50.496 Test: test_reservation_unregister_notification ...passed 00:03:50.496 Test: test_reservation_release_notification ...passed 00:03:50.496 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:50.496 Test: test_reservation_clear_notification ...[2024-07-15 21:40:04.204943] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 [2024-07-15 21:40:04.204965] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 [2024-07-15 21:40:04.204981] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 [2024-07-15 21:40:04.204995] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 passed 00:03:50.497 Test: test_reservation_preempt_notification ...[2024-07-15 21:40:04.205011] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 passed 00:03:50.497 Test: test_spdk_nvmf_ns_event ...passed 00:03:50.497 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:50.497 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:50.497 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-15 21:40:04.205026] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_ns_reservation_report ...passed 00:03:50.497 Test: test_nvmf_nqn_is_valid ...passed 00:03:50.497 Test: test_nvmf_ns_reservation_restore ...passed 00:03:50.497 Test: test_nvmf_subsystem_state_change ...passed 00:03:50.497 Test: test_nvmf_reservation_custom_ops ...passed 00:03:50.497 00:03:50.497 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.497 suites 1 1 n/a 0 0 00:03:50.497 tests 24 24 24 0 0 00:03:50.497 asserts 499 499 499 0 n/a 00:03:50.497 00:03:50.497 Elapsed time = 0.000 seconds 00:03:50.497 [2024-07-15 21:40:04.205113] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:50.497 [2024-07-15 21:40:04.205134] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:03:50.497 [2024-07-15 21:40:04.205155] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3470:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:50.497 [2024-07-15 21:40:04.205175] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:50.497 [2024-07-15 21:40:04.205184] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:cbe4a05c-42f2-11ef-9f7f-e9a656123a8": uuid is not the correct length 00:03:50.497 [2024-07-15 21:40:04.205193] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:50.497 [2024-07-15 21:40:04.205218] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:50.497 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:50.497 00:03:50.497 00:03:50.497 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.497 http://cunit.sourceforge.net/ 00:03:50.497 00:03:50.497 00:03:50.497 Suite: nvmf 00:03:50.497 Test: test_nvmf_tcp_create ...[2024-07-15 21:40:04.214288] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_destroy ...passed 00:03:50.497 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:50.497 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:50.497 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:50.497 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:50.497 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:50.497 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 21:40:04.225422] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225462] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225471] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225479] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:50.497 Test: test_nvmf_tcp_icreq_handle ...[2024-07-15 21:40:04.225512] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2136:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:50.497 [2024-07-15 21:40:04.225522] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 passed 00:03:50.497 Test: 
test_nvmf_tcp_check_xfer_type ...passed 00:03:50.497 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:50.497 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 21:40:04.225530] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204058f0 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225538] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2136:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:50.497 [2024-07-15 21:40:04.225546] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204058f0 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225562] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204058f0 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225571] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225579] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204058f0 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225594] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2532:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:50.497 [2024-07-15 21:40:04.225603] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225610] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204058f0 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225621] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2263:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820405178 00:03:50.497 [2024-07-15 21:40:04.225630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225638] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225653] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2322:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x8204059e8 00:03:50.497 [2024-07-15 21:40:04.225661] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225669] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225678] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:50.497 [2024-07-15 21:40:04.225686] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225694] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225702] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2312:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:50.497 [2024-07-15 21:40:04.225710] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225718] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225726] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-15 21:40:04.225734] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225742] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225750] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225758] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225766] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225775] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225782] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225790] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225798] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 [2024-07-15 21:40:04.225806] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1102:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:50.497 [2024-07-15 21:40:04.225815] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1622:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204059e8 is same with the state(5) to be set 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-15 21:40:04.231455] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-15 21:40:04.231477] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
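The tcp_ut failures above are driven by the PDU common-header state machine: an ICReq must be the first PDU on a connection, must carry the fixed 128-byte header length, and must advertise PFV 0; anything arriving before negotiation, a second ICReq, or a controller-to-host PDU type is answered with a TermReq. A hedged sketch of those checks (PDU type values are from the NVMe/TCP spec, where 0x05 is CapsuleResp; the names are not SPDK's):

    #include <stdbool.h>
    #include <stdint.h>

    enum pdu_type {              /* NVMe/TCP PDU types, per the spec */
        PDU_ICREQ       = 0x00,
        PDU_H2C_TERMREQ = 0x02,
        PDU_CAPSULE_CMD = 0x04,
        PDU_CAPSULE_RSP = 0x05,  /* controller-to-host: never valid here */
        PDU_H2C_DATA    = 0x06,
    };

    #define ICREQ_HLEN 128       /* "Expected ICReq header length 128" */

    struct pdu_ch { uint8_t type; uint8_t hlen; };

    /* Target-side check of an inbound PDU common header. */
    static bool pdu_ch_ok(const struct pdu_ch *ch, bool icreq_seen)
    {
        if (ch->type == PDU_ICREQ) {
            if (icreq_seen)
                return false;     /* "Already received ICreq PDU" */
            return ch->hlen == ICREQ_HLEN;
        }
        if (!icreq_seen)
            return false;         /* "connection is not negotiated" */
        switch (ch->type) {       /* only host-to-controller types pass */
        case PDU_H2C_TERMREQ:
        case PDU_CAPSULE_CMD:
        case PDU_H2C_DATA:
            return true;
        default:
            return false;         /* e.g. "Unexpected PDU type 0x05" */
        }
    }

The PFV check ("Expected ICReq PFV 0, got 1") happens one layer up, once the full ICReq payload has been read.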
00:03:50.497 passed 00:03:50.497 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:03:50.497 00:03:50.497 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.497 suites 1 1 n/a 0 0 00:03:50.497 tests 17 17 17 0 0 00:03:50.497 asserts 222 222 222 0 n/a 00:03:50.497 00:03:50.497 Elapsed time = 0.016 seconds[2024-07-15 21:40:04.231603] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:50.497 [2024-07-15 21:40:04.231618] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:50.497 [2024-07-15 21:40:04.231684] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:50.497 [2024-07-15 21:40:04.231695] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:50.497 00:03:50.497 21:40:04 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:50.497 00:03:50.497 00:03:50.497 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.497 http://cunit.sourceforge.net/ 00:03:50.497 00:03:50.497 00:03:50.498 Suite: nvmf 00:03:50.498 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:50.498 00:03:50.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.498 suites 1 1 n/a 0 0 00:03:50.498 tests 1 1 1 0 0 00:03:50.498 asserts 17 17 17 0 n/a 00:03:50.498 00:03:50.498 Elapsed time = 0.000 seconds 00:03:50.498 00:03:50.498 real 0m0.066s 00:03:50.498 user 0m0.026s 00:03:50.498 sys 0m0.039s 00:03:50.498 21:40:04 unittest.unittest_nvmf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.498 21:40:04 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:03:50.498 ************************************ 00:03:50.498 END TEST unittest_nvmf 00:03:50.498 ************************************ 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.498 21:40:04 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:50.498 21:40:04 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:50.498 21:40:04 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.498 ************************************ 00:03:50.498 START TEST unittest_nvmf_rdma 00:03:50.498 ************************************ 00:03:50.498 21:40:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:50.498 00:03:50.498 00:03:50.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.498 http://cunit.sourceforge.net/ 00:03:50.498 00:03:50.498 00:03:50.498 Suite: nvmf 00:03:50.498 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-15 21:40:04.294770] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
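Both rdma_ut parse_sgl rejections above are plain bounds checks on the request SGL: a keyed SGL may not describe more data than the transport's max I/O size (0x40000 > 0x20000 here), and in-capsule data must fit inside the capsule that was actually negotiated (0x1000 > 0x0). A rough sketch, with illustrative field names rather than SPDK's:

    #include <stdbool.h>
    #include <stdint.h>

    struct sgl_desc {
        uint32_t length;
        bool     in_capsule;  /* data carried inline in the capsule */
    };

    static bool rdma_sgl_ok(const struct sgl_desc *sgl,
                            uint32_t max_io_size, uint32_t in_capsule_size)
    {
        if (sgl->in_capsule)                       /* "In-capsule data length
                                                      ... exceeds capsule length" */
            return sgl->length <= in_capsule_size;
        return sgl->length <= max_io_size;         /* "SGL length ... exceeds
                                                      max io size" */
    }

The nvmf_rdma_resize_cq failures are the same idea applied to completion-queue capacity: the required CQE count is checked against the device's max_cqe before any resize is attempted, and iWARP refuses CQ resize outright.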
00:03:50.498 [2024-07-15 21:40:04.295080] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:50.498 passed 00:03:50.498 Test: test_spdk_nvmf_rdma_request_process ...[2024-07-15 21:40:04.295461] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:50.498 passed 00:03:50.498 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:50.498 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:50.498 Test: test_nvmf_rdma_opts_init ...passed 00:03:50.498 Test: test_nvmf_rdma_request_free_data ...passed 00:03:50.498 Test: test_nvmf_rdma_resources_create ...passed 00:03:50.498 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:50.498 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 21:40:04.296534] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:50.498 Using CQ of insufficient size may lead to CQ overrun 00:03:50.498 [2024-07-15 21:40:04.296562] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:50.498 [2024-07-15 21:40:04.296630] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:50.498 passed 00:03:50.498 00:03:50.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.498 suites 1 1 n/a 0 0 00:03:50.498 tests 9 9 9 0 0 00:03:50.498 asserts 579 579 579 0 n/a 00:03:50.498 00:03:50.498 Elapsed time = 0.000 seconds 00:03:50.498 00:03:50.498 real 0m0.009s 00:03:50.498 user 0m0.008s 00:03:50.498 sys 0m0.000s 00:03:50.498 21:40:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.498 ************************************ 00:03:50.498 END TEST unittest_nvmf_rdma 00:03:50.498 ************************************ 00:03:50.498 21:40:04 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.498 21:40:04 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:50.498 21:40:04 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.498 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.498 ************************************ 00:03:50.498 START TEST unittest_scsi 00:03:50.498 ************************************ 00:03:50.498 21:40:04 unittest.unittest_scsi -- common/autotest_common.sh@1117 -- # unittest_scsi 00:03:50.498 21:40:04 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:50.498 00:03:50.498 00:03:50.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.498 http://cunit.sourceforge.net/ 00:03:50.498 00:03:50.498 00:03:50.498 Suite: dev_suite 00:03:50.498 Test: dev_destruct_null_dev ...passed 00:03:50.498 Test: dev_destruct_zero_luns ...passed 00:03:50.498 Test: dev_destruct_null_lun ...passed 00:03:50.498 Test: dev_destruct_success ...passed 00:03:50.498 Test: dev_construct_num_luns_zero 
...[2024-07-15 21:40:04.351192] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:50.498 passed 00:03:50.498 Test: dev_construct_no_lun_zero ...passed 00:03:50.498 Test: dev_construct_null_lun ...passed 00:03:50.498 Test: dev_construct_name_too_long ...passed 00:03:50.498 Test: dev_construct_success ...passed 00:03:50.498 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:50.498 Test: dev_queue_mgmt_task_success ...passed 00:03:50.498 Test: dev_queue_task_success ...passed 00:03:50.498 Test: dev_stop_success ...passed 00:03:50.498 Test: dev_add_port_max_ports ...passed 00:03:50.498 Test: dev_add_port_construct_failure1 ...[2024-07-15 21:40:04.351458] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:50.498 [2024-07-15 21:40:04.351483] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:50.498 [2024-07-15 21:40:04.351503] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:50.498 [2024-07-15 21:40:04.351565] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:50.498 [2024-07-15 21:40:04.351585] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:50.498 passed 00:03:50.498 Test: dev_add_port_construct_failure2 ...passed 00:03:50.498 Test: dev_add_port_success1 ...passed 00:03:50.498 Test: dev_add_port_success2 ...passed 00:03:50.498 Test: dev_add_port_success3 ...passed 00:03:50.498 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:50.498 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:50.498 Test: dev_find_port_by_id_success ...passed 00:03:50.498 Test: dev_add_lun_bdev_not_found ...passed 00:03:50.498 Test: dev_add_lun_no_free_lun_id ...[2024-07-15 21:40:04.351605] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:50.498 passed 00:03:50.498 Test: dev_add_lun_success1 ...passed 00:03:50.498 Test: dev_add_lun_success2 ...passed 00:03:50.498 Test: dev_check_pending_tasks ...passed 00:03:50.498 Test: dev_iterate_luns ...passed 00:03:50.498 Test: dev_find_free_lun ...[2024-07-15 21:40:04.351919] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:50.498 passed 00:03:50.498 00:03:50.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.498 suites 1 1 n/a 0 0 00:03:50.498 tests 29 29 29 0 0 00:03:50.498 asserts 97 97 97 0 n/a 00:03:50.498 00:03:50.498 Elapsed time = 0.000 seconds 00:03:50.498 21:40:04 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:50.498 00:03:50.498 00:03:50.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.498 http://cunit.sourceforge.net/ 00:03:50.498 00:03:50.498 00:03:50.498 Suite: lun_suite 00:03:50.498 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:03:50.498 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 
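The dev_suite rejections above pin down the construction contract for a SCSI device: at least one LUN, LUN 0 present and non-NULL, a device name of at most 255 characters, at most 4 ports with unique ids, and bounded port names. A compressed sketch of the argument checks (the real spdk_scsi_dev_construct_ext() takes LUN id arrays and callbacks; this is illustrative only):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME  255  /* "name longer than maximum allowed
                                       length 255" */
    #define SCSI_DEV_MAX_PORTS 4    /* "device already has 4 ports" */

    static bool scsi_dev_args_ok(const char *name,
                                 const char *const *lun_names,
                                 size_t num_luns)
    {
        if (strlen(name) > SCSI_DEV_MAX_NAME)
            return false;
        if (num_luns == 0)          /* "no LUNs specified" */
            return false;
        if (lun_names[0] == NULL)   /* "no LUN 0 specified" /
                                       "NULL spdk_scsi_lun for LUN 0" */
            return false;
        return true;
    }

Port addition then fails either when the port table is full or when the id is already taken ("device already has port(1)"), and dev_add_lun bottoms out at "Free LUN ID is not found" once every id is in use.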
00:03:50.498 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:50.498 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:50.498 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:50.498 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:50.498 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:50.498 Test: lun_append_task_null_lun_not_supported ...passed 00:03:50.498 Test: lun_execute_scsi_task_pending ...passed 00:03:50.498 Test: lun_execute_scsi_task_complete ...passed 00:03:50.498 Test: lun_execute_scsi_task_resize ...passed 00:03:50.498 Test: lun_destruct_success ...passed 00:03:50.498 Test: lun_construct_null_ctx ...passed 00:03:50.498 Test: lun_construct_success ...passed 00:03:50.498 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:50.498 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:50.498 Test: lun_check_pending_tasks_only_for_specific_initiator ...[2024-07-15 21:40:04.360112] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:50.498 [2024-07-15 21:40:04.360291] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:50.498 [2024-07-15 21:40:04.360312] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:50.498 [2024-07-15 21:40:04.360339] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:50.498 passed 00:03:50.498 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:50.498 00:03:50.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.498 suites 1 1 n/a 0 0 00:03:50.498 tests 18 18 18 0 0 00:03:50.498 asserts 153 153 153 0 n/a 00:03:50.498 00:03:50.498 Elapsed time = 0.000 seconds 00:03:50.498 21:40:04 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:50.498 00:03:50.498 00:03:50.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.498 http://cunit.sourceforge.net/ 00:03:50.498 00:03:50.498 00:03:50.498 Suite: scsi_suite 00:03:50.498 Test: scsi_init ...passed 00:03:50.498 00:03:50.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.499 suites 1 1 n/a 0 0 00:03:50.499 tests 1 1 1 0 0 00:03:50.499 asserts 1 1 1 0 n/a 00:03:50.499 00:03:50.499 Elapsed time = 0.000 seconds 00:03:50.499 21:40:04 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:50.499 00:03:50.499 00:03:50.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.499 http://cunit.sourceforge.net/ 00:03:50.499 00:03:50.499 00:03:50.499 Suite: translation_suite 00:03:50.499 Test: mode_select_6_test ...passed 00:03:50.499 Test: mode_select_6_test2 ...passed 00:03:50.499 Test: mode_sense_6_test ...passed 00:03:50.499 Test: mode_sense_10_test ...passed 00:03:50.499 Test: inquiry_evpd_test ...passed 00:03:50.499 Test: inquiry_standard_test ...passed 00:03:50.499 Test: inquiry_overflow_test ...passed 00:03:50.499 Test: task_complete_test ...passed 00:03:50.499 Test: lba_range_test ...passed 00:03:50.499 Test: xfer_len_test ...[2024-07-15 21:40:04.371047] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:50.499 passed 00:03:50.499 Test: xfer_test ...passed 00:03:50.499 Test: scsi_name_padding_test ...passed 
00:03:50.499 Test: get_dif_ctx_test ...passed 00:03:50.499 Test: unmap_split_test ...passed 00:03:50.499 00:03:50.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.499 suites 1 1 n/a 0 0 00:03:50.499 tests 14 14 14 0 0 00:03:50.499 asserts 1205 1205 1205 0 n/a 00:03:50.499 00:03:50.499 Elapsed time = 0.000 seconds 00:03:50.499 21:40:04 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:50.499 00:03:50.499 00:03:50.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.499 http://cunit.sourceforge.net/ 00:03:50.499 00:03:50.499 00:03:50.499 Suite: reservation_suite 00:03:50.499 Test: test_reservation_register ...[2024-07-15 21:40:04.377272] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 passed 00:03:50.499 Test: test_reservation_reserve ...[2024-07-15 21:40:04.377561] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377587] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:50.499 [2024-07-15 21:40:04.377606] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:50.499 passed 00:03:50.499 Test: test_all_registrant_reservation_reserve ...passed 00:03:50.499 Test: test_all_registrant_reservation_access ...[2024-07-15 21:40:04.377630] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377667] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377688] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:03:50.499 passed 00:03:50.499 Test: test_reservation_preempt_non_all_regs ...passed 00:03:50.499 Test: test_reservation_preempt_all_regs ...passed 00:03:50.499 Test: test_reservation_cmds_conflict ...[2024-07-15 21:40:04.377703] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:03:50.499 [2024-07-15 21:40:04.377731] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377747] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:50.499 [2024-07-15 21:40:04.377771] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377797] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 [2024-07-15 21:40:04.377815] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:50.499 [2024-07-15 21:40:04.377830] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:50.499 [2024-07-15 
21:40:04.377844] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:50.499 passed 00:03:50.499 Test: test_scsi2_reserve_release ...passed 00:03:50.499 Test: test_pr_with_scsi2_reserve_release ...passed 00:03:50.499 00:03:50.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.499 suites 1 1 n/a 0 0 00:03:50.499 tests 9 9 9 0 0 00:03:50.499 asserts 344 344 344 0 n/a 00:03:50.499 00:03:50.499 Elapsed time = 0.000 seconds 00:03:50.499 [2024-07-15 21:40:04.377859] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:50.499 [2024-07-15 21:40:04.377873] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:50.499 [2024-07-15 21:40:04.377904] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:50.499 00:03:50.499 real 0m0.034s 00:03:50.499 user 0m0.021s 00:03:50.499 sys 0m0.019s 00:03:50.499 21:40:04 unittest.unittest_scsi -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.499 21:40:04 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:03:50.499 ************************************ 00:03:50.499 END TEST unittest_scsi 00:03:50.499 ************************************ 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.499 21:40:04 unittest -- unit/unittest.sh@278 -- # uname -s 00:03:50.499 21:40:04 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:03:50.499 21:40:04 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.499 ************************************ 00:03:50.499 START TEST unittest_thread 00:03:50.499 ************************************ 00:03:50.499 21:40:04 unittest.unittest_thread -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:50.499 00:03:50.499 00:03:50.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.499 http://cunit.sourceforge.net/ 00:03:50.499 00:03:50.499 00:03:50.499 Suite: io_channel 00:03:50.499 Test: thread_alloc ...passed 00:03:50.499 Test: thread_send_msg ...passed 00:03:50.499 Test: thread_poller ...passed 00:03:50.499 Test: poller_pause ...passed 00:03:50.499 Test: thread_for_each ...passed 00:03:50.499 Test: for_each_channel_remove ...passed 00:03:50.499 Test: for_each_channel_unreg ...[2024-07-15 21:40:04.430869] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x820b44d14 already registered (old:0x40831067000 new:0x40831067180) 00:03:50.499 passed 00:03:50.499 Test: thread_name ...passed 00:03:50.499 Test: channel ...passed 00:03:50.499 Test: channel_destroy_races ...[2024-07-15 21:40:04.431656] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228838 00:03:50.499 passed 00:03:50.499 Test: thread_exit_test ...[2024-07-15 21:40:04.432312] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 
640:thread_exit: *ERROR*: thread 0x4083102ca80 got timeout, and move it to the exited state forcefully 00:03:50.499 passed 00:03:50.499 Test: thread_update_stats_test ...passed 00:03:50.499 Test: nested_channel ...passed 00:03:50.499 Test: device_unregister_and_thread_exit_race ...passed 00:03:50.499 Test: cache_closest_timed_poller ...passed 00:03:50.499 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:50.499 Test: io_device_lookup ...passed 00:03:50.499 Test: spdk_spin ...[2024-07-15 21:40:04.433759] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:50.499 [2024-07-15 21:40:04.433781] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820b44d10 00:03:50.499 [2024-07-15 21:40:04.433795] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:50.499 [2024-07-15 21:40:04.434001] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:50.499 [2024-07-15 21:40:04.434015] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820b44d10 00:03:50.499 [2024-07-15 21:40:04.434028] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:50.499 [2024-07-15 21:40:04.434041] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820b44d10 00:03:50.499 [2024-07-15 21:40:04.434053] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:50.499 [2024-07-15 21:40:04.434066] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820b44d10 00:03:50.499 [2024-07-15 21:40:04.434079] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:50.499 [2024-07-15 21:40:04.434091] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820b44d10 00:03:50.499 passed 00:03:50.499 Test: for_each_channel_and_thread_exit_race ...passed 00:03:50.499 Test: for_each_thread_and_thread_exit_race ...passed 00:03:50.499 00:03:50.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.499 suites 1 1 n/a 0 0 00:03:50.499 tests 20 20 20 0 0 00:03:50.499 asserts 409 409 409 0 n/a 00:03:50.499 00:03:50.499 Elapsed time = 0.008 seconds 00:03:50.499 00:03:50.499 real 0m0.013s 00:03:50.499 user 0m0.012s 00:03:50.499 sys 0m0.008s 00:03:50.499 21:40:04 unittest.unittest_thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.499 21:40:04 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:03:50.499 ************************************ 00:03:50.499 END TEST unittest_thread 00:03:50.499 ************************************ 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.499 21:40:04 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 
']' 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.499 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.499 ************************************ 00:03:50.499 START TEST unittest_iobuf 00:03:50.499 ************************************ 00:03:50.500 21:40:04 unittest.unittest_iobuf -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: io_channel 00:03:50.500 Test: iobuf ...passed 00:03:50.500 Test: iobuf_cache ...[2024-07-15 21:40:04.474892] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:50.500 [2024-07-15 21:40:04.475064] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:50.500 [2024-07-15 21:40:04.475089] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:50.500 [2024-07-15 21:40:04.475103] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:50.500 [2024-07-15 21:40:04.475119] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:50.500 [2024-07-15 21:40:04.475132] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
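The iobuf_cache warnings above describe the trade the iobuf layer makes: every channel tries to pre-populate a per-thread cache of small and large buffers out of a shared pool, so a pool sized below the sum of the requested caches leaves entries unfilled, which is what the "4/5 entries" wording and the pointer at spdk_iobuf_opts.small_pool_count report. A toy model of that populate step, with made-up names:

    #include <stdint.h>

    struct shared_pool { uint32_t free_bufs; };

    /* Pull up to `want` buffers from the shared pool into a channel
     * cache; returns how many were actually obtained. */
    static uint32_t cache_populate(struct shared_pool *pool, uint32_t want)
    {
        uint32_t got = (want < pool->free_bufs) ? want : pool->free_bufs;

        pool->free_bufs -= got;
        return got;  /* got < want is exactly the "4/5 entries" case */
    }

With small_pool_count = 4 and two modules asking for caches of 5 and 4, the first populate drains the pool at 4/5 and the second gets 0/4, matching the two distinct warnings in the log above.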
00:03:50.500 passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 2 2 2 0 0 00:03:50.500 asserts 107 107 107 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 00:03:50.500 real 0m0.004s 00:03:50.500 user 0m0.000s 00:03:50.500 sys 0m0.008s 00:03:50.500 21:40:04 unittest.unittest_iobuf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.500 21:40:04 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:03:50.500 ************************************ 00:03:50.500 END TEST unittest_iobuf 00:03:50.500 ************************************ 00:03:50.500 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.500 21:40:04 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:03:50.500 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.500 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.500 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.500 ************************************ 00:03:50.500 START TEST unittest_util 00:03:50.500 ************************************ 00:03:50.500 21:40:04 unittest.unittest_util -- common/autotest_common.sh@1117 -- # unittest_util 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: base64 00:03:50.500 Test: test_base64_get_encoded_strlen ...passed 00:03:50.500 Test: test_base64_get_decoded_len ...passed 00:03:50.500 Test: test_base64_encode ...passed 00:03:50.500 Test: test_base64_decode ...passed 00:03:50.500 Test: test_base64_urlsafe_encode ...passed 00:03:50.500 Test: test_base64_urlsafe_decode ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 6 6 6 0 0 00:03:50.500 asserts 112 112 112 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: bit_array 00:03:50.500 Test: test_1bit ...passed 00:03:50.500 Test: test_64bit ...passed 00:03:50.500 Test: test_find ...passed 00:03:50.500 Test: test_resize ...passed 00:03:50.500 Test: test_errors ...passed 00:03:50.500 Test: test_count ...passed 00:03:50.500 Test: test_mask_store_load ...passed 00:03:50.500 Test: test_mask_clear ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 8 8 8 0 0 00:03:50.500 asserts 5075 5075 5075 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: cpuset 00:03:50.500 Test: test_cpuset ...passed 00:03:50.500 Test: test_cpuset_parse ...[2024-07-15 
21:40:04.537679] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:50.500 [2024-07-15 21:40:04.537949] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:50.500 [2024-07-15 21:40:04.537975] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:50.500 [2024-07-15 21:40:04.537992] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:50.500 [2024-07-15 21:40:04.538008] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:50.500 [2024-07-15 21:40:04.538025] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:50.500 [2024-07-15 21:40:04.538041] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:50.500 [2024-07-15 21:40:04.538057] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:50.500 passed 00:03:50.500 Test: test_cpuset_fmt ...passed 00:03:50.500 Test: test_cpuset_foreach ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 4 4 4 0 0 00:03:50.500 asserts 90 90 90 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: crc16 00:03:50.500 Test: test_crc16_t10dif ...passed 00:03:50.500 Test: test_crc16_t10dif_seed ...passed 00:03:50.500 Test: test_crc16_t10dif_copy ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 3 3 3 0 0 00:03:50.500 asserts 5 5 5 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: crc32_ieee 00:03:50.500 Test: test_crc32_ieee ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 1 1 1 0 0 00:03:50.500 asserts 1 1 1 0 n/a 00:03:50.500 00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: crc32c 00:03:50.500 Test: test_crc32c ...passed 00:03:50.500 Test: test_crc32c_nvme ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 2 2 2 0 0 00:03:50.500 asserts 16 16 16 0 n/a 00:03:50.500 
00:03:50.500 Elapsed time = 0.000 seconds 00:03:50.500 21:40:04 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:50.500 00:03:50.500 00:03:50.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.500 http://cunit.sourceforge.net/ 00:03:50.500 00:03:50.500 00:03:50.500 Suite: crc64 00:03:50.500 Test: test_crc64_nvme ...passed 00:03:50.500 00:03:50.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.500 suites 1 1 n/a 0 0 00:03:50.500 tests 1 1 1 0 0 00:03:50.500 asserts 4 4 4 0 n/a 00:03:50.500 00:03:50.501 Elapsed time = 0.000 seconds 00:03:50.501 21:40:04 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:50.501 00:03:50.501 00:03:50.501 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.501 http://cunit.sourceforge.net/ 00:03:50.501 00:03:50.501 00:03:50.501 Suite: string 00:03:50.501 Test: test_parse_ip_addr ...passed 00:03:50.501 Test: test_str_chomp ...passed 00:03:50.501 Test: test_parse_capacity ...passed 00:03:50.501 Test: test_sprintf_append_realloc ...passed 00:03:50.501 Test: test_strtol ...passed 00:03:50.501 Test: test_strtoll ...passed 00:03:50.501 Test: test_strarray ...passed 00:03:50.501 Test: test_strcpy_replace ...passed 00:03:50.501 00:03:50.501 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.501 suites 1 1 n/a 0 0 00:03:50.501 tests 8 8 8 0 0 00:03:50.501 asserts 161 161 161 0 n/a 00:03:50.501 00:03:50.501 Elapsed time = 0.000 seconds 00:03:50.501 21:40:04 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:50.501 00:03:50.501 00:03:50.501 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.501 http://cunit.sourceforge.net/ 00:03:50.501 00:03:50.501 00:03:50.501 Suite: dif 00:03:50.501 Test: dif_generate_and_verify_test ...[2024-07-15 21:40:04.567125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:50.501 [2024-07-15 21:40:04.567383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:50.501 [2024-07-15 21:40:04.567461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:50.501 [2024-07-15 21:40:04.567525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:50.501 [2024-07-15 21:40:04.567572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:50.501 [2024-07-15 21:40:04.567615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:50.501 passed 00:03:50.501 Test: dif_disable_check_test ...[2024-07-15 21:40:04.567778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:50.501 [2024-07-15 21:40:04.567832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:50.501 [2024-07-15 21:40:04.567886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
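The cpuset *ERROR* lines above are expected output: test_cpuset_parse feeds spdk_cpuset_parse() deliberately malformed core lists ('[', '[]', '[10--11]', a reversed range, a core number past the 1024-CPU limit) and asserts that each one is rejected. A minimal sketch of that API, assuming the public helpers declared in spdk/cpuset.h behave as the suite exercises them; the driver loop and output format here are mine:

#include <stdio.h>
#include "spdk/cpuset.h"

int main(void)
{
        /* The last four masks mirror inputs rejected in the log above. */
        const char *masks[] = { "[0,2,4-7]", "[", "[]", "[10--11]", "[1025]" };
        struct spdk_cpuset *set = spdk_cpuset_alloc();

        for (size_t i = 0; i < sizeof(masks) / sizeof(masks[0]); i++) {
                /* Returns 0 on success, negative on a malformed or out-of-range list. */
                if (spdk_cpuset_parse(set, masks[i]) == 0) {
                        printf("%-10s -> 0x%s\n", masks[i], spdk_cpuset_fmt(set));
                } else {
                        printf("%-10s -> rejected\n", masks[i]);
                }
        }
        spdk_cpuset_free(set);
        return 0;
}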
00:03:50.501 passed 00:03:50.501 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 21:40:04.568048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:50.501 [2024-07-15 21:40:04.568104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:50.501 [2024-07-15 21:40:04.568159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:50.501 [2024-07-15 21:40:04.568213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:50.501 [2024-07-15 21:40:04.568264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:50.501 [2024-07-15 21:40:04.568589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:50.501 passed 00:03:50.501 Test: dif_apptag_mask_test ...[2024-07-15 21:40:04.568650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:50.501 [2024-07-15 21:40:04.568702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:50.501 passed 00:03:50.501 Test: dif_sec_512_md_0_error_test ...[2024-07-15 21:40:04.568752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:50.501 passed 00:03:50.501 Test: dif_sec_4096_md_0_error_test ...[2024-07-15 21:40:04.568771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:50.501 passed 00:03:50.501 Test: dif_sec_4100_md_128_error_test ...[2024-07-15 21:40:04.568786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
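The "Metadata size is smaller than DIF size" message just above (dif.c:510, spdk_dif_ctx_init) comes from the dif_sec_*_error tests probing the context-init parameter checks: T10 protection information occupies 8 bytes per block, so per-block metadata smaller than that is rejected, as is a zero block size (the dif.c:528 message that follows). The same two checks in isolation, as a sketch; the constant and the function name are mine, not lib/util/dif.c's:

#include <stdbool.h>
#include <stdint.h>

#define PI_SIZE 8u /* guard (2B) + app tag (2B) + ref tag (4B) */

static bool
dif_params_ok(uint32_t block_size, uint32_t md_size)
{
        if (md_size < PI_SIZE) {
                return false; /* "Metadata size is smaller than DIF size." */
        }
        if (block_size == 0) {
                return false; /* "Zero block size is not allowed ..." */
        }
        return true;
}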
00:03:50.501 [2024-07-15 21:40:04.568814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:50.501 [2024-07-15 21:40:04.568830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:50.501 passed 00:03:50.501 Test: dif_guard_seed_test ...passed 00:03:50.501 Test: dif_guard_value_test ...passed 00:03:50.501 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:50.501 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:50.501 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:40:04.574774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ff4c, Actual=fd4c 00:03:50.501 [2024-07-15 21:40:04.575156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fc21, Actual=fe21 00:03:50.501 [2024-07-15 21:40:04.575511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.575850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.576189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.501 [2024-07-15 21:40:04.576548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.501 [2024-07-15 21:40:04.576906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=18a6 00:03:50.501 [2024-07-15 21:40:04.577185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=eca8 00:03:50.501 [2024-07-15 21:40:04.577458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, 
Expected=18b753ed, Actual=1ab753ed 00:03:50.501 [2024-07-15 21:40:04.577784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=3a574660, Actual=38574660 00:03:50.501 [2024-07-15 21:40:04.578097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.578424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.578759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.501 [2024-07-15 21:40:04.579098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.501 [2024-07-15 21:40:04.579423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=49675ceb 00:03:50.501 [2024-07-15 21:40:04.579691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=f6d3a6ec 00:03:50.501 [2024-07-15 21:40:04.579953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.501 [2024-07-15 21:40:04.580266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.501 [2024-07-15 21:40:04.580579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.580889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.501 [2024-07-15 21:40:04.581201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.501 [2024-07-15 21:40:04.581514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.501 [2024-07-15 21:40:04.581826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.501 [2024-07-15 21:40:04.582084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.501 passed 00:03:50.501 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 21:40:04.582209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.501 [2024-07-15 21:40:04.582256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:50.501 [2024-07-15 21:40:04.582313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.582370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.582429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.582483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.582543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.502 [2024-07-15 21:40:04.582580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eca8 00:03:50.502 [2024-07-15 21:40:04.582625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.502 [2024-07-15 21:40:04.582675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:50.502 [2024-07-15 21:40:04.582727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.582786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.582837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.582879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.582920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.502 [2024-07-15 21:40:04.582964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f6d3a6ec 00:03:50.502 [2024-07-15 21:40:04.582994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.502 [2024-07-15 21:40:04.583035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.502 [2024-07-15 21:40:04.583076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 [2024-07-15 21:40:04.583199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 passed 00:03:50.502 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 21:40:04.583240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=239900efa5c2a7ab 00:03:50.502 [2024-07-15 21:40:04.583268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.502 [2024-07-15 21:40:04.583300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.502 [2024-07-15 21:40:04.583341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:50.502 [2024-07-15 21:40:04.583382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.583505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.583546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.502 [2024-07-15 21:40:04.583573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eca8 00:03:50.502 [2024-07-15 21:40:04.583601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.502 [2024-07-15 21:40:04.583642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:50.502 [2024-07-15 21:40:04.583683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.583774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.583815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.583856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.502 [2024-07-15 21:40:04.583884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f6d3a6ec 00:03:50.502 [2024-07-15 21:40:04.583912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.502 [2024-07-15 21:40:04.583952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.502 [2024-07-15 21:40:04.583993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 [2024-07-15 21:40:04.584114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 [2024-07-15 21:40:04.584155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.502 passed 00:03:50.502 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 21:40:04.584182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.502 [2024-07-15 21:40:04.584214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.502 [2024-07-15 21:40:04.584255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:50.502 [2024-07-15 21:40:04.584296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.584426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.584467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.502 [2024-07-15 21:40:04.584495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eca8 00:03:50.502 [2024-07-15 21:40:04.584523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.502 [2024-07-15 21:40:04.584564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:50.502 [2024-07-15 21:40:04.584604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.584686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.584726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, 
Expected=58, Actual=200000000000058 00:03:50.502 [2024-07-15 21:40:04.584768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.502 [2024-07-15 21:40:04.584805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f6d3a6ec 00:03:50.502 [2024-07-15 21:40:04.584843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.502 [2024-07-15 21:40:04.584899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.502 [2024-07-15 21:40:04.584952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.585009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.585065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 [2024-07-15 21:40:04.585122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.502 passed 00:03:50.502 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 21:40:04.585170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.502 [2024-07-15 21:40:04.585208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.502 [2024-07-15 21:40:04.585251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.502 [2024-07-15 21:40:04.585305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:50.502 [2024-07-15 21:40:04.585363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.585420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.502 [2024-07-15 21:40:04.585486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.585544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.502 [2024-07-15 21:40:04.585600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.502 passed 00:03:50.502 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 21:40:04.585635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eca8 00:03:50.502 [2024-07-15 21:40:04.585684] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.502 [2024-07-15 21:40:04.585735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:50.502 [2024-07-15 21:40:04.585788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.585842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.585893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.585950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.586002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.503 [2024-07-15 21:40:04.586042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f6d3a6ec 00:03:50.503 [2024-07-15 21:40:04.586072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.503 [2024-07-15 21:40:04.586112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.503 [2024-07-15 21:40:04.586153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.503 [2024-07-15 21:40:04.586274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.503 [2024-07-15 21:40:04.586315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.503 passed 00:03:50.503 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 21:40:04.586342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.503 [2024-07-15 21:40:04.586373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.503 [2024-07-15 21:40:04.586413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:50.503 [2024-07-15 21:40:04.586453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586494] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.503 [2024-07-15 21:40:04.586575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.503 [2024-07-15 21:40:04.586615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.503 [2024-07-15 21:40:04.586643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=eca8 00:03:50.503 passed 00:03:50.503 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 21:40:04.586673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.503 [2024-07-15 21:40:04.586714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:50.503 [2024-07-15 21:40:04.586754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.586835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.586875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.586915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.503 [2024-07-15 21:40:04.586956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f6d3a6ec 00:03:50.503 [2024-07-15 21:40:04.586987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.503 [2024-07-15 21:40:04.587028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:50.503 [2024-07-15 21:40:04.587068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.587108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.587149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.503 [2024-07-15 21:40:04.587189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.503 [2024-07-15 21:40:04.587229] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.503 [2024-07-15 21:40:04.587257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7d671ee9d6307f0a 00:03:50.503 passed 00:03:50.503 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:50.503 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:50.503 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:50.503 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:40:04.593247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ff4c, Actual=fd4c 00:03:50.503 [2024-07-15 21:40:04.593482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f5e, Actual=d5e 00:03:50.503 [2024-07-15 21:40:04.593674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.593861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.594055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.503 [2024-07-15 21:40:04.594252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.503 [2024-07-15 21:40:04.594436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=18a6 00:03:50.503 [2024-07-15 21:40:04.594615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=a805 00:03:50.503 [2024-07-15 21:40:04.594799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=18b753ed, Actual=1ab753ed 00:03:50.503 [2024-07-15 21:40:04.595001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=9ea26cbb, Actual=9ca26cbb 00:03:50.503 [2024-07-15 21:40:04.595184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.595389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.595602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.503 [2024-07-15 21:40:04.595809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.503 [2024-07-15 21:40:04.596003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=49675ceb 00:03:50.503 [2024-07-15 21:40:04.596183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=794b457 00:03:50.503 [2024-07-15 21:40:04.596360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.503 [2024-07-15 21:40:04.596537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=bac72d898cb24f36, Actual=b8c72d898cb24f36 00:03:50.503 [2024-07-15 21:40:04.596712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.596888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.597064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.503 [2024-07-15 21:40:04.597244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.503 [2024-07-15 21:40:04.597447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.503 [2024-07-15 21:40:04.597661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=d87fa5acd10e620b 00:03:50.503 passed 00:03:50.503 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 21:40:04.597733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.503 [2024-07-15 21:40:04.597780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:03:50.503 [2024-07-15 21:40:04.597824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.597866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.597909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.503 [2024-07-15 21:40:04.597952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.503 [2024-07-15 21:40:04.597995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.503 [2024-07-15 21:40:04.598038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=3f9f 00:03:50.503 [2024-07-15 21:40:04.598081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.503 [2024-07-15 21:40:04.598125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:03:50.503 [2024-07-15 21:40:04.598168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.598210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.503 [2024-07-15 21:40:04.598253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.598296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.503 [2024-07-15 21:40:04.598338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.504 [2024-07-15 21:40:04.598381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=2422a020 00:03:50.504 [2024-07-15 21:40:04.598425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.504 [2024-07-15 21:40:04.598467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:03:50.504 [2024-07-15 21:40:04.598510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.598553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.598596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.504 [2024-07-15 21:40:04.598639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.504 passed 00:03:50.504 Test: dix_sec_512_md_0_error ...passed 00:03:50.504 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-15 21:40:04.598682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.504 [2024-07-15 21:40:04.598725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=cd003edfc8ddedce 00:03:50.504 [2024-07-15 21:40:04.598736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
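Nearly every *ERROR* between dif_guard_value_test and this point is the same three comparisons run over parameter sweeps: the verifier recomputes each block's guard and then checks guard, app tag, and ref tag against the stored protection information, printing the Expected/Actual pairs above whenever an injected corruption is caught. A schematic of one block check under the conventional T10 PI layout; this is an illustration, not lib/util/dif.c's _dif_verify (the real entry points, spdk_dif_ctx_init and spdk_dif_verify, take a much richer context, and byte-order handling is omitted here):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/crc16.h"

struct t10_pi {
        uint16_t guard;   /* CRC16-T10DIF over the data block */
        uint16_t app_tag; /* application-defined */
        uint32_t ref_tag; /* typically the low 32 bits of the LBA */
};

static bool
verify_block(const void *data, size_t block_size, const struct t10_pi *pi,
             uint16_t expected_app_tag, uint32_t lba)
{
        uint16_t guard = spdk_crc16_t10dif(0, data, block_size);

        if (pi->guard != guard) {
                fprintf(stderr, "Failed to compare Guard: LBA=%u, Expected=%x, Actual=%x\n",
                        lba, guard, pi->guard);
                return false;
        }
        if (pi->app_tag != expected_app_tag) {
                return false; /* "Failed to compare App Tag" */
        }
        if (pi->ref_tag != lba) {
                return false; /* "Failed to compare Ref Tag" */
        }
        return true;
}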
00:03:50.504 passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:50.504 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:50.504 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:50.504 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:50.504 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 21:40:04.604250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ff4c, Actual=fd4c 00:03:50.504 [2024-07-15 21:40:04.604434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f5e, Actual=d5e 00:03:50.504 [2024-07-15 21:40:04.604610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.604785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.604959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.504 [2024-07-15 21:40:04.605134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=200005e 00:03:50.504 [2024-07-15 21:40:04.605307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=18a6 00:03:50.504 [2024-07-15 21:40:04.605479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=a805 00:03:50.504 [2024-07-15 21:40:04.605651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=18b753ed, Actual=1ab753ed 00:03:50.504 [2024-07-15 21:40:04.605823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=9ea26cbb, Actual=9ca26cbb 00:03:50.504 [2024-07-15 21:40:04.605994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.606166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.606337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.504 [2024-07-15 21:40:04.606508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=20000000000005e 00:03:50.504 [2024-07-15 21:40:04.606680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=49675ceb 00:03:50.504 [2024-07-15 21:40:04.606851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=794b457 
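From dix_sec_512_md_0_error onward the suite re-runs the same sweeps for DIX, the separate-metadata variant: the dif_* cases keep the 8-byte PI interleaved after each data block in a single buffer, while the dix_* cases carry data and protection information in two buffers. Two small helpers, with names of my own choosing, to make the layout difference concrete:

#include <stddef.h>

/* Byte offset of block i's protection information under each scheme. */
static inline size_t
dif_pi_offset(size_t i, size_t block_size, size_t md_size)
{
        return i * (block_size + md_size) + block_size; /* inline, after block i */
}

static inline size_t
dix_pi_offset(size_t i, size_t md_size)
{
        return i * md_size; /* in the separate metadata buffer */
}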
00:03:50.504 [2024-07-15 21:40:04.607032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.504 [2024-07-15 21:40:04.607205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=bac72d898cb24f36, Actual=b8c72d898cb24f36 00:03:50.504 [2024-07-15 21:40:04.607377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.607549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.607721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.504 [2024-07-15 21:40:04.607893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=25e 00:03:50.504 [2024-07-15 21:40:04.608065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.504 [2024-07-15 21:40:04.608237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=d87fa5acd10e620b 00:03:50.504 passed 00:03:50.504 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 21:40:04.608289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:50.504 [2024-07-15 21:40:04.608332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:03:50.504 [2024-07-15 21:40:04.608375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.608417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.608460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.504 [2024-07-15 21:40:04.608503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:50.504 [2024-07-15 21:40:04.608546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=18a6 00:03:50.504 [2024-07-15 21:40:04.608588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=3f9f 00:03:50.504 [2024-07-15 21:40:04.608631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:50.504 [2024-07-15 21:40:04.608673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:03:50.504 [2024-07-15 21:40:04.608715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.608757] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.608800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.504 [2024-07-15 21:40:04.608842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:50.504 [2024-07-15 21:40:04.608884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=49675ceb 00:03:50.504 [2024-07-15 21:40:04.608926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=2422a020 00:03:50.504 [2024-07-15 21:40:04.608969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:50.504 [2024-07-15 21:40:04.609011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:03:50.504 [2024-07-15 21:40:04.609061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.609104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:50.504 [2024-07-15 21:40:04.609148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.504 [2024-07-15 21:40:04.609191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:50.504 [2024-07-15 21:40:04.609233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=239900efa5c2a7ab 00:03:50.504 passed 00:03:50.504 Test: set_md_interleave_iovs_test ...[2024-07-15 21:40:04.609276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=cd003edfc8ddedce 00:03:50.504 passed 00:03:50.504 Test: set_md_interleave_iovs_split_test ...passed 00:03:50.504 Test: dif_generate_stream_pi_16_test ...passed 00:03:50.504 Test: dif_generate_stream_test ...passed 00:03:50.504 Test: set_md_interleave_iovs_alignment_test ...[2024-07-15 21:40:04.610148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
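set_md_interleave_iovs_alignment_test ends on dif.c:1822's "Buffer overflow will occur": spdk_dif_set_md_interleave_iovs refusing an output vector that cannot hold the interleaved layout. The capacity arithmetic behind that guard, reduced to a single helper (the name is mine; the real check lives inside lib/util/dif.c):

#include <stdint.h>

static inline uint64_t
interleaved_size(uint64_t data_len, uint32_t block_size, uint32_t md_size)
{
        uint64_t blocks = data_len / block_size; /* whole blocks assumed */

        return blocks * (uint64_t)(block_size + md_size);
}
/* e.g. 8 data blocks of 512B with 8B of PI each need 8 * 520 = 4160 bytes. */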
00:03:50.504 passed 00:03:50.504 Test: dif_generate_split_test ...passed 00:03:50.504 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:50.504 Test: dif_verify_split_test ...passed 00:03:50.504 Test: dif_verify_stream_multi_segments_test ...passed 00:03:50.504 Test: update_crc32c_pi_16_test ...passed 00:03:50.504 Test: update_crc32c_test ...passed 00:03:50.504 Test: dif_update_crc32c_split_test ...passed 00:03:50.504 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:50.504 Test: get_range_with_md_test ...passed 00:03:50.504 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:50.504 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:50.504 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:50.504 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:50.504 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:50.504 Test: dif_generate_and_verify_unmap_test ...passed 00:03:50.504 00:03:50.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.504 suites 1 1 n/a 0 0 00:03:50.504 tests 79 79 79 0 0 00:03:50.504 asserts 3584 3584 3584 0 n/a 00:03:50.504 00:03:50.504 Elapsed time = 0.047 seconds 00:03:50.504 21:40:04 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:50.504 00:03:50.504 00:03:50.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.504 http://cunit.sourceforge.net/ 00:03:50.504 00:03:50.504 00:03:50.504 Suite: iov 00:03:50.504 Test: test_single_iov ...passed 00:03:50.504 Test: test_simple_iov ...passed 00:03:50.504 Test: test_complex_iov ...passed 00:03:50.504 Test: test_iovs_to_buf ...passed 00:03:50.504 Test: test_buf_to_iovs ...passed 00:03:50.505 Test: test_memset ...passed 00:03:50.505 Test: test_iov_one ...passed 00:03:50.505 Test: test_iov_xfer ...passed 00:03:50.505 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 1 1 n/a 0 0 00:03:50.505 tests 8 8 8 0 0 00:03:50.505 asserts 156 156 156 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 21:40:04 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: math 00:03:50.505 Test: test_serial_number_arithmetic ...passed 00:03:50.505 Suite: erase 00:03:50.505 Test: test_memset_s ...passed 00:03:50.505 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 2 2 n/a 0 0 00:03:50.505 tests 2 2 2 0 0 00:03:50.505 asserts 18 18 18 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 21:40:04 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: pipe 00:03:50.505 Test: test_create_destroy ...passed 00:03:50.505 Test: test_write_get_buffer ...passed 00:03:50.505 Test: test_write_advance ...passed 00:03:50.505 Test: test_read_get_buffer ...passed 00:03:50.505 Test: test_read_advance ...passed 00:03:50.505 Test: 
test_data ...passed 00:03:50.505 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 1 1 n/a 0 0 00:03:50.505 tests 6 6 6 0 0 00:03:50.505 asserts 251 251 251 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 21:40:04 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: xor 00:03:50.505 Test: test_xor_gen ...passed 00:03:50.505 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 1 1 n/a 0 0 00:03:50.505 tests 1 1 1 0 0 00:03:50.505 asserts 17 17 17 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 00:03:50.505 real 0m0.120s 00:03:50.505 user 0m0.063s 00:03:50.505 sys 0m0.056s 00:03:50.505 21:40:04 unittest.unittest_util -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.505 21:40:04 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 ************************************ 00:03:50.505 END TEST unittest_util 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.505 21:40:04 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:50.505 21:40:04 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 ************************************ 00:03:50.505 START TEST unittest_dma 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest.unittest_dma -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: dma_suite 00:03:50.505 Test: test_dma ...passed 00:03:50.505 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 1 1 n/a 0 0 00:03:50.505 tests 1 1 1 0 0 00:03:50.505 asserts 54 54 54 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 [2024-07-15 21:40:04.688259] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:50.505 00:03:50.505 real 0m0.006s 00:03:50.505 user 0m0.005s 00:03:50.505 sys 0m0.001s 00:03:50.505 21:40:04 unittest.unittest_dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.505 ************************************ 00:03:50.505 END TEST unittest_dma 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.505 21:40:04 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.505 21:40:04 
unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 ************************************ 00:03:50.505 START TEST unittest_init 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest.unittest_init -- common/autotest_common.sh@1117 -- # unittest_init 00:03:50.505 21:40:04 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: subsystem_suite 00:03:50.505 Test: subsystem_sort_test_depends_on_single ...passed 00:03:50.505 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:50.505 Test: subsystem_sort_test_missing_dependency ...[2024-07-15 21:40:04.736593] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:50.505 passed 00:03:50.505 00:03:50.505 [2024-07-15 21:40:04.736864] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:50.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.505 suites 1 1 n/a 0 0 00:03:50.505 tests 3 3 3 0 0 00:03:50.505 asserts 20 20 20 0 n/a 00:03:50.505 00:03:50.505 Elapsed time = 0.000 seconds 00:03:50.505 00:03:50.505 real 0m0.007s 00:03:50.505 user 0m0.007s 00:03:50.505 sys 0m0.000s 00:03:50.505 21:40:04 unittest.unittest_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.505 21:40:04 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 ************************************ 00:03:50.505 END TEST unittest_init 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.505 21:40:04 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.505 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.505 ************************************ 00:03:50.505 START TEST unittest_keyring 00:03:50.505 ************************************ 00:03:50.505 21:40:04 unittest.unittest_keyring -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:50.505 00:03:50.505 00:03:50.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.505 http://cunit.sourceforge.net/ 00:03:50.505 00:03:50.505 00:03:50.505 Suite: keyring 00:03:50.505 Test: test_keyring_add_remove ...[2024-07-15 21:40:04.785256] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:03:50.505 [2024-07-15 21:40:04.785553] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:03:50.505 [2024-07-15 21:40:04.785576] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:03:50.505 passed 00:03:50.505 Test: test_keyring_get_put ...passed 00:03:50.506 00:03:50.506 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.506 suites 1 1 n/a 0 0 00:03:50.506 tests 2 2 2 0 0 00:03:50.506 asserts 44 44 44 0 n/a 00:03:50.506 00:03:50.506 
Elapsed time = 0.000 seconds 00:03:50.506 00:03:50.506 real 0m0.005s 00:03:50.506 user 0m0.000s 00:03:50.506 sys 0m0.008s 00:03:50.506 21:40:04 unittest.unittest_keyring -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.506 ************************************ 00:03:50.506 END TEST unittest_keyring 00:03:50.506 ************************************ 00:03:50.506 21:40:04 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 21:40:04 unittest -- common/autotest_common.sh@1136 -- # return 0 00:03:50.506 21:40:04 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:03:50.506 21:40:04 unittest -- unit/unittest.sh@305 -- # set +x 00:03:50.506 00:03:50.506 00:03:50.506 ===================== 00:03:50.506 All unit tests passed 00:03:50.506 ===================== 00:03:50.506 WARN: lcov not installed or SPDK built without coverage! 00:03:50.506 WARN: neither valgrind nor ASAN is enabled! 00:03:50.506 00:03:50.506 00:03:50.506 00:03:50.506 real 0m35.858s 00:03:50.506 user 0m17.954s 00:03:50.506 sys 0m1.471s 00:03:50.506 21:40:04 unittest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.506 ************************************ 00:03:50.506 END TEST unittest 00:03:50.506 21:40:04 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 21:40:04 -- common/autotest_common.sh@1136 -- # return 0 00:03:50.506 21:40:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:50.506 21:40:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.506 21:40:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.506 21:40:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:50.506 21:40:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:50.506 21:40:04 -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 21:40:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:50.506 21:40:04 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.506 21:40:04 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.506 21:40:04 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.506 21:40:04 -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 START TEST env 00:03:50.506 ************************************ 00:03:50.506 21:40:04 env -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.506 * Looking for test storage... 
00:03:50.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.506 21:40:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.506 21:40:05 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.506 21:40:05 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.506 21:40:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 START TEST env_memory 00:03:50.506 ************************************ 00:03:50.506 21:40:05 env.env_memory -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.506 00:03:50.506 00:03:50.506 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.506 http://cunit.sourceforge.net/ 00:03:50.506 00:03:50.506 00:03:50.506 Suite: memory 00:03:50.506 Test: alloc and free memory map ...[2024-07-15 21:40:05.048905] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.506 passed 00:03:50.506 Test: mem map translation ...[2024-07-15 21:40:05.055914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.506 [2024-07-15 21:40:05.055965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.506 [2024-07-15 21:40:05.055999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.506 [2024-07-15 21:40:05.056009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.506 passed 00:03:50.506 Test: mem map registration ...[2024-07-15 21:40:05.064678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:50.506 [2024-07-15 21:40:05.064707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:50.506 passed 00:03:50.506 Test: mem map adjacent registrations ...passed 00:03:50.506 00:03:50.506 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.506 suites 1 1 n/a 0 0 00:03:50.506 tests 4 4 4 0 0 00:03:50.506 asserts 152 152 152 0 n/a 00:03:50.506 00:03:50.506 Elapsed time = 0.039 seconds 00:03:50.506 00:03:50.506 real 0m0.042s 00:03:50.506 user 0m0.034s 00:03:50.506 sys 0m0.009s 00:03:50.506 21:40:05 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:50.506 21:40:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 END TEST env_memory 00:03:50.506 ************************************ 00:03:50.506 21:40:05 env -- common/autotest_common.sh@1136 -- # return 0 00:03:50.506 21:40:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.506 21:40:05 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.506 21:40:05 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.506 21:40:05 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.506 ************************************ 00:03:50.506 START TEST env_vtophys 
00:03:50.506 ************************************ 00:03:50.506 21:40:05 env.env_vtophys -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.506 EAL: lib.eal log level changed from notice to debug 00:03:50.506 EAL: Sysctl reports 10 cpus 00:03:50.506 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 1 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 2 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 3 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 4 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 5 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 6 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 7 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 8 as core 0 on socket 0 00:03:50.506 EAL: Detected lcore 9 as core 0 on socket 0 00:03:50.506 EAL: Maximum logical cores by configuration: 128 00:03:50.506 EAL: Detected CPU lcores: 10 00:03:50.506 EAL: Detected NUMA nodes: 1 00:03:50.506 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.506 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:50.506 EAL: Checking presence of .so 'librte_eal.so' 00:03:50.506 EAL: Detected static linkage of DPDK 00:03:50.506 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.506 EAL: PCI scan found 10 devices 00:03:50.506 EAL: Specific IOVA mode is not requested, autodetecting 00:03:50.506 EAL: Selecting IOVA mode according to bus requests 00:03:50.506 EAL: Bus pci wants IOVA as 'PA' 00:03:50.506 EAL: Selected IOVA mode 'PA' 00:03:50.506 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:50.506 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.506 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x10001c1000) not respected! 00:03:50.506 EAL: This may cause issues with mapping memory into secondary processes 00:03:50.506 EAL: Virtual area found at 0x10001c1000 (size = 0x2e000) 00:03:50.506 EAL: Setting up physically contiguous memory... 00:03:50.506 EAL: Ask a virtual area of 0x1000 bytes 00:03:50.506 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x10007c8000) not respected! 00:03:50.506 EAL: This may cause issues with mapping memory into secondary processes 00:03:50.506 EAL: Virtual area found at 0x10007c8000 (size = 0x1000) 00:03:50.506 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:50.506 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:50.506 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:50.506 EAL: This may cause issues with mapping memory into secondary processes 00:03:50.506 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:50.506 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:50.506 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0xa0000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x160000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 2 @ 0x10a0000000: physaddr:0x1f0000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 3 @ 0x10c0000000: physaddr:0x210000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 4 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 5 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:50.506 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x240000000, len 268435456 00:03:50.765 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x250000000, len 268435456 00:03:50.765 EAL: No shared files mode enabled, IPC is disabled 00:03:50.765 EAL: Added 2048M to heap on socket 0 00:03:50.765 EAL: TSC is not safe to use in SMP mode 00:03:50.765 EAL: TSC is not invariant 00:03:50.765 EAL: TSC frequency is ~2199998 KHz 00:03:50.765 EAL: Main lcore 0 is ready (tid=1c1290212000;cpuset=[0]) 00:03:50.765 EAL: PCI scan found 10 devices 00:03:50.765 EAL: Registering mem event callbacks not supported 00:03:50.765 00:03:50.765 00:03:50.765 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.765 http://cunit.sourceforge.net/ 00:03:50.765 00:03:50.765 00:03:50.765 Suite: components_suite 00:03:50.765 Test: vtophys_malloc_test ...passed 00:03:51.024 Test: vtophys_spdk_malloc_test ...passed 00:03:51.024 00:03:51.024 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.024 suites 1 1 n/a 0 0 00:03:51.024 tests 2 2 2 0 0 00:03:51.024 asserts 546 546 546 0 n/a 00:03:51.024 00:03:51.024 Elapsed time = 0.375 seconds 00:03:51.024 00:03:51.024 real 0m1.007s 00:03:51.024 user 0m0.387s 00:03:51.024 sys 0m0.623s 00:03:51.024 21:40:06 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:51.024 21:40:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:51.024 ************************************ 00:03:51.024 END TEST env_vtophys 00:03:51.024 ************************************ 00:03:51.024 21:40:06 env -- common/autotest_common.sh@1136 -- # return 0 00:03:51.024 21:40:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:51.024 21:40:06 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:51.024 21:40:06 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:51.024 21:40:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.024 ************************************ 00:03:51.024 START TEST env_pci 00:03:51.024 ************************************ 00:03:51.024 21:40:06 env.env_pci -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:51.024 00:03:51.024 00:03:51.024 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.024 http://cunit.sourceforge.net/ 00:03:51.024 00:03:51.024 00:03:51.024 Suite: pci 00:03:51.024 Test: pci_hook ...passed 00:03:51.024 00:03:51.024 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.024 suites 1 1 n/a 0 0 00:03:51.024 tests 1 1 1 0 0 00:03:51.024 asserts 25 25 25 0 n/a 00:03:51.024 00:03:51.024 Elapsed time = 0.000 seconds 00:03:51.024 EAL: Cannot find device (10000:00:01.0) 00:03:51.024 EAL: 
Failed to attach device on primary process 00:03:51.024 00:03:51.024 real 0m0.008s 00:03:51.024 user 0m0.010s 00:03:51.024 sys 0m0.000s 00:03:51.024 21:40:06 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:51.024 21:40:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:51.024 ************************************ 00:03:51.024 END TEST env_pci 00:03:51.024 ************************************ 00:03:51.282 21:40:06 env -- common/autotest_common.sh@1136 -- # return 0 00:03:51.282 21:40:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:51.282 21:40:06 env -- env/env.sh@15 -- # uname 00:03:51.282 21:40:06 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:51.282 21:40:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:51.282 21:40:06 env -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:03:51.282 21:40:06 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:51.282 21:40:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.282 ************************************ 00:03:51.282 START TEST env_dpdk_post_init 00:03:51.282 ************************************ 00:03:51.282 21:40:06 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:51.282 EAL: Sysctl reports 10 cpus 00:03:51.282 EAL: Detected CPU lcores: 10 00:03:51.282 EAL: Detected NUMA nodes: 1 00:03:51.282 EAL: Detected static linkage of DPDK 00:03:51.283 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.283 EAL: Selected IOVA mode 'PA' 00:03:51.283 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:51.283 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0xa0000000, len 268435456 00:03:51.283 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x160000000, len 268435456 00:03:51.283 EAL: Mapped memory segment 2 @ 0x10a0000000: physaddr:0x1f0000000, len 268435456 00:03:51.555 EAL: Mapped memory segment 3 @ 0x10c0000000: physaddr:0x210000000, len 268435456 00:03:51.555 EAL: Mapped memory segment 4 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:03:51.555 EAL: Mapped memory segment 5 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:51.555 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x240000000, len 268435456 00:03:51.837 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x250000000, len 268435456 00:03:51.837 EAL: TSC is not safe to use in SMP mode 00:03:51.837 EAL: TSC is not invariant 00:03:51.837 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.837 [2024-07-15 21:40:06.767442] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:51.837 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:51.837 Starting DPDK initialization... 00:03:51.837 Starting SPDK post initialization... 00:03:51.837 SPDK NVMe probe 00:03:51.837 Attaching to 0000:00:10.0 00:03:51.837 Attached to 0000:00:10.0 00:03:51.837 Cleaning up... 
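Note (annotation, not part of the captured output): the EAL lines above show how the DPDK heap is backed on FreeBSD. SPDK's contigmem driver hands out 8 physically contiguous 256MB buffers, which is exactly the "Added 2048M to heap on socket 0" reported earlier in this run. A minimal provisioning sketch, assuming a root shell on FreeBSD with an SPDK build tree (tunable names per SPDK's FreeBSD documentation; module paths vary by build directory; the values simply mirror this log):

    # hw.contigmem.* are read when the module loads, so set them first
    # (kenv on a live system; /boot/loader.conf for a persistent setting).
    kenv hw.contigmem.num_buffers=8             # 8 buffers, as reported above
    kenv hw.contigmem.buffer_size=268435456     # 256MB per buffer -> 2048M total
    kldload contigmem.ko                        # contiguous memory backing the DPDK heap
    kldload nic_uio.ko                          # userspace PCI access for devices such as 0000:00:10.0
    # scripts/setup.sh wraps these steps and is the usual entry point.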
00:03:51.837 00:03:51.837 real 0m0.574s 00:03:51.837 user 0m0.008s 00:03:51.837 sys 0m0.561s 00:03:51.837 21:40:06 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:51.837 21:40:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:51.837 ************************************ 00:03:51.837 END TEST env_dpdk_post_init 00:03:51.837 ************************************ 00:03:51.837 21:40:06 env -- common/autotest_common.sh@1136 -- # return 0 00:03:51.837 21:40:06 env -- env/env.sh@26 -- # uname 00:03:51.837 21:40:06 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:51.837 00:03:51.837 real 0m1.966s 00:03:51.837 user 0m0.611s 00:03:51.837 sys 0m1.361s 00:03:51.837 21:40:06 env -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:51.837 21:40:06 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.837 ************************************ 00:03:51.837 END TEST env 00:03:51.837 ************************************ 00:03:51.837 21:40:06 -- common/autotest_common.sh@1136 -- # return 0 00:03:51.837 21:40:06 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:51.837 21:40:06 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:51.837 21:40:06 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:51.837 21:40:06 -- common/autotest_common.sh@10 -- # set +x 00:03:51.837 ************************************ 00:03:51.837 START TEST rpc 00:03:51.837 ************************************ 00:03:51.837 21:40:06 rpc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:52.096 * Looking for test storage... 00:03:52.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:52.096 21:40:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45511 00:03:52.096 21:40:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.096 21:40:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45511 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@823 -- # '[' -z 45511 ']' 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:03:52.096 21:40:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.096 21:40:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:52.096 [2024-07-15 21:40:07.041960] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:03:52.096 [2024-07-15 21:40:07.042162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:52.662 EAL: TSC is not safe to use in SMP mode 00:03:52.662 EAL: TSC is not invariant 00:03:52.662 [2024-07-15 21:40:07.599177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.662 [2024-07-15 21:40:07.699537] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:52.662 [2024-07-15 21:40:07.702070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:52.662 [2024-07-15 21:40:07.702121] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45511' to capture a snapshot of events at runtime. 00:03:52.662 [2024-07-15 21:40:07.702156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@856 -- # return 0 00:03:53.230 21:40:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:53.230 21:40:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:53.230 21:40:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:53.230 21:40:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 ************************************ 00:03:53.230 START TEST rpc_integrity 00:03:53.230 ************************************ 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.230 { 00:03:53.230 "name": "Malloc0", 00:03:53.230 "aliases": [ 00:03:53.230 "ce446b55-42f2-11ef-9f7f-e9a656123a8b" 00:03:53.230 ], 00:03:53.230 "product_name": "Malloc disk", 00:03:53.230 "block_size": 512, 00:03:53.230 "num_blocks": 16384, 00:03:53.230 "uuid": "ce446b55-42f2-11ef-9f7f-e9a656123a8b", 00:03:53.230 "assigned_rate_limits": { 00:03:53.230 "rw_ios_per_sec": 0, 00:03:53.230 "rw_mbytes_per_sec": 0, 00:03:53.230 "r_mbytes_per_sec": 0, 00:03:53.230 "w_mbytes_per_sec": 0 00:03:53.230 }, 00:03:53.230 "claimed": false, 00:03:53.230 
"zoned": false, 00:03:53.230 "supported_io_types": { 00:03:53.230 "read": true, 00:03:53.230 "write": true, 00:03:53.230 "unmap": true, 00:03:53.230 "flush": true, 00:03:53.230 "reset": true, 00:03:53.230 "nvme_admin": false, 00:03:53.230 "nvme_io": false, 00:03:53.230 "nvme_io_md": false, 00:03:53.230 "write_zeroes": true, 00:03:53.230 "zcopy": true, 00:03:53.230 "get_zone_info": false, 00:03:53.230 "zone_management": false, 00:03:53.230 "zone_append": false, 00:03:53.230 "compare": false, 00:03:53.230 "compare_and_write": false, 00:03:53.230 "abort": true, 00:03:53.230 "seek_hole": false, 00:03:53.230 "seek_data": false, 00:03:53.230 "copy": true, 00:03:53.230 "nvme_iov_md": false 00:03:53.230 }, 00:03:53.230 "memory_domains": [ 00:03:53.230 { 00:03:53.230 "dma_device_id": "system", 00:03:53.230 "dma_device_type": 1 00:03:53.230 }, 00:03:53.230 { 00:03:53.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.230 "dma_device_type": 2 00:03:53.230 } 00:03:53.230 ], 00:03:53.230 "driver_specific": {} 00:03:53.230 } 00:03:53.230 ]' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 [2024-07-15 21:40:08.220379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:53.230 [2024-07-15 21:40:08.220436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.230 [2024-07-15 21:40:08.221012] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5b9c037a00 00:03:53.230 [2024-07-15 21:40:08.221040] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.230 [2024-07-15 21:40:08.221751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.230 [2024-07-15 21:40:08.221774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.230 Passthru0 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.230 { 00:03:53.230 "name": "Malloc0", 00:03:53.230 "aliases": [ 00:03:53.230 "ce446b55-42f2-11ef-9f7f-e9a656123a8b" 00:03:53.230 ], 00:03:53.230 "product_name": "Malloc disk", 00:03:53.230 "block_size": 512, 00:03:53.230 "num_blocks": 16384, 00:03:53.230 "uuid": "ce446b55-42f2-11ef-9f7f-e9a656123a8b", 00:03:53.230 "assigned_rate_limits": { 00:03:53.230 "rw_ios_per_sec": 0, 00:03:53.230 "rw_mbytes_per_sec": 0, 00:03:53.230 "r_mbytes_per_sec": 0, 00:03:53.230 "w_mbytes_per_sec": 0 00:03:53.230 }, 00:03:53.230 "claimed": true, 00:03:53.230 "claim_type": "exclusive_write", 00:03:53.230 "zoned": false, 00:03:53.230 "supported_io_types": { 00:03:53.230 "read": true, 00:03:53.230 "write": true, 00:03:53.230 "unmap": true, 00:03:53.230 "flush": true, 00:03:53.230 "reset": true, 
00:03:53.230 "nvme_admin": false, 00:03:53.230 "nvme_io": false, 00:03:53.230 "nvme_io_md": false, 00:03:53.230 "write_zeroes": true, 00:03:53.230 "zcopy": true, 00:03:53.230 "get_zone_info": false, 00:03:53.230 "zone_management": false, 00:03:53.230 "zone_append": false, 00:03:53.230 "compare": false, 00:03:53.230 "compare_and_write": false, 00:03:53.230 "abort": true, 00:03:53.230 "seek_hole": false, 00:03:53.230 "seek_data": false, 00:03:53.230 "copy": true, 00:03:53.230 "nvme_iov_md": false 00:03:53.230 }, 00:03:53.230 "memory_domains": [ 00:03:53.230 { 00:03:53.230 "dma_device_id": "system", 00:03:53.230 "dma_device_type": 1 00:03:53.230 }, 00:03:53.230 { 00:03:53.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.230 "dma_device_type": 2 00:03:53.230 } 00:03:53.230 ], 00:03:53.230 "driver_specific": {} 00:03:53.230 }, 00:03:53.230 { 00:03:53.230 "name": "Passthru0", 00:03:53.230 "aliases": [ 00:03:53.230 "d1134386-3436-6758-a9aa-925c7b953cef" 00:03:53.230 ], 00:03:53.230 "product_name": "passthru", 00:03:53.230 "block_size": 512, 00:03:53.230 "num_blocks": 16384, 00:03:53.230 "uuid": "d1134386-3436-6758-a9aa-925c7b953cef", 00:03:53.230 "assigned_rate_limits": { 00:03:53.230 "rw_ios_per_sec": 0, 00:03:53.230 "rw_mbytes_per_sec": 0, 00:03:53.230 "r_mbytes_per_sec": 0, 00:03:53.230 "w_mbytes_per_sec": 0 00:03:53.230 }, 00:03:53.230 "claimed": false, 00:03:53.230 "zoned": false, 00:03:53.230 "supported_io_types": { 00:03:53.230 "read": true, 00:03:53.230 "write": true, 00:03:53.230 "unmap": true, 00:03:53.230 "flush": true, 00:03:53.230 "reset": true, 00:03:53.230 "nvme_admin": false, 00:03:53.230 "nvme_io": false, 00:03:53.230 "nvme_io_md": false, 00:03:53.230 "write_zeroes": true, 00:03:53.230 "zcopy": true, 00:03:53.230 "get_zone_info": false, 00:03:53.230 "zone_management": false, 00:03:53.230 "zone_append": false, 00:03:53.230 "compare": false, 00:03:53.230 "compare_and_write": false, 00:03:53.230 "abort": true, 00:03:53.230 "seek_hole": false, 00:03:53.230 "seek_data": false, 00:03:53.230 "copy": true, 00:03:53.230 "nvme_iov_md": false 00:03:53.230 }, 00:03:53.230 "memory_domains": [ 00:03:53.230 { 00:03:53.230 "dma_device_id": "system", 00:03:53.230 "dma_device_type": 1 00:03:53.230 }, 00:03:53.230 { 00:03:53.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.230 "dma_device_type": 2 00:03:53.230 } 00:03:53.230 ], 00:03:53.230 "driver_specific": { 00:03:53.230 "passthru": { 00:03:53.230 "name": "Passthru0", 00:03:53.230 "base_bdev_name": "Malloc0" 00:03:53.230 } 00:03:53.230 } 00:03:53.230 } 00:03:53.230 ]' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.230 
21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.230 21:40:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.230 00:03:53.230 real 0m0.129s 00:03:53.230 user 0m0.031s 00:03:53.230 sys 0m0.041s 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 ************************************ 00:03:53.230 END TEST rpc_integrity 00:03:53.230 ************************************ 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:53.230 21:40:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.230 21:40:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 ************************************ 00:03:53.230 START TEST rpc_plugins 00:03:53.230 ************************************ 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:53.230 { 00:03:53.230 "name": "Malloc1", 00:03:53.230 "aliases": [ 00:03:53.230 "ce5c38fd-42f2-11ef-9f7f-e9a656123a8b" 00:03:53.230 ], 00:03:53.230 "product_name": "Malloc disk", 00:03:53.230 "block_size": 4096, 00:03:53.230 "num_blocks": 256, 00:03:53.230 "uuid": "ce5c38fd-42f2-11ef-9f7f-e9a656123a8b", 00:03:53.230 "assigned_rate_limits": { 00:03:53.230 "rw_ios_per_sec": 0, 00:03:53.230 "rw_mbytes_per_sec": 0, 00:03:53.230 "r_mbytes_per_sec": 0, 00:03:53.230 "w_mbytes_per_sec": 0 00:03:53.230 }, 00:03:53.230 "claimed": false, 00:03:53.230 "zoned": false, 00:03:53.230 "supported_io_types": { 00:03:53.230 "read": true, 00:03:53.230 "write": true, 00:03:53.230 "unmap": true, 00:03:53.230 "flush": true, 00:03:53.230 "reset": true, 00:03:53.230 "nvme_admin": false, 00:03:53.230 "nvme_io": false, 00:03:53.230 "nvme_io_md": false, 00:03:53.230 "write_zeroes": true, 00:03:53.230 "zcopy": true, 00:03:53.230 "get_zone_info": false, 00:03:53.230 "zone_management": false, 00:03:53.230 "zone_append": false, 00:03:53.230 "compare": false, 00:03:53.230 "compare_and_write": false, 00:03:53.230 "abort": true, 00:03:53.230 "seek_hole": false, 00:03:53.230 "seek_data": false, 00:03:53.230 "copy": 
true, 00:03:53.230 "nvme_iov_md": false 00:03:53.230 }, 00:03:53.230 "memory_domains": [ 00:03:53.230 { 00:03:53.230 "dma_device_id": "system", 00:03:53.230 "dma_device_type": 1 00:03:53.230 }, 00:03:53.230 { 00:03:53.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.230 "dma_device_type": 2 00:03:53.230 } 00:03:53.230 ], 00:03:53.230 "driver_specific": {} 00:03:53.230 } 00:03:53.230 ]' 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:53.230 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.230 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.231 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.231 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:53.231 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:53.231 21:40:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:53.231 00:03:53.231 real 0m0.063s 00:03:53.231 user 0m0.029s 00:03:53.231 sys 0m0.004s 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.231 21:40:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.231 ************************************ 00:03:53.231 END TEST rpc_plugins 00:03:53.231 ************************************ 00:03:53.488 21:40:08 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:53.488 21:40:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:53.488 21:40:08 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.488 21:40:08 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.488 21:40:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.488 ************************************ 00:03:53.488 START TEST rpc_trace_cmd_test 00:03:53.488 ************************************ 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.488 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:53.488 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45511", 00:03:53.488 "tpoint_group_mask": "0x8", 00:03:53.488 "iscsi_conn": { 00:03:53.488 "mask": "0x2", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "scsi": { 00:03:53.489 "mask": "0x4", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "bdev": { 00:03:53.489 "mask": "0x8", 00:03:53.489 "tpoint_mask": "0xffffffffffffffff" 00:03:53.489 }, 00:03:53.489 "nvmf_rdma": { 00:03:53.489 "mask": "0x10", 00:03:53.489 
"tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "nvmf_tcp": { 00:03:53.489 "mask": "0x20", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "blobfs": { 00:03:53.489 "mask": "0x80", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "dsa": { 00:03:53.489 "mask": "0x200", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "thread": { 00:03:53.489 "mask": "0x400", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "nvme_pcie": { 00:03:53.489 "mask": "0x800", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "iaa": { 00:03:53.489 "mask": "0x1000", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "nvme_tcp": { 00:03:53.489 "mask": "0x2000", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "bdev_nvme": { 00:03:53.489 "mask": "0x4000", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 }, 00:03:53.489 "sock": { 00:03:53.489 "mask": "0x8000", 00:03:53.489 "tpoint_mask": "0x0" 00:03:53.489 } 00:03:53.489 }' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:53.489 00:03:53.489 real 0m0.062s 00:03:53.489 user 0m0.038s 00:03:53.489 sys 0m0.012s 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 ************************************ 00:03:53.489 END TEST rpc_trace_cmd_test 00:03:53.489 ************************************ 00:03:53.489 21:40:08 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:53.489 21:40:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:53.489 21:40:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:53.489 21:40:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:53.489 21:40:08 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:53.489 21:40:08 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:53.489 21:40:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 ************************************ 00:03:53.489 START TEST rpc_daemon_integrity 00:03:53.489 ************************************ 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.489 
21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.489 { 00:03:53.489 "name": "Malloc2", 00:03:53.489 "aliases": [ 00:03:53.489 "ce803cc5-42f2-11ef-9f7f-e9a656123a8b" 00:03:53.489 ], 00:03:53.489 "product_name": "Malloc disk", 00:03:53.489 "block_size": 512, 00:03:53.489 "num_blocks": 16384, 00:03:53.489 "uuid": "ce803cc5-42f2-11ef-9f7f-e9a656123a8b", 00:03:53.489 "assigned_rate_limits": { 00:03:53.489 "rw_ios_per_sec": 0, 00:03:53.489 "rw_mbytes_per_sec": 0, 00:03:53.489 "r_mbytes_per_sec": 0, 00:03:53.489 "w_mbytes_per_sec": 0 00:03:53.489 }, 00:03:53.489 "claimed": false, 00:03:53.489 "zoned": false, 00:03:53.489 "supported_io_types": { 00:03:53.489 "read": true, 00:03:53.489 "write": true, 00:03:53.489 "unmap": true, 00:03:53.489 "flush": true, 00:03:53.489 "reset": true, 00:03:53.489 "nvme_admin": false, 00:03:53.489 "nvme_io": false, 00:03:53.489 "nvme_io_md": false, 00:03:53.489 "write_zeroes": true, 00:03:53.489 "zcopy": true, 00:03:53.489 "get_zone_info": false, 00:03:53.489 "zone_management": false, 00:03:53.489 "zone_append": false, 00:03:53.489 "compare": false, 00:03:53.489 "compare_and_write": false, 00:03:53.489 "abort": true, 00:03:53.489 "seek_hole": false, 00:03:53.489 "seek_data": false, 00:03:53.489 "copy": true, 00:03:53.489 "nvme_iov_md": false 00:03:53.489 }, 00:03:53.489 "memory_domains": [ 00:03:53.489 { 00:03:53.489 "dma_device_id": "system", 00:03:53.489 "dma_device_type": 1 00:03:53.489 }, 00:03:53.489 { 00:03:53.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.489 "dma_device_type": 2 00:03:53.489 } 00:03:53.489 ], 00:03:53.489 "driver_specific": {} 00:03:53.489 } 00:03:53.489 ]' 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 [2024-07-15 21:40:08.616405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:53.489 [2024-07-15 21:40:08.616449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.489 [2024-07-15 21:40:08.616475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa5b9c037a00 00:03:53.489 [2024-07-15 
21:40:08.616484] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.489 [2024-07-15 21:40:08.616943] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.489 [2024-07-15 21:40:08.616982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.489 Passthru0 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.489 { 00:03:53.489 "name": "Malloc2", 00:03:53.489 "aliases": [ 00:03:53.489 "ce803cc5-42f2-11ef-9f7f-e9a656123a8b" 00:03:53.489 ], 00:03:53.489 "product_name": "Malloc disk", 00:03:53.489 "block_size": 512, 00:03:53.489 "num_blocks": 16384, 00:03:53.489 "uuid": "ce803cc5-42f2-11ef-9f7f-e9a656123a8b", 00:03:53.489 "assigned_rate_limits": { 00:03:53.489 "rw_ios_per_sec": 0, 00:03:53.489 "rw_mbytes_per_sec": 0, 00:03:53.489 "r_mbytes_per_sec": 0, 00:03:53.489 "w_mbytes_per_sec": 0 00:03:53.489 }, 00:03:53.489 "claimed": true, 00:03:53.489 "claim_type": "exclusive_write", 00:03:53.489 "zoned": false, 00:03:53.489 "supported_io_types": { 00:03:53.489 "read": true, 00:03:53.489 "write": true, 00:03:53.489 "unmap": true, 00:03:53.489 "flush": true, 00:03:53.489 "reset": true, 00:03:53.489 "nvme_admin": false, 00:03:53.489 "nvme_io": false, 00:03:53.489 "nvme_io_md": false, 00:03:53.489 "write_zeroes": true, 00:03:53.489 "zcopy": true, 00:03:53.489 "get_zone_info": false, 00:03:53.489 "zone_management": false, 00:03:53.489 "zone_append": false, 00:03:53.489 "compare": false, 00:03:53.489 "compare_and_write": false, 00:03:53.489 "abort": true, 00:03:53.489 "seek_hole": false, 00:03:53.489 "seek_data": false, 00:03:53.489 "copy": true, 00:03:53.489 "nvme_iov_md": false 00:03:53.489 }, 00:03:53.489 "memory_domains": [ 00:03:53.489 { 00:03:53.489 "dma_device_id": "system", 00:03:53.489 "dma_device_type": 1 00:03:53.489 }, 00:03:53.489 { 00:03:53.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.489 "dma_device_type": 2 00:03:53.489 } 00:03:53.489 ], 00:03:53.489 "driver_specific": {} 00:03:53.489 }, 00:03:53.489 { 00:03:53.489 "name": "Passthru0", 00:03:53.489 "aliases": [ 00:03:53.489 "19b72650-b084-9555-8168-7d619d7a9057" 00:03:53.489 ], 00:03:53.489 "product_name": "passthru", 00:03:53.489 "block_size": 512, 00:03:53.489 "num_blocks": 16384, 00:03:53.489 "uuid": "19b72650-b084-9555-8168-7d619d7a9057", 00:03:53.489 "assigned_rate_limits": { 00:03:53.489 "rw_ios_per_sec": 0, 00:03:53.489 "rw_mbytes_per_sec": 0, 00:03:53.489 "r_mbytes_per_sec": 0, 00:03:53.489 "w_mbytes_per_sec": 0 00:03:53.489 }, 00:03:53.489 "claimed": false, 00:03:53.489 "zoned": false, 00:03:53.489 "supported_io_types": { 00:03:53.489 "read": true, 00:03:53.489 "write": true, 00:03:53.489 "unmap": true, 00:03:53.489 "flush": true, 00:03:53.489 "reset": true, 00:03:53.489 "nvme_admin": false, 00:03:53.489 "nvme_io": false, 00:03:53.489 "nvme_io_md": false, 00:03:53.489 "write_zeroes": true, 00:03:53.489 "zcopy": true, 00:03:53.489 "get_zone_info": false, 00:03:53.489 "zone_management": false, 00:03:53.489 "zone_append": 
false, 00:03:53.489 "compare": false, 00:03:53.489 "compare_and_write": false, 00:03:53.489 "abort": true, 00:03:53.489 "seek_hole": false, 00:03:53.489 "seek_data": false, 00:03:53.489 "copy": true, 00:03:53.489 "nvme_iov_md": false 00:03:53.489 }, 00:03:53.489 "memory_domains": [ 00:03:53.489 { 00:03:53.489 "dma_device_id": "system", 00:03:53.489 "dma_device_type": 1 00:03:53.489 }, 00:03:53.489 { 00:03:53.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.489 "dma_device_type": 2 00:03:53.489 } 00:03:53.489 ], 00:03:53.489 "driver_specific": { 00:03:53.489 "passthru": { 00:03:53.489 "name": "Passthru0", 00:03:53.489 "base_bdev_name": "Malloc2" 00:03:53.489 } 00:03:53.489 } 00:03:53.489 } 00:03:53.489 ]' 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:03:53.489 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.747 00:03:53.747 real 0m0.133s 00:03:53.747 user 0m0.051s 00:03:53.747 sys 0m0.023s 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:53.747 21:40:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.747 ************************************ 00:03:53.747 END TEST rpc_daemon_integrity 00:03:53.747 ************************************ 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@1136 -- # return 0 00:03:53.747 21:40:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:53.747 21:40:08 rpc -- rpc/rpc.sh@84 -- # killprocess 45511 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@942 -- # '[' -z 45511 ']' 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@946 -- # kill -0 45511 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@947 -- # uname 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # ps -c -o command 45511 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # tail -1 00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:03:53.747 21:40:08 rpc -- 
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@1136 -- # return 0
00:03:53.747 21:40:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:03:53.747 21:40:08 rpc -- rpc/rpc.sh@84 -- # killprocess 45511
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@942 -- # '[' -z 45511 ']'
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@946 -- # kill -0 45511
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@947 -- # uname
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # ps -c -o command 45511
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # tail -1
00:03:53.747 21:40:08 rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt
21:40:08 rpc -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']'
00:03:53.747 killing process with pid 45511
21:40:08 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 45511'
21:40:08 rpc -- common/autotest_common.sh@961 -- # kill 45511
21:40:08 rpc -- common/autotest_common.sh@966 -- # wait 45511
00:03:54.005
00:03:54.005 real 0m2.112s
00:03:54.005 user 0m2.208s
00:03:54.005 sys 0m0.933s
21:40:08 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable
21:40:09 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.005 ************************************
00:03:54.005 END TEST rpc
00:03:54.005 ************************************
00:03:54.005 21:40:09 -- common/autotest_common.sh@1136 -- # return 0
00:03:54.005 21:40:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
21:40:09 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
21:40:09 -- common/autotest_common.sh@1099 -- # xtrace_disable
21:40:09 -- common/autotest_common.sh@10 -- # set +x
00:03:54.005 ************************************
00:03:54.005 START TEST skip_rpc
00:03:54.005 ************************************
00:03:54.005 21:40:09 skip_rpc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:03:54.264 * Looking for test storage...
00:03:54.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:03:54.264 21:40:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:03:54.264 21:40:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:03:54.264 21:40:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:03:54.264 21:40:09 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:03:54.264 21:40:09 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable
00:03:54.264 21:40:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.264 ************************************
00:03:54.264 START TEST skip_rpc
00:03:54.264 ************************************
00:03:54.264 21:40:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc
00:03:54.264 21:40:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45687
00:03:54.264 21:40:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:54.264 21:40:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:03:54.264 21:40:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:03:54.264 [2024-07-15 21:40:09.228047] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:03:54.264 [2024-07-15 21:40:09.228220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:03:54.830 EAL: TSC is not safe to use in SMP mode
00:03:54.830 EAL: TSC is not invariant
00:03:54.830 [2024-07-15 21:40:09.778228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:54.830 [2024-07-15 21:40:09.874133] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:03:54.830 [2024-07-15 21:40:09.876713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45687
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 45687 ']'
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 45687
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # ps -c -o command 45687
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # tail -1
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt
00:04:00.098 21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']'
killing process with pid 45687
21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 45687'
21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 45687
21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 45687
00:04:00.098
00:04:00.098 real 0m5.356s
00:04:00.098 user 0m4.769s
00:04:00.098 sys 0m0.602s
21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable
21:40:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.098 ************************************
00:04:00.098 END TEST skip_rpc
00:04:00.098 ************************************
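skip_rpc passes precisely when the RPC fails: the target was started with --no-rpc-server, so nothing listens behind rpc_cmd, and the NOT helper inverts the exit status (the [[ 1 == 0 ]] above is that inversion). Outside the harness the same assertion is just a negated command; a sketch, assuming the repo layout used in this run:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ! scripts/rpc.py spdk_get_version   # must fail: no RPC server was started
    kill $!; wait $! 2> /dev/null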
00:04:00.098 21:40:14 skip_rpc -- common/autotest_common.sh@1136 -- # return 0
00:04:00.098 21:40:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:00.098 21:40:14 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:00.098 21:40:14 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable
21:40:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.098 ************************************
00:04:00.098 START TEST skip_rpc_with_json
00:04:00.098 ************************************
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45732
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45732
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 45732 ']'
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100
00:04:00.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable
00:04:00.098 21:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:00.098 [2024-07-15 21:40:14.624871] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:04:00.098 [2024-07-15 21:40:14.625120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:00.098 EAL: TSC is not safe to use in SMP mode
00:04:00.098 EAL: TSC is not invariant
00:04:00.356 [2024-07-15 21:40:15.176215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:00.356 [2024-07-15 21:40:15.284743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:00.356 [2024-07-15 21:40:15.287407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:00.614 [2024-07-15 21:40:15.719359] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:00.614 request:
00:04:00.614 {
00:04:00.614 "trtype": "tcp",
00:04:00.614 "method": "nvmf_get_transports",
00:04:00.614 "req_id": 1
00:04:00.614 }
00:04:00.614 Got JSON-RPC error response
00:04:00.614 response:
00:04:00.614 {
00:04:00.614 "code": -19,
00:04:00.614 "message": "Operation not supported by device"
00:04:00.614 }
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:00.614 [2024-07-15 21:40:15.731403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:00.614 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:00.872 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:00.872 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:00.872 {
00:04:00.872 "subsystems": [
00:04:00.872 {
00:04:00.872 "subsystem": "vmd",
00:04:00.872 "config": []
00:04:00.872 },
00:04:00.872 {
00:04:00.872 "subsystem": "iobuf",
00:04:00.872 "config": [
00:04:00.872 {
00:04:00.872 "method": "iobuf_set_options",
00:04:00.872 "params": {
00:04:00.872 "small_pool_count": 8192,
00:04:00.872 "large_pool_count": 1024,
00:04:00.872 "small_bufsize": 8192,
00:04:00.872 "large_bufsize": 135168
00:04:00.872 }
00:04:00.872 }
00:04:00.872 ]
00:04:00.872 },
00:04:00.872 {
00:04:00.872 "subsystem": "scheduler",
00:04:00.872 "config": [
00:04:00.872 {
00:04:00.872 "method": "framework_set_scheduler",
00:04:00.872 "params": {
00:04:00.872 "name": "static"
00:04:00.872 }
00:04:00.872 }
00:04:00.872 ]
00:04:00.872 },
00:04:00.872 {
00:04:00.872 "subsystem": "sock",
00:04:00.872 "config": [
00:04:00.872 {
00:04:00.872 "method": "sock_set_default_impl",
00:04:00.872 "params": {
00:04:00.872 "impl_name": "posix"
00:04:00.872 }
00:04:00.872 },
00:04:00.872 {
00:04:00.872 "method": "sock_impl_set_options",
00:04:00.872 "params": {
00:04:00.872 "impl_name": "ssl",
00:04:00.872 "recv_buf_size": 4096,
00:04:00.872 "send_buf_size": 4096,
00:04:00.872 "enable_recv_pipe": true,
00:04:00.872 "enable_quickack": false,
00:04:00.872 "enable_placement_id": 0,
"enable_zerocopy_send_server": true, 00:04:00.872 "enable_zerocopy_send_client": false, 00:04:00.872 "zerocopy_threshold": 0, 00:04:00.872 "tls_version": 0, 00:04:00.872 "enable_ktls": false 00:04:00.872 } 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "method": "sock_impl_set_options", 00:04:00.872 "params": { 00:04:00.872 "impl_name": "posix", 00:04:00.872 "recv_buf_size": 2097152, 00:04:00.872 "send_buf_size": 2097152, 00:04:00.872 "enable_recv_pipe": true, 00:04:00.872 "enable_quickack": false, 00:04:00.872 "enable_placement_id": 0, 00:04:00.872 "enable_zerocopy_send_server": true, 00:04:00.872 "enable_zerocopy_send_client": false, 00:04:00.872 "zerocopy_threshold": 0, 00:04:00.872 "tls_version": 0, 00:04:00.872 "enable_ktls": false 00:04:00.872 } 00:04:00.872 } 00:04:00.872 ] 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "subsystem": "keyring", 00:04:00.872 "config": [] 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "subsystem": "accel", 00:04:00.872 "config": [ 00:04:00.872 { 00:04:00.872 "method": "accel_set_options", 00:04:00.872 "params": { 00:04:00.872 "small_cache_size": 128, 00:04:00.872 "large_cache_size": 16, 00:04:00.872 "task_count": 2048, 00:04:00.872 "sequence_count": 2048, 00:04:00.872 "buf_count": 2048 00:04:00.872 } 00:04:00.872 } 00:04:00.872 ] 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "subsystem": "bdev", 00:04:00.872 "config": [ 00:04:00.872 { 00:04:00.872 "method": "bdev_set_options", 00:04:00.872 "params": { 00:04:00.872 "bdev_io_pool_size": 65535, 00:04:00.872 "bdev_io_cache_size": 256, 00:04:00.872 "bdev_auto_examine": true, 00:04:00.872 "iobuf_small_cache_size": 128, 00:04:00.872 "iobuf_large_cache_size": 16 00:04:00.872 } 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "method": "bdev_raid_set_options", 00:04:00.872 "params": { 00:04:00.872 "process_window_size_kb": 1024 00:04:00.872 } 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "method": "bdev_nvme_set_options", 00:04:00.872 "params": { 00:04:00.872 "action_on_timeout": "none", 00:04:00.872 "timeout_us": 0, 00:04:00.872 "timeout_admin_us": 0, 00:04:00.872 "keep_alive_timeout_ms": 10000, 00:04:00.872 "arbitration_burst": 0, 00:04:00.872 "low_priority_weight": 0, 00:04:00.872 "medium_priority_weight": 0, 00:04:00.872 "high_priority_weight": 0, 00:04:00.872 "nvme_adminq_poll_period_us": 10000, 00:04:00.872 "nvme_ioq_poll_period_us": 0, 00:04:00.872 "io_queue_requests": 0, 00:04:00.872 "delay_cmd_submit": true, 00:04:00.872 "transport_retry_count": 4, 00:04:00.872 "bdev_retry_count": 3, 00:04:00.872 "transport_ack_timeout": 0, 00:04:00.872 "ctrlr_loss_timeout_sec": 0, 00:04:00.872 "reconnect_delay_sec": 0, 00:04:00.872 "fast_io_fail_timeout_sec": 0, 00:04:00.872 "disable_auto_failback": false, 00:04:00.872 "generate_uuids": false, 00:04:00.872 "transport_tos": 0, 00:04:00.872 "nvme_error_stat": false, 00:04:00.872 "rdma_srq_size": 0, 00:04:00.872 "io_path_stat": false, 00:04:00.872 "allow_accel_sequence": false, 00:04:00.872 "rdma_max_cq_size": 0, 00:04:00.872 "rdma_cm_event_timeout_ms": 0, 00:04:00.872 "dhchap_digests": [ 00:04:00.872 "sha256", 00:04:00.872 "sha384", 00:04:00.872 "sha512" 00:04:00.872 ], 00:04:00.872 "dhchap_dhgroups": [ 00:04:00.872 "null", 00:04:00.872 "ffdhe2048", 00:04:00.872 "ffdhe3072", 00:04:00.872 "ffdhe4096", 00:04:00.872 "ffdhe6144", 00:04:00.872 "ffdhe8192" 00:04:00.872 ] 00:04:00.872 } 00:04:00.872 }, 00:04:00.872 { 00:04:00.872 "method": "bdev_nvme_set_hotplug", 00:04:00.872 "params": { 00:04:00.872 "period_us": 100000, 00:04:00.873 "enable": false 00:04:00.873 } 00:04:00.873 }, 00:04:00.873 
00:04:00.873 {
00:04:00.873 "method": "bdev_wait_for_examine"
00:04:00.873 }
00:04:00.873 ]
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "subsystem": "scsi",
00:04:00.873 "config": null
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "subsystem": "nvmf",
00:04:00.873 "config": [
00:04:00.873 {
00:04:00.873 "method": "nvmf_set_config",
00:04:00.873 "params": {
00:04:00.873 "discovery_filter": "match_any",
00:04:00.873 "admin_cmd_passthru": {
00:04:00.873 "identify_ctrlr": false
00:04:00.873 }
00:04:00.873 }
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "method": "nvmf_set_max_subsystems",
00:04:00.873 "params": {
00:04:00.873 "max_subsystems": 1024
00:04:00.873 }
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "method": "nvmf_set_crdt",
00:04:00.873 "params": {
00:04:00.873 "crdt1": 0,
00:04:00.873 "crdt2": 0,
00:04:00.873 "crdt3": 0
00:04:00.873 }
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "method": "nvmf_create_transport",
00:04:00.873 "params": {
00:04:00.873 "trtype": "TCP",
00:04:00.873 "max_queue_depth": 128,
00:04:00.873 "max_io_qpairs_per_ctrlr": 127,
00:04:00.873 "in_capsule_data_size": 4096,
00:04:00.873 "max_io_size": 131072,
00:04:00.873 "io_unit_size": 131072,
00:04:00.873 "max_aq_depth": 128,
00:04:00.873 "num_shared_buffers": 511,
00:04:00.873 "buf_cache_size": 4294967295,
00:04:00.873 "dif_insert_or_strip": false,
00:04:00.873 "zcopy": false,
00:04:00.873 "c2h_success": true,
00:04:00.873 "sock_priority": 0,
00:04:00.873 "abort_timeout_sec": 1,
00:04:00.873 "ack_timeout": 0,
00:04:00.873 "data_wr_pool_size": 0
00:04:00.873 }
00:04:00.873 }
00:04:00.873 ]
00:04:00.873 },
00:04:00.873 {
00:04:00.873 "subsystem": "iscsi",
00:04:00.873 "config": [
00:04:00.873 {
00:04:00.873 "method": "iscsi_set_options",
00:04:00.873 "params": {
00:04:00.873 "node_base": "iqn.2016-06.io.spdk",
00:04:00.873 "max_sessions": 128,
00:04:00.873 "max_connections_per_session": 2,
00:04:00.873 "max_queue_depth": 64,
00:04:00.873 "default_time2wait": 2,
00:04:00.873 "default_time2retain": 20,
00:04:00.873 "first_burst_length": 8192,
00:04:00.873 "immediate_data": true,
00:04:00.873 "allow_duplicated_isid": false,
00:04:00.873 "error_recovery_level": 0,
00:04:00.873 "nop_timeout": 60,
00:04:00.873 "nop_in_interval": 30,
00:04:00.873 "disable_chap": false,
00:04:00.873 "require_chap": false,
00:04:00.873 "mutual_chap": false,
00:04:00.873 "chap_group": 0,
00:04:00.873 "max_large_datain_per_connection": 64,
00:04:00.873 "max_r2t_per_connection": 4,
00:04:00.873 "pdu_pool_size": 36864,
00:04:00.873 "immediate_data_pool_size": 16384,
00:04:00.873 "data_out_pool_size": 2048
00:04:00.873 }
00:04:00.873 }
00:04:00.873 ]
00:04:00.873 }
00:04:00.873 ]
00:04:00.873 }
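The file dumped above is the entire reproducible state of the target: an array of subsystems, each carrying an ordered config list of {method, params} pairs that spdk_tgt replays verbatim at startup. A one-liner to summarize such a file (a sketch, assuming jq is available):

    jq -r '.subsystems[] | .subsystem as $s | .config[]?.method | $s + ": " + .' config.json

For this run that would print lines like "sock: sock_impl_set_options" and "nvmf: nvmf_create_transport"; subsystems with an empty or null config, such as vmd and scsi here, simply print nothing.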
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45732
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 45732 ']'
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 45732
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # ps -c -o command 45732
00:04:00.873 21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # tail -1
21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # process_name=spdk_tgt
21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']'
killing process with pid 45732
21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 45732'
21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 45732
21:40:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 45732
00:04:01.131 21:40:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45750
00:04:01.131 21:40:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:01.131 21:40:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45750
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 45750 ']'
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 45750
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # ps -c -o command 45750
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # tail -1
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # process_name=spdk_tgt
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']'
killing process with pid 45750
21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 45750'
21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 45750
21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 45750
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
21:40:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:06.393
00:04:06.393 real 0m6.821s
00:04:06.393 user 0m6.153s
00:04:06.393 sys 0m1.280s
21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable
21:40:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 END TEST skip_rpc_with_json
00:04:06.393 ************************************
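That closes the save/replay loop this subtest exists for: configure over RPC, save_config to a file, restart the target from the file alone, and prove the state came back by grepping the relaunch log for the transport's init banner. Condensed to its skeleton (a sketch; paths abbreviated relative to the repo root):

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt   # replay recreated the TCP transport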
00:04:06.393 21:40:21 skip_rpc -- common/autotest_common.sh@1136 -- # return 0
21:40:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
21:40:21 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
21:40:21 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable
21:40:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 START TEST skip_rpc_with_delay
00:04:06.393 ************************************
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:06.393 21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:06.393 [2024-07-15 21:40:21.494303] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:06.393 [2024-07-15 21:40:21.494649] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
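Those two ERRORs are the expected outcome, not a failure: --wait-for-rpc postpones subsystem initialization until an RPC arrives, which can never happen under --no-rpc-server, so spdk_tgt refuses to start and the NOT wrapper counts the non-zero exit as a pass. The bare check, sketched without the harness:

    ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must refuse to start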
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 ))
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]]
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:04:06.393
00:04:06.393 real 0m0.013s
00:04:06.393 user 0m0.014s
00:04:06.393 sys 0m0.000s
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable
21:40:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 ************************************
00:04:06.393 END TEST skip_rpc_with_delay
00:04:06.393 ************************************
00:04:06.393 21:40:21 skip_rpc -- common/autotest_common.sh@1136 -- # return 0
21:40:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
21:40:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']'
21:40:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:06.393
00:04:06.393 real 0m12.494s
00:04:06.393 user 0m11.107s
00:04:06.393 sys 0m2.081s
21:40:21 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:06.393 ************************************
00:04:06.393 END TEST skip_rpc
00:04:06.393 ************************************
21:40:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:06.393 21:40:21 -- common/autotest_common.sh@1136 -- # return 0
00:04:06.393 21:40:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
21:40:21 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
21:40:21 -- common/autotest_common.sh@1099 -- # xtrace_disable
21:40:21 -- common/autotest_common.sh@10 -- # set +x
00:04:06.652 ************************************
00:04:06.652 START TEST rpc_client
00:04:06.652 ************************************
00:04:06.652 21:40:21 rpc_client -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:06.652 * Looking for test storage...
00:04:06.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:06.652 21:40:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:06.652 OK
00:04:06.652 21:40:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:06.652
00:04:06.652 real 0m0.166s
00:04:06.652 user 0m0.129s
00:04:06.652 sys 0m0.113s
00:04:06.652 21:40:21 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:06.652 21:40:21 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:06.652 ************************************
00:04:06.652 END TEST rpc_client
00:04:06.652 ************************************
00:04:06.652 21:40:21 -- common/autotest_common.sh@1136 -- # return 0
00:04:06.652 21:40:21 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
21:40:21 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
21:40:21 -- common/autotest_common.sh@1099 -- # xtrace_disable
21:40:21 -- common/autotest_common.sh@10 -- # set +x
00:04:06.652 ************************************
00:04:06.652 START TEST json_config
00:04:06.652 ************************************
00:04:06.909 21:40:21 json_config -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:06.909 21:40:21 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:06.909 21:40:21 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:04:06.909 21:40:21 json_config -- nvmf/common.sh@7 -- # return 0
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:06.909 21:40:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
"${FUNCNAME}" "${LINENO}"' ERR 00:04:06.910 INFO: JSON configuration test init 00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.910 21:40:21 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.910 21:40:21 json_config -- json_config/common.sh@9 -- # local app=target 00:04:06.910 21:40:21 json_config -- json_config/common.sh@10 -- # shift 00:04:06.910 21:40:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.910 21:40:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.910 21:40:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.910 21:40:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.910 21:40:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.910 21:40:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45905 00:04:06.910 Waiting for target to run... 00:04:06.910 21:40:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.910 21:40:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.910 21:40:21 json_config -- json_config/common.sh@25 -- # waitforlisten 45905 /var/tmp/spdk_tgt.sock 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@823 -- # '[' -z 45905 ']' 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:06.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:06.910 21:40:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.910 [2024-07-15 21:40:21.952689] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:06.910 [2024-07-15 21:40:21.952912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:07.167 EAL: TSC is not safe to use in SMP mode 00:04:07.167 EAL: TSC is not invariant 00:04:07.167 [2024-07-15 21:40:22.241104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.167 [2024-07-15 21:40:22.327714] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:07.167 [2024-07-15 21:40:22.329927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:04:08.100
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@856 -- # return 0
00:04:08.100 21:40:23 json_config -- json_config/common.sh@26 -- # echo ''
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:08.100 21:40:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:08.100 21:40:23 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:04:08.100 21:40:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:08.357 [2024-07-15 21:40:23.354119] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:08.357 21:40:23 json_config -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:08.357 21:40:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:04:08.357 21:40:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:08.357 21:40:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@48 -- # local get_types
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:04:08.616 21:40:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:08.616 21:40:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@55 -- # return 0
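Everything the bdev zoo below is validated against travels over the notification bus checked here: the target advertises exactly the bdev_register and bdev_unregister event types, and each bdev created next must surface as an event. The two raw queries, as a sketch against this run's socket (-i 0 asks for every event since the beginning; the harness later drops the trailing :id before comparing):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'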
21:40:23 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]]
21:40:23 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config
00:04:08.616 21:40:23 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config
21:40:23 json_config -- common/autotest_common.sh@716 -- # xtrace_disable
21:40:23 json_config -- common/autotest_common.sh@10 -- # set +x
21:40:23 json_config -- json_config/json_config.sh@107 -- # expected_notifications=()
21:40:23 json_config -- json_config/json_config.sh@107 -- # local expected_notifications
21:40:23 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications))
21:40:23 json_config -- json_config/json_config.sh@111 -- # get_notifications
21:40:23 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id
21:40:23 json_config -- json_config/json_config.sh@61 -- # IFS=:
21:40:23 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
21:40:23 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0
21:40:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
21:40:23 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:04:08.873 21:40:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1
21:40:24 json_config -- json_config/json_config.sh@61 -- # IFS=:
21:40:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
21:40:24 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]]
21:40:24 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1
21:40:24 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2
21:40:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:04:09.131 Nvme0n1p0 Nvme0n1p1
00:04:09.131 21:40:24 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3
00:04:09.131 21:40:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:04:09.697 [2024-07-15 21:40:24.578289] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:09.697 [2024-07-15 21:40:24.578361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:09.697
00:04:09.697 21:40:24 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:04:09.697 21:40:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:04:09.697 Malloc3
00:04:09.955 21:40:24 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:04:09.955 21:40:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:04:09.955 [2024-07-15 21:40:25.110309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:04:09.955 [2024-07-15 21:40:25.110375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:09.955 [2024-07-15 21:40:25.110405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2311fc38180
00:04:09.955 [2024-07-15 21:40:25.110414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:09.955 [2024-07-15 21:40:25.111096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:09.955 [2024-07-15 21:40:25.111126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:04:09.955 PTBdevFromMalloc3
00:04:09.955 21:40:25 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512
00:04:09.955 21:40:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:04:10.272 Null0
00:04:10.272 21:40:25 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:04:10.272 21:40:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:04:10.530 Malloc0
00:04:10.530 21:40:25 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:04:10.530 21:40:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:04:10.788 Malloc1
00:04:10.788 21:40:25 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:04:10.788 21:40:25 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:04:11.355 102400+0 records in
00:04:11.355 102400+0 records out
00:04:11.355 104857600 bytes transferred in 0.337869 secs (310349800 bytes/sec)
00:04:11.355 21:40:26 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:04:11.355 21:40:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:04:11.355 aio_disk
00:04:11.355 21:40:26 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk)
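aio_disk is the only bdev in this set backed by a real file, so the test materializes one first; 102400 blocks of 1024 bytes is exactly the 104857600 bytes dd reports. Reduced to a recipe (a sketch, keeping the same file name and 1 KiB logical block size as above):

    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024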
00:04:11.355 21:40:26 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:04:11.355 21:40:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:04:11.613 d95117c7-42f2-11ef-9f7f-e9a656123a8b
00:04:11.613 21:40:26 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:04:11.613 21:40:26 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:04:11.613 21:40:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:04:12.179 21:40:27 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
21:40:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:04:12.437 21:40:27 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
21:40:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:04:12.693 21:40:27 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
21:40:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:04:12.950 21:40:27 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]]
21:40:27 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]]
21:40:27 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b
21:40:27 json_config -- json_config/json_config.sh@67 -- # local events_to_check
21:40:27 json_config -- json_config/json_config.sh@68 -- # local recorded_events
21:40:27 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort))
21:40:27 json_config -- json_config/json_config.sh@71 -- # sort
21:40:27 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b
21:40:27 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort))
21:40:27 json_config -- json_config/json_config.sh@72 -- # get_notifications
21:40:27 json_config -- json_config/json_config.sh@72 -- # sort
21:40:27 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id
21:40:27 json_config -- json_config/json_config.sh@61 -- # IFS=:
21:40:27 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
21:40:27 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0
21:40:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
21:40:27 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.210 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # IFS=:
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\9\8\8\a\4\d\2\-\4\2\f\2\-\1\1\e\f\-\9\f\7\f\-\e\9\a\6\5\6\1\2\3\a\8\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\9\b\0\f\0\5\e\-\4\2\f\2\-\1\1\e\f\-\9\f\7\f\-\e\9\a\6\5\6\1\2\3\a\8\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\9\d\d\8\1\f\5\-\4\2\f\2\-\1\1\e\f\-\9\f\7\f\-\e\9\a\6\5\6\1\2\3\a\8\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\a\0\2\2\0\e\7\-\4\2\f\2\-\1\1\e\f\-\9\f\7\f\-\e\9\a\6\5\6\1\2\3\a\8\b ]]
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@86 -- # cat
00:04:13.211 21:40:28 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 Expected events matched:
00:04:13.211 bdev_register:Malloc0
00:04:13.211 bdev_register:Malloc0p0
00:04:13.211 bdev_register:Malloc0p1
00:04:13.211 bdev_register:Malloc0p2
00:04:13.211 bdev_register:Malloc1
00:04:13.211 bdev_register:Malloc3
00:04:13.211 bdev_register:Null0
00:04:13.211 bdev_register:Nvme0n1
00:04:13.211 bdev_register:Nvme0n1p0
00:04:13.211 bdev_register:Nvme0n1p1
00:04:13.211 bdev_register:PTBdevFromMalloc3
00:04:13.211 bdev_register:aio_disk
00:04:13.211 bdev_register:d988a4d2-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 bdev_register:d9b0f05e-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 bdev_register:d9dd81f5-42f2-11ef-9f7f-e9a656123a8b
00:04:13.211 bdev_register:da0220e7-42f2-11ef-9f7f-e9a656123a8b
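The 'Expected events matched:' block is the verdict of the whole setup phase: the expected list and the recorded notifications are each printed one entry per line, sorted, and compared element-wise, so the ordering of the RPCs never matters, only the resulting set of events. An equivalent standalone check (a sketch; expected is assumed to hold entries like bdev_register:Malloc0, and the trailing event ids are dropped the same way the harness does):

    diff <(printf '%s\n' "${expected[@]}" | sort) \
         <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
             | jq -r '.[] | "\(.type):\(.ctx)"' | sort)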
00:04:14.063 21:40:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:14.063 21:40:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:14.063 21:40:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:14.063 21:40:28 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:14.063 [2024-07-15 21:40:29.106729] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:14.322 Calling clear_iscsi_subsystem 00:04:14.322 Calling clear_nvmf_subsystem 00:04:14.322 Calling clear_bdev_subsystem 00:04:14.322 21:40:29 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:14.322 21:40:29 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:14.322 21:40:29 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:14.323 21:40:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.323 21:40:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:14.323 21:40:29 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:14.580 21:40:29 json_config -- json_config/json_config.sh@345 -- # break 00:04:14.580 21:40:29 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:14.580 21:40:29 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:14.580 21:40:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:14.580 21:40:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.580 21:40:29 json_config -- json_config/common.sh@35 -- # [[ -n 45905 ]] 00:04:14.580 21:40:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45905 00:04:14.580 21:40:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.580 21:40:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.580 21:40:29 json_config -- json_config/common.sh@41 -- # kill -0 45905 00:04:14.580 21:40:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.148 21:40:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.148 21:40:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.148 21:40:30 json_config -- json_config/common.sh@41 -- # kill -0 45905 00:04:15.148 SPDK target shutdown done 00:04:15.148 21:40:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.148 21:40:30 json_config -- json_config/common.sh@43 -- # break 00:04:15.148 21:40:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.148 21:40:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.148 INFO: relaunching applications... 00:04:15.148 21:40:30 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:15.148 21:40:30 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:15.148 21:40:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:15.148 21:40:30 json_config -- json_config/common.sh@10 -- # shift 00:04:15.148 21:40:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:15.148 21:40:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:15.148 21:40:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:15.148 21:40:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.148 21:40:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.148 21:40:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46095 00:04:15.148 21:40:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:15.148 Waiting for target to run... 00:04:15.148 21:40:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:15.148 21:40:30 json_config -- json_config/common.sh@25 -- # waitforlisten 46095 /var/tmp/spdk_tgt.sock 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@823 -- # '[' -z 46095 ']' 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:15.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:15.148 21:40:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.148 [2024-07-15 21:40:30.183490] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:15.148 [2024-07-15 21:40:30.183655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:15.406 EAL: TSC is not safe to use in SMP mode 00:04:15.406 EAL: TSC is not invariant 00:04:15.406 [2024-07-15 21:40:30.447171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.406 [2024-07-15 21:40:30.542731] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:15.406 [2024-07-15 21:40:30.545421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.664 [2024-07-15 21:40:30.686713] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:15.664 [2024-07-15 21:40:30.686776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:15.664 [2024-07-15 21:40:30.694696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:15.664 [2024-07-15 21:40:30.694725] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:15.664 [2024-07-15 21:40:30.702711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:15.664 [2024-07-15 21:40:30.702738] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:15.664 [2024-07-15 21:40:30.702762] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:15.664 [2024-07-15 21:40:30.710712] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:15.664 [2024-07-15 21:40:30.784936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:15.664 [2024-07-15 21:40:30.784984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.664 [2024-07-15 21:40:30.785011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2a85ef037780 00:04:15.664 [2024-07-15 21:40:30.785020] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.664 [2024-07-15 21:40:30.785097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.664 [2024-07-15 21:40:30.785108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:16.230 21:40:31 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:16.230 00:04:16.230 21:40:31 json_config -- common/autotest_common.sh@856 -- # return 0 00:04:16.230 21:40:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.230 21:40:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:16.230 INFO: Checking if target configuration is the same... 00:04:16.230 21:40:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:16.230 21:40:31 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.bxAiYw /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:16.230 + '[' 2 -ne 2 ']' 00:04:16.230 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:16.230 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:16.230 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:16.230 +++ basename /tmp//sh-np.bxAiYw 00:04:16.230 ++ mktemp /tmp/sh-np.bxAiYw.XXX 00:04:16.230 + tmp_file_1=/tmp/sh-np.bxAiYw.9eO 00:04:16.230 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:16.230 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.230 + tmp_file_2=/tmp/spdk_tgt_config.json.Tra 00:04:16.230 + ret=0 00:04:16.230 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:16.230 21:40:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:16.230 21:40:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.798 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:16.798 + diff -u /tmp/sh-np.bxAiYw.9eO /tmp/spdk_tgt_config.json.Tra 00:04:16.798 INFO: JSON config files are the same 00:04:16.798 + echo 'INFO: JSON config files are the same' 00:04:16.798 + rm /tmp/sh-np.bxAiYw.9eO /tmp/spdk_tgt_config.json.Tra 00:04:16.798 + exit 0 00:04:16.798 21:40:31 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:16.798 INFO: changing configuration and checking if this can be detected... 00:04:16.798 21:40:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:16.798 21:40:31 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.798 21:40:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.056 21:40:32 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.ALKSaZ /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:17.056 + '[' 2 -ne 2 ']' 00:04:17.056 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:17.056 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:17.056 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:17.056 +++ basename /tmp//sh-np.ALKSaZ 00:04:17.056 ++ mktemp /tmp/sh-np.ALKSaZ.XXX 00:04:17.056 + tmp_file_1=/tmp/sh-np.ALKSaZ.QX2 00:04:17.056 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:17.056 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.056 + tmp_file_2=/tmp/spdk_tgt_config.json.rPL 00:04:17.056 + ret=0 00:04:17.056 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:17.056 21:40:32 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:17.056 21:40:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.314 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:17.573 + diff -u /tmp/sh-np.ALKSaZ.QX2 /tmp/spdk_tgt_config.json.rPL 00:04:17.573 + ret=1 00:04:17.573 + echo '=== Start of file: /tmp/sh-np.ALKSaZ.QX2 ===' 00:04:17.573 + cat /tmp/sh-np.ALKSaZ.QX2 00:04:17.573 + echo '=== End of file: /tmp/sh-np.ALKSaZ.QX2 ===' 00:04:17.573 + echo '' 00:04:17.573 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rPL ===' 00:04:17.573 + cat /tmp/spdk_tgt_config.json.rPL 00:04:17.573 + echo '=== End of file: /tmp/spdk_tgt_config.json.rPL ===' 00:04:17.573 + echo '' 00:04:17.573 + rm /tmp/sh-np.ALKSaZ.QX2 /tmp/spdk_tgt_config.json.rPL 00:04:17.573 + exit 1 00:04:17.573 INFO: configuration change detected. 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:17.573 21:40:32 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:17.573 21:40:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 46095 ]] 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:17.573 21:40:32 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:17.573 21:40:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:04:17.573 21:40:32 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:17.573 21:40:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:17.831 21:40:32 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:17.831 21:40:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:18.090 21:40:33 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:18.090 21:40:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
00:04:18.347 21:40:33 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:18.347 21:40:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:18.605 21:40:33 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:18.605 21:40:33 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:04:18.605 21:40:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:18.605 21:40:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.605 21:40:33 json_config -- json_config/json_config.sh@323 -- # killprocess 46095 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@942 -- # '[' -z 46095 ']' 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@946 -- # kill -0 46095 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@947 -- # uname 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@950 -- # ps -c -o command 46095 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@950 -- # tail -1 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:04:18.605 killing process with pid 46095 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 46095' 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@961 -- # kill 46095 00:04:18.605 21:40:33 json_config -- common/autotest_common.sh@966 -- # wait 46095 00:04:18.861 21:40:33 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:18.861 21:40:33 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:18.861 21:40:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.861 21:40:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.861 21:40:33 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:18.861 INFO: Success 00:04:18.861 21:40:33 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:18.861 00:04:18.861 real 0m12.145s 00:04:18.861 user 0m19.278s 00:04:18.861 sys 0m1.931s 00:04:18.861 21:40:33 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:18.861 21:40:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.861 ************************************ 00:04:18.861 END TEST json_config 00:04:18.861 ************************************ 00:04:18.861 21:40:33 -- common/autotest_common.sh@1136 -- # return 0 00:04:18.861 21:40:33 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:18.861 21:40:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:18.861 21:40:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:18.861 21:40:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.861 ************************************ 00:04:18.861 START TEST json_config_extra_key 
00:04:18.861 ************************************ 00:04:18.861 21:40:33 json_config_extra_key -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.119 21:40:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:19.119 21:40:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:19.119 21:40:34 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.119 INFO: launching applications... 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:19.119 21:40:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46228 00:04:19.119 Waiting for target to run... 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:19.119 21:40:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46228 /var/tmp/spdk_tgt.sock 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 46228 ']' 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:19.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:19.119 21:40:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.119 [2024-07-15 21:40:34.124948] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:19.119 [2024-07-15 21:40:34.125091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:19.376 EAL: TSC is not safe to use in SMP mode 00:04:19.376 EAL: TSC is not invariant 00:04:19.376 [2024-07-15 21:40:34.386282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.376 [2024-07-15 21:40:34.473138] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:19.376 [2024-07-15 21:40:34.475343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.323 21:40:35 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:20.323 21:40:35 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:04:20.323 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:20.323 INFO: shutting down applications... 00:04:20.323 21:40:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:20.323 21:40:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46228 ]] 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46228 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46228 00:04:20.323 21:40:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46228 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.887 SPDK target shutdown done 00:04:20.887 Success 00:04:20.887 21:40:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.887 21:40:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:20.887 00:04:20.887 real 0m1.860s 00:04:20.887 user 0m1.794s 00:04:20.887 sys 0m0.479s 00:04:20.887 21:40:35 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:20.887 21:40:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.887 ************************************ 00:04:20.887 END TEST json_config_extra_key 00:04:20.887 ************************************ 00:04:20.887 21:40:35 -- common/autotest_common.sh@1136 -- # return 0 00:04:20.887 21:40:35 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.887 21:40:35 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:20.887 21:40:35 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:20.887 21:40:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.887 ************************************ 00:04:20.887 START TEST alias_rpc 00:04:20.887 ************************************ 00:04:20.887 21:40:35 alias_rpc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:20.887 * Looking for test storage... 
00:04:20.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:20.887 21:40:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:20.887 21:40:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46282 00:04:20.887 21:40:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46282 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 46282 ']' 00:04:20.887 21:40:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:20.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:20.887 21:40:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.887 [2024-07-15 21:40:36.037542] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:20.887 [2024-07-15 21:40:36.037812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:21.463 EAL: TSC is not safe to use in SMP mode 00:04:21.463 EAL: TSC is not invariant 00:04:21.463 [2024-07-15 21:40:36.577430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.721 [2024-07-15 21:40:36.670804] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:21.721 [2024-07-15 21:40:36.673004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.287 21:40:37 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:22.287 21:40:37 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:04:22.287 21:40:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:22.546 21:40:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46282 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 46282 ']' 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 46282 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@950 -- # ps -c -o command 46282 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@950 -- # tail -1 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:04:22.546 killing process with pid 46282 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 46282' 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@961 -- # kill 46282 00:04:22.546 21:40:37 alias_rpc -- common/autotest_common.sh@966 -- # wait 46282 00:04:22.811 00:04:22.811 real 0m1.894s 00:04:22.811 user 0m2.109s 00:04:22.811 sys 0m0.776s 00:04:22.811 21:40:37 alias_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:22.811 21:40:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.811 ************************************ 00:04:22.811 END TEST alias_rpc 00:04:22.811 ************************************ 00:04:22.811 21:40:37 -- common/autotest_common.sh@1136 -- # return 0 00:04:22.811 21:40:37 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:22.811 21:40:37 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:22.811 21:40:37 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:22.811 21:40:37 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:22.811 21:40:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.811 ************************************ 00:04:22.811 START TEST spdkcli_tcp 00:04:22.811 ************************************ 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:22.811 * Looking for test storage... 
00:04:22.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46347 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46347 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 46347 ']' 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.811 21:40:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:22.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:22.811 21:40:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.811 [2024-07-15 21:40:37.979476] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:22.811 [2024-07-15 21:40:37.979694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:23.389 EAL: TSC is not safe to use in SMP mode 00:04:23.389 EAL: TSC is not invariant 00:04:23.389 [2024-07-15 21:40:38.543990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.647 [2024-07-15 21:40:38.623719] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:23.647 [2024-07-15 21:40:38.623786] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:23.647 [2024-07-15 21:40:38.636459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.647 [2024-07-15 21:40:38.636415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.905 21:40:38 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:23.905 21:40:38 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:04:23.905 21:40:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.905 21:40:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46355 00:04:23.905 21:40:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.163 [ 00:04:24.163 "spdk_get_version", 00:04:24.163 "rpc_get_methods", 00:04:24.163 "env_dpdk_get_mem_stats", 00:04:24.163 "trace_get_info", 00:04:24.163 "trace_get_tpoint_group_mask", 00:04:24.163 "trace_disable_tpoint_group", 00:04:24.163 "trace_enable_tpoint_group", 00:04:24.163 "trace_clear_tpoint_mask", 00:04:24.163 "trace_set_tpoint_mask", 00:04:24.163 "notify_get_notifications", 00:04:24.164 "notify_get_types", 00:04:24.164 "accel_get_stats", 00:04:24.164 "accel_set_options", 00:04:24.164 "accel_set_driver", 00:04:24.164 "accel_crypto_key_destroy", 00:04:24.164 "accel_crypto_keys_get", 00:04:24.164 "accel_crypto_key_create", 00:04:24.164 "accel_assign_opc", 00:04:24.164 "accel_get_module_info", 00:04:24.164 "accel_get_opc_assignments", 00:04:24.164 "bdev_get_histogram", 00:04:24.164 "bdev_enable_histogram", 00:04:24.164 "bdev_set_qos_limit", 00:04:24.164 "bdev_set_qd_sampling_period", 00:04:24.164 "bdev_get_bdevs", 00:04:24.164 "bdev_reset_iostat", 00:04:24.164 "bdev_get_iostat", 00:04:24.164 "bdev_examine", 00:04:24.164 "bdev_wait_for_examine", 00:04:24.164 "bdev_set_options", 00:04:24.164 "keyring_get_keys", 00:04:24.164 "framework_get_pci_devices", 00:04:24.164 "framework_get_config", 00:04:24.164 "framework_get_subsystems", 00:04:24.164 "sock_get_default_impl", 00:04:24.164 "sock_set_default_impl", 00:04:24.164 "sock_impl_set_options", 00:04:24.164 "sock_impl_get_options", 00:04:24.164 "thread_set_cpumask", 00:04:24.164 "framework_get_governor", 00:04:24.164 "framework_get_scheduler", 00:04:24.164 "framework_set_scheduler", 00:04:24.164 "framework_get_reactors", 00:04:24.164 "thread_get_io_channels", 00:04:24.164 "thread_get_pollers", 00:04:24.164 "thread_get_stats", 00:04:24.164 "framework_monitor_context_switch", 00:04:24.164 "spdk_kill_instance", 00:04:24.164 "log_enable_timestamps", 00:04:24.164 "log_get_flags", 00:04:24.164 "log_clear_flag", 00:04:24.164 "log_set_flag", 00:04:24.164 "log_get_level", 00:04:24.164 "log_set_level", 00:04:24.164 "log_get_print_level", 00:04:24.164 "log_set_print_level", 00:04:24.164 "framework_enable_cpumask_locks", 00:04:24.164 "framework_disable_cpumask_locks", 00:04:24.164 "framework_wait_init", 00:04:24.164 "framework_start_init", 00:04:24.164 "iobuf_get_stats", 00:04:24.164 "iobuf_set_options", 00:04:24.164 "vmd_rescan", 00:04:24.164 "vmd_remove_device", 00:04:24.164 "vmd_enable", 00:04:24.164 "nvmf_stop_mdns_prr", 00:04:24.164 "nvmf_publish_mdns_prr", 00:04:24.164 "nvmf_subsystem_get_listeners", 00:04:24.164 "nvmf_subsystem_get_qpairs", 00:04:24.164 "nvmf_subsystem_get_controllers", 00:04:24.164 "nvmf_get_stats", 00:04:24.164 "nvmf_get_transports", 00:04:24.164 "nvmf_create_transport", 00:04:24.164 "nvmf_get_targets", 00:04:24.164 "nvmf_delete_target", 00:04:24.164 "nvmf_create_target", 00:04:24.164 
"nvmf_subsystem_allow_any_host", 00:04:24.164 "nvmf_subsystem_remove_host", 00:04:24.164 "nvmf_subsystem_add_host", 00:04:24.164 "nvmf_ns_remove_host", 00:04:24.164 "nvmf_ns_add_host", 00:04:24.164 "nvmf_subsystem_remove_ns", 00:04:24.164 "nvmf_subsystem_add_ns", 00:04:24.164 "nvmf_subsystem_listener_set_ana_state", 00:04:24.164 "nvmf_discovery_get_referrals", 00:04:24.164 "nvmf_discovery_remove_referral", 00:04:24.164 "nvmf_discovery_add_referral", 00:04:24.164 "nvmf_subsystem_remove_listener", 00:04:24.164 "nvmf_subsystem_add_listener", 00:04:24.164 "nvmf_delete_subsystem", 00:04:24.164 "nvmf_create_subsystem", 00:04:24.164 "nvmf_get_subsystems", 00:04:24.164 "nvmf_set_crdt", 00:04:24.164 "nvmf_set_config", 00:04:24.164 "nvmf_set_max_subsystems", 00:04:24.164 "scsi_get_devices", 00:04:24.164 "iscsi_get_histogram", 00:04:24.164 "iscsi_enable_histogram", 00:04:24.164 "iscsi_set_options", 00:04:24.164 "iscsi_get_auth_groups", 00:04:24.164 "iscsi_auth_group_remove_secret", 00:04:24.164 "iscsi_auth_group_add_secret", 00:04:24.164 "iscsi_delete_auth_group", 00:04:24.164 "iscsi_create_auth_group", 00:04:24.164 "iscsi_set_discovery_auth", 00:04:24.164 "iscsi_get_options", 00:04:24.164 "iscsi_target_node_request_logout", 00:04:24.164 "iscsi_target_node_set_redirect", 00:04:24.164 "iscsi_target_node_set_auth", 00:04:24.164 "iscsi_target_node_add_lun", 00:04:24.164 "iscsi_get_stats", 00:04:24.164 "iscsi_get_connections", 00:04:24.164 "iscsi_portal_group_set_auth", 00:04:24.164 "iscsi_start_portal_group", 00:04:24.164 "iscsi_delete_portal_group", 00:04:24.164 "iscsi_create_portal_group", 00:04:24.164 "iscsi_get_portal_groups", 00:04:24.164 "iscsi_delete_target_node", 00:04:24.164 "iscsi_target_node_remove_pg_ig_maps", 00:04:24.164 "iscsi_target_node_add_pg_ig_maps", 00:04:24.164 "iscsi_create_target_node", 00:04:24.164 "iscsi_get_target_nodes", 00:04:24.164 "iscsi_delete_initiator_group", 00:04:24.164 "iscsi_initiator_group_remove_initiators", 00:04:24.164 "iscsi_initiator_group_add_initiators", 00:04:24.164 "iscsi_create_initiator_group", 00:04:24.164 "iscsi_get_initiator_groups", 00:04:24.164 "keyring_file_remove_key", 00:04:24.164 "keyring_file_add_key", 00:04:24.164 "iaa_scan_accel_module", 00:04:24.164 "dsa_scan_accel_module", 00:04:24.164 "ioat_scan_accel_module", 00:04:24.164 "accel_error_inject_error", 00:04:24.164 "bdev_aio_delete", 00:04:24.164 "bdev_aio_rescan", 00:04:24.164 "bdev_aio_create", 00:04:24.164 "blobfs_create", 00:04:24.164 "blobfs_detect", 00:04:24.164 "blobfs_set_cache_size", 00:04:24.164 "bdev_zone_block_delete", 00:04:24.164 "bdev_zone_block_create", 00:04:24.164 "bdev_delay_delete", 00:04:24.164 "bdev_delay_create", 00:04:24.164 "bdev_delay_update_latency", 00:04:24.164 "bdev_split_delete", 00:04:24.164 "bdev_split_create", 00:04:24.164 "bdev_error_inject_error", 00:04:24.164 "bdev_error_delete", 00:04:24.164 "bdev_error_create", 00:04:24.164 "bdev_raid_set_options", 00:04:24.164 "bdev_raid_remove_base_bdev", 00:04:24.164 "bdev_raid_add_base_bdev", 00:04:24.164 "bdev_raid_delete", 00:04:24.164 "bdev_raid_create", 00:04:24.164 "bdev_raid_get_bdevs", 00:04:24.164 "bdev_lvol_set_parent_bdev", 00:04:24.164 "bdev_lvol_set_parent", 00:04:24.164 "bdev_lvol_check_shallow_copy", 00:04:24.164 "bdev_lvol_start_shallow_copy", 00:04:24.164 "bdev_lvol_grow_lvstore", 00:04:24.164 "bdev_lvol_get_lvols", 00:04:24.164 "bdev_lvol_get_lvstores", 00:04:24.165 "bdev_lvol_delete", 00:04:24.165 "bdev_lvol_set_read_only", 00:04:24.165 "bdev_lvol_resize", 00:04:24.165 "bdev_lvol_decouple_parent", 
00:04:24.165 "bdev_lvol_inflate", 00:04:24.165 "bdev_lvol_rename", 00:04:24.165 "bdev_lvol_clone_bdev", 00:04:24.165 "bdev_lvol_clone", 00:04:24.165 "bdev_lvol_snapshot", 00:04:24.165 "bdev_lvol_create", 00:04:24.165 "bdev_lvol_delete_lvstore", 00:04:24.165 "bdev_lvol_rename_lvstore", 00:04:24.165 "bdev_lvol_create_lvstore", 00:04:24.165 "bdev_passthru_delete", 00:04:24.165 "bdev_passthru_create", 00:04:24.165 "bdev_nvme_send_cmd", 00:04:24.165 "bdev_nvme_get_path_iostat", 00:04:24.165 "bdev_nvme_get_mdns_discovery_info", 00:04:24.165 "bdev_nvme_stop_mdns_discovery", 00:04:24.165 "bdev_nvme_start_mdns_discovery", 00:04:24.165 "bdev_nvme_set_multipath_policy", 00:04:24.165 "bdev_nvme_set_preferred_path", 00:04:24.165 "bdev_nvme_get_io_paths", 00:04:24.165 "bdev_nvme_remove_error_injection", 00:04:24.165 "bdev_nvme_add_error_injection", 00:04:24.165 "bdev_nvme_get_discovery_info", 00:04:24.165 "bdev_nvme_stop_discovery", 00:04:24.165 "bdev_nvme_start_discovery", 00:04:24.165 "bdev_nvme_get_controller_health_info", 00:04:24.165 "bdev_nvme_disable_controller", 00:04:24.165 "bdev_nvme_enable_controller", 00:04:24.165 "bdev_nvme_reset_controller", 00:04:24.165 "bdev_nvme_get_transport_statistics", 00:04:24.165 "bdev_nvme_apply_firmware", 00:04:24.165 "bdev_nvme_detach_controller", 00:04:24.165 "bdev_nvme_get_controllers", 00:04:24.165 "bdev_nvme_attach_controller", 00:04:24.165 "bdev_nvme_set_hotplug", 00:04:24.165 "bdev_nvme_set_options", 00:04:24.165 "bdev_null_resize", 00:04:24.165 "bdev_null_delete", 00:04:24.165 "bdev_null_create", 00:04:24.165 "bdev_malloc_delete", 00:04:24.165 "bdev_malloc_create" 00:04:24.165 ] 00:04:24.165 21:40:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.165 21:40:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:24.165 21:40:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46347 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 46347 ']' 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@946 -- # kill -0 46347 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # ps -c -o command 46347 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # tail -1 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:04:24.165 killing process with pid 46347 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 46347' 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 46347 00:04:24.165 21:40:39 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 46347 00:04:24.424 00:04:24.424 real 0m1.751s 00:04:24.424 user 0m2.651s 00:04:24.424 sys 0m0.773s 00:04:24.424 21:40:39 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:24.424 ************************************ 00:04:24.424 END TEST spdkcli_tcp 00:04:24.424 ************************************ 00:04:24.424 21:40:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.424 21:40:39 -- common/autotest_common.sh@1136 -- # return 
0 00:04:24.424 21:40:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.424 21:40:39 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:24.424 21:40:39 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:24.424 21:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:24.683 ************************************ 00:04:24.683 START TEST dpdk_mem_utility 00:04:24.683 ************************************ 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.683 * Looking for test storage... 00:04:24.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:24.683 21:40:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:24.683 21:40:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46426 00:04:24.683 21:40:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46426 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 46426 ']' 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.683 21:40:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:24.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:24.683 21:40:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.683 [2024-07-15 21:40:39.764005] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:24.683 [2024-07-15 21:40:39.764157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:25.250 EAL: TSC is not safe to use in SMP mode 00:04:25.250 EAL: TSC is not invariant 00:04:25.250 [2024-07-15 21:40:40.314572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.250 [2024-07-15 21:40:40.408141] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:25.250 [2024-07-15 21:40:40.410297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.817 { 00:04:25.817 "filename": "/tmp/spdk_mem_dump.txt" 00:04:25.817 } 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:25.817 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:25.817 1 heaps totaling size 2048.000000 MiB 00:04:25.817 size: 2048.000000 MiB heap id: 0 00:04:25.817 end heaps---------- 00:04:25.817 8 mempools totaling size 592.563660 MiB 00:04:25.817 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:25.817 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:25.817 size: 84.500549 MiB name: bdev_io_46426 00:04:25.817 size: 51.008362 MiB name: evtpool_46426 00:04:25.817 size: 50.000549 MiB name: msgpool_46426 00:04:25.817 size: 21.758911 MiB name: PDU_Pool 00:04:25.817 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:25.817 size: 0.026123 MiB name: Session_Pool 00:04:25.817 end mempools------- 00:04:25.817 6 memzones totaling size 4.142822 MiB 00:04:25.817 size: 1.000366 MiB name: RG_ring_0_46426 00:04:25.817 size: 1.000366 MiB name: RG_ring_1_46426 00:04:25.817 size: 1.000366 MiB name: RG_ring_4_46426 00:04:25.817 size: 1.000366 MiB name: RG_ring_5_46426 00:04:25.817 size: 0.125366 MiB name: RG_ring_2_46426 00:04:25.817 size: 0.015991 MiB name: RG_ring_3_46426 00:04:25.817 end memzones------- 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:25.817 heap id: 0 total size: 2048.000000 MiB number of busy elements: 41 number of free elements: 3 00:04:25.817 list of free elements. size: 1254.071533 MiB 00:04:25.817 element at address: 0x1060000000 with size: 1254.001099 MiB 00:04:25.817 element at address: 0x10c8000000 with size: 0.070129 MiB 00:04:25.817 element at address: 0x10d98b6000 with size: 0.000305 MiB 00:04:25.817 list of standard malloc elements. 
size: 197.218323 MiB 00:04:25.817 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:04:25.817 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:04:25.817 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:04:25.817 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:04:25.817 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:04:25.817 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:04:25.817 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:04:25.817 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:04:25.817 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6140 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6200 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6700 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b67c0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:04:25.817 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:04:25.817 list of memzone associated elements. 
size: 596.710144 MiB 00:04:25.817 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:04:25.817 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:25.817 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:04:25.817 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:25.817 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:04:25.817 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46426_0 00:04:25.817 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:04:25.817 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46426_0 00:04:25.817 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:04:25.817 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46426_0 00:04:25.817 element at address: 0x10c683d780 with size: 20.250671 MiB 00:04:25.817 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:25.817 element at address: 0x10ae700680 with size: 18.000671 MiB 00:04:25.817 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:25.817 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:04:25.817 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46426 00:04:25.817 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:04:25.817 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46426 00:04:25.817 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:04:25.817 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46426 00:04:25.817 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:04:25.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:25.817 element at address: 0x10c673b640 with size: 1.008118 MiB 00:04:25.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:25.817 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:04:25.817 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:25.817 element at address: 0x10af980b40 with size: 1.008118 MiB 00:04:25.817 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:25.817 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:04:25.817 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46426 00:04:25.817 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:04:25.817 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46426 00:04:25.817 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:04:25.817 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46426 00:04:25.817 element at address: 0x10ae600480 with size: 1.000488 MiB 00:04:25.817 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46426 00:04:25.817 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:04:25.817 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46426 00:04:25.817 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:04:25.817 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:25.817 element at address: 0x10af900940 with size: 0.500488 MiB 00:04:25.817 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:25.817 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:04:25.817 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:25.817 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:04:25.817 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46426 00:04:25.817 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:04:25.817 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:25.817 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:04:25.817 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:25.817 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:04:25.817 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46426 00:04:25.817 element at address: 0x10c8018080 with size: 0.002441 MiB 00:04:25.817 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:25.817 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:04:25.817 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46426 00:04:25.817 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:04:25.817 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46426 00:04:25.817 element at address: 0x10d98b65c0 with size: 0.000305 MiB 00:04:25.817 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:25.817 21:40:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46426 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 46426 ']' 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 46426 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # ps -c -o command 46426 00:04:25.817 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # tail -1 00:04:25.818 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:04:25.818 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:04:25.818 killing process with pid 46426 00:04:25.818 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 46426' 00:04:25.818 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 46426 00:04:25.818 21:40:40 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 46426 00:04:26.097 00:04:26.097 real 0m1.605s 00:04:26.097 user 0m1.613s 00:04:26.097 sys 0m0.747s 00:04:26.097 21:40:41 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:26.097 21:40:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.097 ************************************ 00:04:26.097 END TEST dpdk_mem_utility 00:04:26.097 ************************************ 00:04:26.097 21:40:41 -- common/autotest_common.sh@1136 -- # return 0 00:04:26.097 21:40:41 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:26.097 21:40:41 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:26.097 21:40:41 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:26.097 21:40:41 -- common/autotest_common.sh@10 -- # set +x 00:04:26.097 ************************************ 00:04:26.097 START TEST event 00:04:26.097 ************************************ 00:04:26.097 21:40:41 event -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:26.356 * Looking for test storage... 
00:04:26.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:26.356 21:40:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:26.356 21:40:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:26.356 21:40:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.356 21:40:41 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:04:26.356 21:40:41 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:26.356 21:40:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.356 ************************************ 00:04:26.356 START TEST event_perf 00:04:26.356 ************************************ 00:04:26.356 21:40:41 event.event_perf -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.356 Running I/O for 1 seconds...[2024-07-15 21:40:41.406131] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:26.356 [2024-07-15 21:40:41.406336] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:26.923 EAL: TSC is not safe to use in SMP mode 00:04:26.923 EAL: TSC is not invariant 00:04:26.923 [2024-07-15 21:40:41.958918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.923 [2024-07-15 21:40:42.042529] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:26.923 [2024-07-15 21:40:42.042609] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:26.923 [2024-07-15 21:40:42.042635] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:26.923 [2024-07-15 21:40:42.042643] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:26.923 [2024-07-15 21:40:42.046551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.923 Running I/O for 1 seconds...[2024-07-15 21:40:42.046772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.923 [2024-07-15 21:40:42.046662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:26.923 [2024-07-15 21:40:42.046766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:28.295 00:04:28.295 lcore 0: 2748295 00:04:28.295 lcore 1: 2748295 00:04:28.295 lcore 2: 2748294 00:04:28.295 lcore 3: 2748295 00:04:28.295 done. 
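Note: the event_perf run above (event.sh@45) starts one reactor per core in the 0xF mask and, after one second, prints how many events each lcore turned over — about 2.75M apiece here. To reproduce it from a built tree:

# event.sh@45: per-core event throughput, cores 0-3 (-m 0xF), 1 second (-t 1).
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
# Output ends with one counter per lcore, e.g. "lcore 0: 2748295";
# near-equal counters across lcores indicate even reactor scheduling.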
00:04:28.295 00:04:28.295 real 0m1.760s 00:04:28.295 user 0m4.177s 00:04:28.295 sys 0m0.577s 00:04:28.296 21:40:43 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:28.296 21:40:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.296 ************************************ 00:04:28.296 END TEST event_perf 00:04:28.296 ************************************ 00:04:28.296 21:40:43 event -- common/autotest_common.sh@1136 -- # return 0 00:04:28.296 21:40:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:28.296 21:40:43 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:04:28.296 21:40:43 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:28.296 21:40:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.296 ************************************ 00:04:28.296 START TEST event_reactor 00:04:28.296 ************************************ 00:04:28.296 21:40:43 event.event_reactor -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:28.296 [2024-07-15 21:40:43.213030] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:28.296 [2024-07-15 21:40:43.213288] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:28.553 EAL: TSC is not safe to use in SMP mode 00:04:28.553 EAL: TSC is not invariant 00:04:28.553 [2024-07-15 21:40:43.724452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.811 [2024-07-15 21:40:43.807315] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:28.811 [2024-07-15 21:40:43.809362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.743 test_start 00:04:29.743 oneshot 00:04:29.743 tick 100 00:04:29.743 tick 100 00:04:29.743 tick 250 00:04:29.743 tick 100 00:04:29.743 tick 100 00:04:29.743 tick 100 00:04:29.743 tick 250 00:04:29.743 tick 500 00:04:29.743 tick 100 00:04:29.743 tick 100 00:04:29.743 tick 250 00:04:29.743 tick 100 00:04:29.743 tick 100 00:04:29.743 test_end 00:04:29.743 00:04:29.743 real 0m1.715s 00:04:29.743 user 0m1.162s 00:04:29.743 sys 0m0.551s 00:04:29.743 21:40:44 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:29.743 21:40:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:29.743 ************************************ 00:04:29.743 END TEST event_reactor 00:04:29.743 ************************************ 00:04:30.000 21:40:44 event -- common/autotest_common.sh@1136 -- # return 0 00:04:30.000 21:40:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.000 21:40:44 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:04:30.000 21:40:44 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:30.000 21:40:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.000 ************************************ 00:04:30.000 START TEST event_reactor_perf 00:04:30.000 ************************************ 00:04:30.000 21:40:44 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.000 [2024-07-15 21:40:44.972733] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
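Note: the test_start/oneshot/tick trace above is the single-core reactor unit test (event.sh@46); judging from the output, "oneshot" fires once and each "tick <n>" line marks one expiry of a recurring timer (the exact registrations live in test/event/reactor/reactor.c, which this log does not show). Standalone:

# event.sh@46: single-core reactor smoke test, 1 second.
/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1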
00:04:30.000 [2024-07-15 21:40:44.973007] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:30.565 EAL: TSC is not safe to use in SMP mode 00:04:30.565 EAL: TSC is not invariant 00:04:30.565 [2024-07-15 21:40:45.528047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.565 [2024-07-15 21:40:45.617099] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:30.565 [2024-07-15 21:40:45.619266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.940 test_start 00:04:31.940 test_end 00:04:31.940 Performance: 3473401 events per second 00:04:31.940 00:04:31.940 real 0m1.766s 00:04:31.940 user 0m1.180s 00:04:31.940 sys 0m0.582s 00:04:31.940 21:40:46 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:31.940 ************************************ 00:04:31.940 END TEST event_reactor_perf 00:04:31.940 ************************************ 00:04:31.940 21:40:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:31.940 21:40:46 event -- common/autotest_common.sh@1136 -- # return 0 00:04:31.940 21:40:46 event -- event/event.sh@49 -- # uname -s 00:04:31.940 21:40:46 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:31.940 00:04:31.940 real 0m5.504s 00:04:31.940 user 0m6.630s 00:04:31.940 sys 0m1.891s 00:04:31.940 21:40:46 event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:31.940 ************************************ 00:04:31.940 21:40:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.940 END TEST event 00:04:31.940 ************************************ 00:04:31.940 21:40:46 -- common/autotest_common.sh@1136 -- # return 0 00:04:31.940 21:40:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:31.940 21:40:46 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:31.940 21:40:46 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:31.940 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.940 ************************************ 00:04:31.940 START TEST thread 00:04:31.940 ************************************ 00:04:31.940 21:40:46 thread -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:31.940 * Looking for test storage... 00:04:31.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:31.940 21:40:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:31.940 21:40:46 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:04:31.940 21:40:46 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:31.940 21:40:46 thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.940 ************************************ 00:04:31.940 START TEST thread_poller_perf 00:04:31.940 ************************************ 00:04:31.940 21:40:46 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:31.940 [2024-07-15 21:40:46.957183] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
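Note: thread.sh@11 launches poller_perf with -b 1000 -l 1 -t 1, which the banner that follows maps to 1000 registered pollers at a 1 microsecond period for one second:

# thread.sh@11: 1000 registered pollers, 1 us period, 1 second run.
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1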
00:04:31.940 [2024-07-15 21:40:46.957387] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:32.507 EAL: TSC is not safe to use in SMP mode 00:04:32.507 EAL: TSC is not invariant 00:04:32.507 [2024-07-15 21:40:47.479345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.507 [2024-07-15 21:40:47.560226] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:32.507 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:32.507 [2024-07-15 21:40:47.562450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.883 ====================================== 00:04:33.883 busy:2201601305 (cyc) 00:04:33.883 total_run_count: 5526000 00:04:33.883 tsc_hz: 2199998373 (cyc) 00:04:33.883 ====================================== 00:04:33.883 poller_cost: 398 (cyc), 180 (nsec) 00:04:33.883 00:04:33.883 real 0m1.726s 00:04:33.883 user 0m1.167s 00:04:33.883 sys 0m0.558s 00:04:33.883 21:40:48 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:33.883 ************************************ 00:04:33.883 END TEST thread_poller_perf 00:04:33.883 ************************************ 00:04:33.883 21:40:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.883 21:40:48 thread -- common/autotest_common.sh@1136 -- # return 0 00:04:33.883 21:40:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:33.883 21:40:48 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:04:33.883 21:40:48 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:33.883 21:40:48 thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.883 ************************************ 00:04:33.883 START TEST thread_poller_perf 00:04:33.883 ************************************ 00:04:33.883 21:40:48 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:33.883 [2024-07-15 21:40:48.721351] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:33.883 [2024-07-15 21:40:48.721582] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:34.140 EAL: TSC is not safe to use in SMP mode 00:04:34.140 EAL: TSC is not invariant 00:04:34.140 [2024-07-15 21:40:49.267378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.398 [2024-07-15 21:40:49.353589] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:34.398 Running 1000 pollers for 1 seconds with 0 microseconds period. 
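Note: the ====== summary of the -l 1 run above fully determines poller_cost: busy cycles divided by total_run_count, truncated, then converted to nanoseconds via tsc_hz. Recomputing this run's numbers:

# Recompute poller_cost from the counters printed in the summary above.
awk 'BEGIN {
    busy = 2201601305            # busy TSC cycles over the 1 s run
    runs = 5526000               # total_run_count
    hz   = 2199998373            # tsc_hz, cycles per second (~2.2 GHz)
    cyc  = int(busy / runs)      # 398 cycles per poller invocation
    nsec = int(cyc / hz * 1e9)   # 180 ns
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'
# Prints: poller_cost: 398 (cyc), 180 (nsec), matching the log. The
# zero-period (-l 0) run below lands at 31 cyc / 14 nsec by the same math.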
00:04:34.398 [2024-07-15 21:40:49.355821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.331 ====================================== 00:04:35.331 busy:2201050567 (cyc) 00:04:35.331 total_run_count: 70375000 00:04:35.331 tsc_hz: 2199998373 (cyc) 00:04:35.331 ====================================== 00:04:35.331 poller_cost: 31 (cyc), 14 (nsec) 00:04:35.331 00:04:35.331 real 0m1.752s 00:04:35.331 user 0m1.191s 00:04:35.331 sys 0m0.560s 00:04:35.331 21:40:50 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:35.331 ************************************ 00:04:35.331 END TEST thread_poller_perf 00:04:35.331 ************************************ 00:04:35.331 21:40:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.331 21:40:50 thread -- common/autotest_common.sh@1136 -- # return 0 00:04:35.331 21:40:50 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:35.331 21:40:50 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:35.331 21:40:50 thread -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:35.331 21:40:50 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:35.331 21:40:50 thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.331 ************************************ 00:04:35.331 START TEST thread_spdk_lock 00:04:35.331 ************************************ 00:04:35.331 21:40:50 thread.thread_spdk_lock -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:35.590 [2024-07-15 21:40:50.521154] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:35.590 [2024-07-15 21:40:50.521420] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:35.848 EAL: TSC is not safe to use in SMP mode 00:04:35.848 EAL: TSC is not invariant 00:04:36.105 [2024-07-15 21:40:51.041006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.105 [2024-07-15 21:40:51.125634] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:36.105 [2024-07-15 21:40:51.125711] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
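Note: thread.sh@18 runs the spdk_lock binary with no arguments; the *ERROR* spinlock lines below are the scenarios under test (a lock held while an SPDK thread goes off CPU, deadlock detection), not failures, as the PASS lines and the "100014 assertions passed / 0 assertions failed" summary confirm. Standalone:

# thread.sh@18: spinlock error-injection and two-core contention test.
# Expect intentional "unrecoverable spinlock error" output followed by
# PASS lines for contend, hold_by_poller and hold_by_message.
/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock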
00:04:36.105 [2024-07-15 21:40:51.128340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.105 [2024-07-15 21:40:51.128329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.668 [2024-07-15 21:40:51.568217] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:36.668 [2024-07-15 21:40:51.568279] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:36.668 [2024-07-15 21:40:51.568289] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x315be0 00:04:36.668 [2024-07-15 21:40:51.568754] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:36.668 [2024-07-15 21:40:51.568854] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:36.668 [2024-07-15 21:40:51.568863] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:36.668 Starting test contend 00:04:36.668 Worker Delay Wait us Hold us Total us 00:04:36.668 0 3 256840 164653 421493 00:04:36.668 1 5 159145 266944 426089 00:04:36.668 PASS test contend 00:04:36.668 Starting test hold_by_poller 00:04:36.668 PASS test hold_by_poller 00:04:36.668 Starting test hold_by_message 00:04:36.668 PASS test hold_by_message 00:04:36.668 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:36.668 100014 assertions passed 00:04:36.668 0 assertions failed 00:04:36.668 00:04:36.668 real 0m1.170s 00:04:36.668 user 0m1.058s 00:04:36.668 sys 0m0.549s 00:04:36.668 21:40:51 thread.thread_spdk_lock -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:36.668 ************************************ 00:04:36.668 END TEST thread_spdk_lock 00:04:36.668 ************************************ 00:04:36.668 21:40:51 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:36.668 21:40:51 thread -- common/autotest_common.sh@1136 -- # return 0 00:04:36.668 00:04:36.668 real 0m4.909s 00:04:36.668 user 0m3.559s 00:04:36.668 sys 0m1.813s 00:04:36.668 21:40:51 thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:36.668 21:40:51 thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.668 ************************************ 00:04:36.668 END TEST thread 00:04:36.668 ************************************ 00:04:36.668 21:40:51 -- common/autotest_common.sh@1136 -- # return 0 00:04:36.668 21:40:51 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:36.668 21:40:51 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:36.668 21:40:51 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:36.668 21:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.668 ************************************ 00:04:36.668 START TEST accel 00:04:36.668 ************************************ 00:04:36.668 21:40:51 accel -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:36.925 * Looking for test storage... 
00:04:36.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:36.925 21:40:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:36.925 21:40:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:36.925 21:40:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.925 21:40:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46726 00:04:36.925 21:40:51 accel -- accel/accel.sh@63 -- # waitforlisten 46726 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@823 -- # '[' -z 46726 ']' 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.925 21:40:51 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.CWg1EY 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:36.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:36.925 21:40:51 accel -- common/autotest_common.sh@10 -- # set +x 00:04:36.925 [2024-07-15 21:40:51.930016] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:36.925 [2024-07-15 21:40:51.930294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:37.491 EAL: TSC is not safe to use in SMP mode 00:04:37.491 EAL: TSC is not invariant 00:04:37.491 [2024-07-15 21:40:52.464895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.491 [2024-07-15 21:40:52.575932] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:37.491 21:40:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:37.491 21:40:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:37.491 21:40:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:37.491 21:40:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:37.491 21:40:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:37.491 21:40:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:37.491 21:40:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:37.491 21:40:52 accel -- accel/accel.sh@41 -- # jq -r . 00:04:37.491 [2024-07-15 21:40:52.586964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.058 21:40:53 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:38.058 21:40:53 accel -- common/autotest_common.sh@856 -- # return 0 00:04:38.058 21:40:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:38.058 21:40:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:38.058 21:40:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:38.058 21:40:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:38.058 21:40:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:38.058 21:40:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:38.058 21:40:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:38.058 21:40:53 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:38.058 21:40:53 accel -- common/autotest_common.sh@10 -- # set +x 00:04:38.058 21:40:53 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:38.058 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.058 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.058 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.058 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.058 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.058 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.058 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 
00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # IFS== 00:04:38.059 21:40:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:38.059 21:40:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:38.059 21:40:53 accel -- accel/accel.sh@75 -- # killprocess 46726 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@942 -- # '[' -z 46726 ']' 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@946 -- # kill -0 46726 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@947 -- # uname 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@950 -- # ps -c -o command 46726 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@950 -- # tail -1 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:04:38.059 killing process with pid 46726 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 46726' 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@961 -- # kill 46726 00:04:38.059 21:40:53 accel -- common/autotest_common.sh@966 -- # wait 46726 00:04:38.317 21:40:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:38.317 21:40:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@10 -- # set +x 00:04:38.317 21:40:53 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:04:38.317 21:40:53 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.qni4yB -h 00:04:38.317 21:40:53 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:38.317 21:40:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:38.317 21:40:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:38.317 21:40:53 accel -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:04:38.317 21:40:53 accel -- common/autotest_common.sh@10 -- # set +x 00:04:38.317 ************************************ 00:04:38.317 START TEST accel_missing_filename 00:04:38.317 ************************************ 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:38.317 21:40:53 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:04:38.317 21:40:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FmiQqy -t 1 -w compress 00:04:38.317 [2024-07-15 21:40:53.402722] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:38.317 [2024-07-15 21:40:53.403034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:38.886 EAL: TSC is not safe to use in SMP mode 00:04:38.886 EAL: TSC is not invariant 00:04:38.886 [2024-07-15 21:40:53.942756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.886 [2024-07-15 21:40:54.029775] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:38.886 21:40:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:38.886 [2024-07-15 21:40:54.039384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.886 [2024-07-15 21:40:54.041787] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.144 [2024-07-15 21:40:54.078366] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:39.144 A filename is required. 
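Note: accel_missing_filename (accel.sh@91) asserts the negative path: compress with no -l input file must fail with "A filename is required." In the trace the NOT helper inverts the nonzero exit status into a pass; shell's plain ! plays the same role in this sketch:

# accel.sh@91: compress with no input file is expected to fail;
# the leading ! converts the expected nonzero exit into success.
! /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress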
00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:39.144 00:04:39.144 real 0m0.805s 00:04:39.144 user 0m0.222s 00:04:39.144 sys 0m0.583s 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.144 21:40:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:39.144 ************************************ 00:04:39.144 END TEST accel_missing_filename 00:04:39.144 ************************************ 00:04:39.144 21:40:54 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:39.144 21:40:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:39.144 21:40:54 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:04:39.144 21:40:54 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:39.144 21:40:54 accel -- common/autotest_common.sh@10 -- # set +x 00:04:39.144 ************************************ 00:04:39.144 START TEST accel_compress_verify 00:04:39.144 ************************************ 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.144 21:40:54 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:39.144 21:40:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.W5lyx2 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:39.144 [2024-07-15 21:40:54.245671] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
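Note: the accel_compress_verify run starting above combines -w compress with -y; per the abort message that follows, compression does not support the verify option, so the test again expects a nonzero exit:

# accel.sh@93: compress plus verify (-y) must abort.
! /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y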
00:04:39.144 [2024-07-15 21:40:54.245839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:39.710 EAL: TSC is not safe to use in SMP mode 00:04:39.710 EAL: TSC is not invariant 00:04:39.710 [2024-07-15 21:40:54.787234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.710 [2024-07-15 21:40:54.872591] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:39.710 21:40:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:39.710 [2024-07-15 21:40:54.883382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.710 [2024-07-15 21:40:54.885759] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.969 [2024-07-15 21:40:54.920565] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:39.969 00:04:39.969 Compression does not support the verify option, aborting. 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=211 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=83 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:39.969 00:04:39.969 real 0m0.796s 00:04:39.969 user 0m0.218s 00:04:39.969 sys 0m0.584s 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.969 21:40:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 ************************************ 00:04:39.969 END TEST accel_compress_verify 00:04:39.969 ************************************ 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:39.969 21:40:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 ************************************ 00:04:39.969 START TEST accel_wrong_workload 00:04:39.969 ************************************ 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:04:39.969 21:40:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NRpDxe -t 1 -w foobar 00:04:39.969 Unsupported workload type: foobar 00:04:39.969 [2024-07-15 21:40:55.084790] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:39.969 accel_perf options: 00:04:39.969 [-h help message] 00:04:39.969 [-q queue depth per core] 00:04:39.969 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:39.969 [-T number of threads per core 00:04:39.969 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:39.969 [-t time in seconds] 00:04:39.969 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:39.969 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:39.969 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:39.969 [-l for compress/decompress workloads, name of uncompressed input file 00:04:39.969 [-S for crc32c workload, use this seed value (default 0) 00:04:39.969 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:39.969 [-f for fill workload, use this BYTE value (default 255) 00:04:39.969 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:39.969 [-y verify result if this switch is on] 00:04:39.969 [-a tasks to allocate per core (default: same value as -q)] 00:04:39.969 Can be used to spread operations across a wider range of memory. 
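Note: accel_wrong_workload shows that an unknown -w value is rejected while arguments are parsed ("Unsupported workload type: foobar", raised from spdk_app_parse_args per the app.c:1451 line above), after which the full option help is printed:

# accel.sh@95: unknown workload name, rejected at parse time.
! /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar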
00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:39.969 00:04:39.969 real 0m0.010s 00:04:39.969 user 0m0.010s 00:04:39.969 sys 0m0.000s 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.969 21:40:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 ************************************ 00:04:39.969 END TEST accel_wrong_workload 00:04:39.969 ************************************ 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:39.969 21:40:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:39.969 21:40:55 accel -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 ************************************ 00:04:39.969 START TEST accel_negative_buffers 00:04:39.969 ************************************ 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:39.969 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:04:39.969 21:40:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PpWWVE -t 1 -w xor -y -x -1 00:04:39.969 -x option must be non-negative. 00:04:39.969 accel_perf options: 00:04:39.969 [-h help message] 00:04:39.969 [-q queue depth per core] 00:04:39.969 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:39.969 [-T number of threads per core 00:04:39.969 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:39.969 [-t time in seconds] 00:04:39.969 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:39.969 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:39.969 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:39.969 [-l for compress/decompress workloads, name of uncompressed input file 00:04:39.969 [-S for crc32c workload, use this seed value (default 0) 00:04:39.969 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:39.969 [-f for fill workload, use this BYTE value (default 255) 00:04:39.969 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:39.969 [-y verify result if this switch is on] 00:04:39.969 [-a tasks to allocate per core (default: same value as -q)] 00:04:39.969 Can be used to spread operations across a wider range of memory. 00:04:39.969 [2024-07-15 21:40:55.129404] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:39.970 00:04:39.970 real 0m0.008s 00:04:39.970 user 0m0.006s 00:04:39.970 sys 0m0.002s 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:39.970 21:40:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:39.970 ************************************ 00:04:39.970 END TEST accel_negative_buffers 00:04:39.970 ************************************ 00:04:40.227 21:40:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:40.227 21:40:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:40.227 21:40:55 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:40.227 21:40:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:40.227 21:40:55 accel -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 START TEST accel_crc32c 00:04:40.227 ************************************ 00:04:40.227 21:40:55 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:40.227 21:40:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PGoOSU -t 1 -w crc32c -S 32 -y 00:04:40.227 [2024-07-15 21:40:55.180117] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
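Note: the accel_crc32c run starting above is a positive test: crc32c for one second on 4096-byte buffers (the '4096 bytes' val in the config dump below) with seed 32 (-S, per the help text above) and result verification enabled (-y):

# accel.sh@101: software crc32c, 1 second, seed 32, verify results.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y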
00:04:40.227 [2024-07-15 21:40:55.180447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:40.793 EAL: TSC is not safe to use in SMP mode 00:04:40.793 EAL: TSC is not invariant 00:04:40.793 [2024-07-15 21:40:55.777444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.793 [2024-07-15 21:40:55.865583] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:40.793 21:40:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:40.793 21:40:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:40.793 21:40:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:40.793 21:40:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.793 21:40:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:40.794 [2024-07-15 21:40:55.876316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:40.794 21:40:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.233 
21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.233 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:42.234 21:40:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:42.234 00:04:42.234 real 0m1.864s 00:04:42.234 user 0m1.246s 00:04:42.234 sys 0m0.622s 00:04:42.234 21:40:57 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:42.234 21:40:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:42.234 ************************************ 00:04:42.234 END TEST accel_crc32c 00:04:42.234 ************************************ 00:04:42.234 21:40:57 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:42.234 21:40:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:42.234 21:40:57 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:42.234 21:40:57 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:42.234 21:40:57 accel -- common/autotest_common.sh@10 -- # set +x 00:04:42.234 ************************************ 00:04:42.234 START TEST accel_crc32c_C2 00:04:42.234 ************************************ 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.234 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NmmCYa -t 1 -w crc32c -y -C 2 00:04:42.234 [2024-07-15 21:40:57.085270] 
Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:42.234 [2024-07-15 21:40:57.085500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.492 EAL: TSC is not safe to use in SMP mode 00:04:42.492 EAL: TSC is not invariant 00:04:42.492 [2024-07-15 21:40:57.604561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.751 [2024-07-15 21:40:57.682712] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:42.751 [2024-07-15 21:40:57.693432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:42.751 21:40:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
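The repeated val= / case "$var" in / IFS=: / read -r var val records above are accel.sh consuming accel_perf's echoed configuration one key:value pair at a time; the accel_opc= and accel_module= assignments inside the case arms feed the [[ -n software ]] / [[ -n crc32c ]] checks that close each test. A minimal sketch of that loop, with the case patterns assumed (the real case labels are not visible in this trace, only the resulting assignments):

    # reads accel_perf's key:value output from stdin; patterns are assumptions
    while IFS=: read -r var val; do
        case "$var" in
            *opc*) accel_opc=$val ;;        # trace shows accel_opc=crc32c at accel.sh@23
            *module*) accel_module=$val ;;  # trace shows accel_module=software at accel.sh@22
        esac
    done
    [[ -n $accel_module && -n $accel_opc ]]  # the closing checks seen at accel.sh@27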
00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:43.686 00:04:43.686 real 0m1.773s 00:04:43.686 user 0m1.206s 00:04:43.686 sys 0m0.575s 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:43.686 21:40:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:43.686 ************************************ 00:04:43.686 END TEST accel_crc32c_C2 00:04:43.686 ************************************ 00:04:43.945 21:40:58 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:43.945 21:40:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:43.945 21:40:58 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:43.945 21:40:58 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:43.945 21:40:58 accel -- common/autotest_common.sh@10 -- # set +x 00:04:43.945 ************************************ 00:04:43.945 START TEST accel_copy 00:04:43.945 ************************************ 00:04:43.945 21:40:58 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:04:43.945 21:40:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:43.945 21:40:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:43.945 21:40:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:43.945 21:40:58 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:43.945 21:40:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:43.945 21:40:58 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.QIvZrH -t 1 -w copy -y 00:04:43.945 [2024-07-15 21:40:58.899879] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:43.945 [2024-07-15 21:40:58.900145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:44.513 EAL: TSC is not safe to use in SMP mode 00:04:44.513 EAL: TSC is not invariant 00:04:44.513 [2024-07-15 21:40:59.430191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.513 [2024-07-15 21:40:59.514428] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:44.513 [2024-07-15 21:40:59.525807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.513 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:44.514 21:40:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:45.913 21:41:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:45.913 00:04:45.913 real 0m1.801s 00:04:45.913 user 0m1.233s 00:04:45.913 sys 0m0.572s 00:04:45.913 21:41:00 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:45.913 ************************************ 00:04:45.913 END TEST accel_copy 00:04:45.913 ************************************ 00:04:45.913 21:41:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:45.913 21:41:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:45.913 21:41:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:45.913 21:41:00 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:04:45.913 21:41:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:45.913 21:41:00 accel -- common/autotest_common.sh@10 -- # set +x 00:04:45.913 ************************************ 00:04:45.913 START TEST accel_fill 00:04:45.913 ************************************ 00:04:45.913 21:41:00 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:45.913 21:41:00 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DvnGpU -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:45.913 [2024-07-15 21:41:00.742447] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
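With accel_copy done (real 0m1.801s above), the fill case adds three flags to the common set: run_test passes -t 1 -w fill -f 128 -q 64 -a 64 -y, and the configuration echoed below reflects them as val=0x80 (the 128 fill byte in hex) and the paired val=64 records, plausibly queue depth and buffer allocation; that pairing is an inference, not something the trace states. A sketch of rerunning just this workload against the same build tree, with the temp -c config omitted since no modules are configured here:

    # -f fill byte, -q/-a copied from the logged run_test line
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y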
00:04:45.913 [2024-07-15 21:41:00.742760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:46.172 EAL: TSC is not safe to use in SMP mode 00:04:46.172 EAL: TSC is not invariant 00:04:46.172 [2024-07-15 21:41:01.273386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.431 [2024-07-15 21:41:01.369743] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:46.431 [2024-07-15 21:41:01.380496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:46.431 21:41:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:47.365 21:41:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:47.365 00:04:47.365 real 0m1.803s 00:04:47.365 user 0m1.225s 00:04:47.365 sys 0m0.587s 00:04:47.365 ************************************ 00:04:47.365 END TEST accel_fill 00:04:47.365 ************************************ 00:04:47.365 21:41:02 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:47.365 21:41:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:47.623 21:41:02 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:47.623 21:41:02 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:47.623 21:41:02 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:47.623 21:41:02 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:47.623 21:41:02 accel -- common/autotest_common.sh@10 -- # set +x 00:04:47.623 ************************************ 00:04:47.623 START TEST accel_copy_crc32c 00:04:47.623 ************************************ 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:47.623 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:47.624 21:41:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VpGVe9 -t 1 -w copy_crc32c -y 00:04:47.624 [2024-07-15 21:41:02.580893] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
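Unlike the plain crc32c runs, the copy_crc32c configuration echoed below lists '4096 bytes' twice, consistent with separate source and destination buffers for the fused copy-plus-CRC operation (an inference from the trace, not an explicit statement in it). Rerunning this case by hand, again leaving out the temp -c config:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y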
00:04:47.624 [2024-07-15 21:41:02.581074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:48.210 EAL: TSC is not safe to use in SMP mode 00:04:48.210 EAL: TSC is not invariant 00:04:48.210 [2024-07-15 21:41:03.109477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.210 [2024-07-15 21:41:03.217892] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:48.210 [2024-07-15 21:41:03.229326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:48.210 21:41:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:49.582 00:04:49.582 real 0m1.808s 00:04:49.582 user 0m1.247s 00:04:49.582 sys 0m0.573s 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:49.582 ************************************ 00:04:49.582 END TEST accel_copy_crc32c 00:04:49.582 ************************************ 00:04:49.582 21:41:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:49.582 21:41:04 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:49.582 21:41:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:49.582 21:41:04 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:49.582 21:41:04 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:49.582 21:41:04 accel -- common/autotest_common.sh@10 -- # set +x 
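Each 1-second (-t 1) software run in this section completes in roughly 1.8s of wall time with about 0.6s of system time, the overhead beyond the timed window going to EAL setup and teardown (a reading of the real/user/sys lines, not something the harness asserts). The -C 2 variant that starts next chains two buffers per operation, which is how an 8192-byte value appears alongside the 4096-byte one in its echoed configuration; the -C reading is an assumption:

    # same workload, two chained 4096-byte buffers per the run_test line above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2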
00:04:49.582 ************************************ 00:04:49.582 START TEST accel_copy_crc32c_C2 00:04:49.582 ************************************ 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:49.582 21:41:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3tapV4 -t 1 -w copy_crc32c -y -C 2 00:04:49.582 [2024-07-15 21:41:04.433079] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:49.582 [2024-07-15 21:41:04.433338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:49.841 EAL: TSC is not safe to use in SMP mode 00:04:49.841 EAL: TSC is not invariant 00:04:49.841 [2024-07-15 21:41:04.963917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.099 [2024-07-15 21:41:05.047047] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
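build_accel_config, traced just above, is where a module-specific JSON config would be assembled: accel_json_cfg starts empty, every [[ 0 -gt 0 ]] / [[ -n '' ]] guard fails, and the comma-joined fragments are pretty-printed through jq -r . for the -c argument, which is why every run in this section falls back to the software module; the /tmp//sh-np.* path passed to -c looks like a process-substitution FIFO carrying that JSON (an inference from the sh-np naming). A minimal sketch under those assumptions, where only the IFS join and the jq step are actually visible in the trace and the wrapper object is guessed:

    build_accel_config() {
        accel_json_cfg=()    # per-module JSON fragments; none enabled in this run
        local IFS=,          # comma-join the fragments, as traced at accel.sh@40
        jq -r . <<< "{\"modules\":[${accel_json_cfg[*]}]}"  # wrapper shape is an assumption
    }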
00:04:50.099 [2024-07-15 21:41:05.054856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.099 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:50.100 21:41:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:51.038 00:04:51.038 real 0m1.784s 00:04:51.038 user 0m1.217s 00:04:51.038 sys 0m0.575s 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.038 21:41:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:51.038 ************************************ 00:04:51.038 END TEST accel_copy_crc32c_C2 00:04:51.038 ************************************ 00:04:51.297 21:41:06 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:51.297 21:41:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:51.297 21:41:06 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:51.297 21:41:06 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.297 21:41:06 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.297 ************************************ 00:04:51.297 START TEST accel_dualcast 00:04:51.297 ************************************ 00:04:51.297 21:41:06 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:51.297 21:41:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dC1uHT -t 1 -w dualcast -y 00:04:51.297 [2024-07-15 21:41:06.255242] Starting SPDK 
v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:04:51.297 [2024-07-15 21:41:06.255441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:51.865 EAL: TSC is not safe to use in SMP mode 00:04:51.865 EAL: TSC is not invariant 00:04:51.865 [2024-07-15 21:41:06.785354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.865 [2024-07-15 21:41:06.869884] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:04:51.865 [2024-07-15 21:41:06.877835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 
21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:51.865 21:41:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.267 21:41:08 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.267 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:04:53.268 21:41:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:53.268 00:04:53.268 real 0m1.786s 00:04:53.268 user 0m1.233s 00:04:53.268 sys 0m0.562s 00:04:53.268 21:41:08 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:53.268 ************************************ 00:04:53.268 END TEST accel_dualcast 00:04:53.268 ************************************ 00:04:53.268 21:41:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:04:53.268 21:41:08 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:53.268 21:41:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:53.268 21:41:08 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:53.268 21:41:08 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:53.268 21:41:08 accel -- common/autotest_common.sh@10 -- # set +x 00:04:53.268 ************************************ 00:04:53.268 START TEST accel_compare 00:04:53.268 ************************************ 00:04:53.268 21:41:08 accel.accel_compare -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compare -y 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:53.268 21:41:08 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.YC8C0k -t 1 -w compare -y 00:04:53.268 [2024-07-15 21:41:08.087137] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 
initialization... 00:04:53.268 [2024-07-15 21:41:08.087414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:53.526 EAL: TSC is not safe to use in SMP mode 00:04:53.526 EAL: TSC is not invariant 00:04:53.526 [2024-07-15 21:41:08.629428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.527 [2024-07-15 21:41:08.714520] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:53.785 21:41:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:04:53.785 21:41:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:53.785 21:41:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:53.785 21:41:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:04:53.786 [2024-07-15 21:41:08.723213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 
21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:53.786 21:41:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:04:54.723 21:41:09 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.723 00:04:54.723 real 0m1.801s 00:04:54.723 user 0m1.240s 00:04:54.723 sys 0m0.575s 00:04:54.723 21:41:09 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:54.723 21:41:09 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:04:54.723 ************************************ 00:04:54.723 END TEST accel_compare 00:04:54.723 ************************************ 00:04:54.982 21:41:09 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:54.982 21:41:09 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:54.982 21:41:09 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:04:54.982 21:41:09 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:54.982 21:41:09 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.982 ************************************ 00:04:54.982 START TEST accel_xor 00:04:54.982 ************************************ 00:04:54.982 21:41:09 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:54.982 21:41:09 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.s6GJT3 -t 1 -w xor -y 00:04:54.982 [2024-07-15 21:41:09.929998] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
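Both dualcast and compare above follow the same harness pattern: run_test names the test, accel_test builds the accel config (every -gt/-n check traced here comes out empty, so the software module is used), and accel_perf runs the workload for one second against 4096-byte buffers at a queue depth of 32. A minimal sketch of reproducing one run by hand, assuming a built SPDK tree at the path shown in this log and using only the flags visible in the trace; omitting -c and relying on the default software accel module is an assumption:

    # Sketch: re-run the 'compare' workload roughly the way accel_test does.
    SPDK=/home/vagrant/spdk_repo/spdk   # path taken from the trace above
    # -t 1: run for one second; -w compare: workload; -y: verify the result
    "$SPDK/build/examples/accel_perf" -t 1 -w compare -y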
00:04:54.983 [2024-07-15 21:41:09.930263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:55.550 EAL: TSC is not safe to use in SMP mode 00:04:55.550 EAL: TSC is not invariant 00:04:55.550 [2024-07-15 21:41:10.451639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.550 [2024-07-15 21:41:10.540261] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:55.550 [2024-07-15 21:41:10.551683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:55.550 21:41:10 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.550 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:55.551 21:41:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.927 00:04:56.927 real 0m1.801s 00:04:56.927 user 0m1.224s 00:04:56.927 sys 0m0.584s 00:04:56.927 21:41:11 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:56.927 21:41:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:56.927 ************************************ 00:04:56.927 END TEST accel_xor 00:04:56.927 ************************************ 00:04:56.927 21:41:11 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:56.927 21:41:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:56.927 21:41:11 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:04:56.927 21:41:11 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:56.927 21:41:11 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.927 ************************************ 00:04:56.927 START TEST accel_xor 00:04:56.927 ************************************ 00:04:56.927 21:41:11 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:56.927 21:41:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.EYkUjg -t 1 -w xor -y -x 3 00:04:56.927 [2024-07-15 21:41:11.772376] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
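The suite runs xor twice: the run that just finished traced val=2 (two source buffers), and the rerun now starting passes -x 3 to xor three sources into one destination. A sketch of sweeping the source count the same way, assuming -x also accepts the default count of 2 explicitly:

    # Sketch: sweep xor source counts as this suite does.
    SPDK=/home/vagrant/spdk_repo/spdk   # path from the trace above
    for n in 2 3; do
        # -x sets the number of xor source buffers (3 matches this rerun)
        "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x "$n"
    done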
00:04:56.927 [2024-07-15 21:41:11.772576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:57.186 EAL: TSC is not safe to use in SMP mode 00:04:57.186 EAL: TSC is not invariant 00:04:57.186 [2024-07-15 21:41:12.324148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.445 [2024-07-15 21:41:12.412557] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:57.445 [2024-07-15 21:41:12.419849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.445 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.446 21:41:12 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:57.446 21:41:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.398 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.399 21:41:13 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:58.399 21:41:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.399 00:04:58.399 real 0m1.809s 00:04:58.399 user 0m1.225s 00:04:58.399 sys 0m0.595s 00:04:58.399 21:41:13 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:58.399 21:41:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:58.399 ************************************ 00:04:58.399 END TEST accel_xor 00:04:58.399 ************************************ 00:04:58.656 21:41:13 accel -- common/autotest_common.sh@1136 -- # return 0 00:04:58.656 21:41:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:58.656 21:41:13 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:04:58.656 21:41:13 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:58.656 21:41:13 accel -- common/autotest_common.sh@10 -- # set +x 00:04:58.656 ************************************ 00:04:58.656 START TEST accel_dif_verify 00:04:58.656 ************************************ 00:04:58.656 21:41:13 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:58.656 21:41:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.H14aiJ -t 1 -w dif_verify 00:04:58.656 [2024-07-15 21:41:13.621542] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
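The dif_verify trace that follows carries three sizes: a 4096-byte transfer, a 512-byte protected block, and 8 bytes of metadata per block, i.e. one T10 DIF tuple per block. The geometry those values imply, worked out in shell arithmetic:

    # Buffer geometry implied by the dif_verify parameters traced below.
    xfer=4096   # '4096 bytes' transfer size
    blk=512     # '512 bytes' protected block size
    md=8        # '8 bytes' of DIF metadata per block
    echo "$(( xfer / blk )) blocks/transfer, $(( xfer / blk * md )) DIF bytes/transfer"
    # prints: 8 blocks/transfer, 64 DIF bytes/transfer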
00:04:58.656 [2024-07-15 21:41:13.621773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:59.223 EAL: TSC is not safe to use in SMP mode 00:04:59.223 EAL: TSC is not invariant 00:04:59.223 [2024-07-15 21:41:14.150882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.223 [2024-07-15 21:41:14.240363] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:04:59.223 [2024-07-15 21:41:14.250470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:59.223 21:41:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:00.652 21:41:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.652 00:05:00.652 real 0m1.801s 00:05:00.652 user 0m1.252s 00:05:00.652 sys 0m0.559s 00:05:00.652 21:41:15 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:00.652 21:41:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:00.652 ************************************ 00:05:00.652 END TEST accel_dif_verify 00:05:00.652 ************************************ 00:05:00.652 21:41:15 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:00.652 21:41:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:00.652 21:41:15 accel 
-- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:00.652 21:41:15 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:00.652 21:41:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.652 ************************************ 00:05:00.652 START TEST accel_dif_generate 00:05:00.652 ************************************ 00:05:00.652 21:41:15 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:00.652 21:41:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yi97O6 -t 1 -w dif_generate 00:05:00.652 [2024-07-15 21:41:15.461884] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:00.652 [2024-07-15 21:41:15.462029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:00.912 EAL: TSC is not safe to use in SMP mode 00:05:00.912 EAL: TSC is not invariant 00:05:00.912 [2024-07-15 21:41:15.978587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.912 [2024-07-15 21:41:16.060217] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
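Five tests so far have printed a real/user/sys triple just before their END TEST banner (from dualcast at 0m1.786s through dif_verify at 0m1.801s of wall-clock time). A sketch for summarizing those timings from a saved copy of this console output, assuming the raw one-record-per-line form of the log rather than the wrapped rendering here, and a hypothetical file name console.log:

    # Print 'test_name: wall_time' for each completed accel test.
    awk '$2 == "real"                { t = $3 }          # remember the latest timing
         $2 == "END" && $3 == "TEST" { print $4 ": " t }' console.log
    # e.g. accel_dualcast: 0m1.786s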
00:05:00.912 [2024-07-15 21:41:16.071137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.912 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:00.913 21:41:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.289 21:41:17 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.289 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:02.290 21:41:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.290 00:05:02.290 real 0m1.766s 00:05:02.290 user 0m1.226s 00:05:02.290 sys 0m0.550s 00:05:02.290 21:41:17 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:02.290 21:41:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:02.290 ************************************ 00:05:02.290 END TEST accel_dif_generate 00:05:02.290 ************************************ 00:05:02.290 21:41:17 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:02.290 21:41:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:02.290 21:41:17 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:02.290 21:41:17 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:02.290 21:41:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.290 ************************************ 00:05:02.290 START TEST accel_dif_generate_copy 00:05:02.290 ************************************ 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:02.290 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dHWj5I -t 1 -w dif_generate_copy 00:05:02.290 [2024-07-15 21:41:17.269571] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:02.290 [2024-07-15 21:41:17.269739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:02.856 EAL: TSC is not safe to use in SMP mode 00:05:02.856 EAL: TSC is not invariant 00:05:02.856 [2024-07-15 21:41:17.817837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.856 [2024-07-15 21:41:17.902512] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:02.856 [2024-07-15 21:41:17.913061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.856 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.857 21:41:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.231 00:05:04.231 real 0m1.811s 00:05:04.231 user 0m1.251s 00:05:04.231 sys 0m0.568s 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:04.231 
************************************ 00:05:04.231 END TEST accel_dif_generate_copy 00:05:04.231 ************************************ 00:05:04.231 21:41:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 21:41:19 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:04.231 21:41:19 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:04.231 21:41:19 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.231 21:41:19 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:05:04.231 21:41:19 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:04.231 21:41:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.231 ************************************ 00:05:04.231 START TEST accel_comp 00:05:04.231 ************************************ 00:05:04.231 21:41:19 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.231 21:41:19 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.vmcUqI -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.231 [2024-07-15 21:41:19.121869] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:04.231 [2024-07-15 21:41:19.122099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:04.490 EAL: TSC is not safe to use in SMP mode 00:05:04.490 EAL: TSC is not invariant 00:05:04.490 [2024-07-15 21:41:19.634440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.749 [2024-07-15 21:41:19.716588] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
00:05:04.749 [2024-07-15 21:41:19.726085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.749 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:04.750 21:41:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 
21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:06.126 21:41:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.126 00:05:06.126 real 0m1.772s 00:05:06.126 user 0m1.215s 00:05:06.126 sys 0m0.561s 00:05:06.126 21:41:20 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:06.126 21:41:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:06.126 ************************************ 00:05:06.126 END TEST accel_comp 00:05:06.126 ************************************ 00:05:06.126 21:41:20 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:06.127 21:41:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:06.127 21:41:20 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:05:06.127 21:41:20 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:06.127 21:41:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.127 ************************************ 00:05:06.127 START TEST accel_decomp 00:05:06.127 ************************************ 00:05:06.127 21:41:20 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.127 21:41:20 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.y6V6ou -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:06.127 [2024-07-15 21:41:20.936765] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:06.127 [2024-07-15 21:41:20.936947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:06.385 EAL: TSC is not safe to use in SMP mode 00:05:06.385 EAL: TSC is not invariant 00:05:06.385 [2024-07-15 21:41:21.487879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.385 [2024-07-15 21:41:21.572746] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:06.736 [2024-07-15 21:41:21.581859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.736 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:06.737 21:41:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:07.672 21:41:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.672 00:05:07.672 real 0m1.807s 00:05:07.672 user 0m1.227s 00:05:07.672 sys 0m0.590s 00:05:07.672 21:41:22 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:07.672 21:41:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 END TEST accel_decomp 00:05:07.672 ************************************ 00:05:07.672 21:41:22 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:07.672 21:41:22 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:07.672 21:41:22 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:07.672 21:41:22 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:07.672 21:41:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.672 ************************************ 00:05:07.672 START TEST accel_decomp_full 00:05:07.672 ************************************ 00:05:07.672 21:41:22 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:07.672 21:41:22 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.APIbZO -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:07.672 [2024-07-15 21:41:22.790215] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:07.672 [2024-07-15 21:41:22.790480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:08.238 EAL: TSC is not safe to use in SMP mode 00:05:08.238 EAL: TSC is not invariant 00:05:08.238 [2024-07-15 21:41:23.327085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.238 [2024-07-15 21:41:23.409050] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:08.238 [2024-07-15 21:41:23.418062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.238 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:08.496 21:41:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:09.428 21:41:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.428 00:05:09.428 real 0m1.796s 00:05:09.428 user 0m1.224s 00:05:09.428 sys 0m0.585s 00:05:09.428 21:41:24 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:09.428 ************************************ 00:05:09.428 END TEST accel_decomp_full 00:05:09.428 ************************************ 00:05:09.428 21:41:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:09.428 21:41:24 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:09.428 21:41:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:05:09.428 21:41:24 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:09.428 21:41:24 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:09.428 21:41:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.686 ************************************ 00:05:09.686 START TEST accel_decomp_mcore 00:05:09.686 ************************************ 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:09.686 21:41:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.B5EniY -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:09.686 [2024-07-15 21:41:24.626194] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:09.686 [2024-07-15 21:41:24.626387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:10.295 EAL: TSC is not safe to use in SMP mode 00:05:10.295 EAL: TSC is not invariant 00:05:10.295 [2024-07-15 21:41:25.159839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.295 [2024-07-15 21:41:25.245031] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:10.295 [2024-07-15 21:41:25.245087] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:10.295 [2024-07-15 21:41:25.245097] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:10.295 [2024-07-15 21:41:25.245104] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:10.295 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:10.295 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.295 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.295 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:05:10.296 [2024-07-15 21:41:25.257838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.296 [2024-07-15 21:41:25.257728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.296 [2024-07-15 21:41:25.257794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.296 [2024-07-15 21:41:25.257830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:10.296 21:41:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.233 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.234 00:05:11.234 real 0m1.795s 00:05:11.234 user 0m4.335s 00:05:11.234 sys 0m0.592s 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:11.234 21:41:26 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:11.234 ************************************ 00:05:11.234 END TEST accel_decomp_mcore 00:05:11.234 ************************************ 00:05:11.491 21:41:26 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:11.491 21:41:26 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:11.491 21:41:26 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:05:11.491 21:41:26 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:11.491 21:41:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 START TEST accel_decomp_full_mcore 00:05:11.491 ************************************ 00:05:11.491 21:41:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:11.491 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:11.491 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:11.491 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:11.492 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:11.492 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:11.492 21:41:26 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8GCwUc -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:11.492 [2024-07-15 21:41:26.459611] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:11.492 [2024-07-15 21:41:26.459816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:12.058 EAL: TSC is not safe to use in SMP mode 00:05:12.058 EAL: TSC is not invariant 00:05:12.058 [2024-07-15 21:41:26.988564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.058 [2024-07-15 21:41:27.075992] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:12.058 [2024-07-15 21:41:27.076063] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:12.058 [2024-07-15 21:41:27.076089] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:12.058 [2024-07-15 21:41:27.076097] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
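The xtrace above shows the whole driver chain for these decompress tests: run_test wraps accel_test, build_accel_config assembles a JSON module config (rendered by jq -r . into the temp file passed via -c), and accel_perf does the actual work. A minimal by-hand reproduction of the logged invocation, assuming the same SPDK build tree and flag meanings inferred from the log, would be roughly:

    # 1-second software decompress of test/accel/bib across the 4-core mask 0xf,
    # verifying output (-y); -o 0 appears to defer the transfer size to the
    # test, which the xtrace later reports as '111250 bytes'
    ./build/examples/accel_perf -c /tmp/accel.json \
        -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf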
00:05:12.058 [2024-07-15 21:41:27.086388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.058 [2024-07-15 21:41:27.086656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.058 [2024-07-15 21:41:27.086541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.058 [2024-07-15 21:41:27.086650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.058 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:12.059 21:41:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 
21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.431 00:05:13.431 real 0m1.805s 00:05:13.431 user 0m4.415s 
00:05:13.431 sys 0m0.549s 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:13.431 21:41:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:13.431 ************************************ 00:05:13.431 END TEST accel_decomp_full_mcore 00:05:13.431 ************************************ 00:05:13.431 21:41:28 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:13.431 21:41:28 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:13.431 21:41:28 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:13.431 21:41:28 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:13.431 21:41:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.431 ************************************ 00:05:13.431 START TEST accel_decomp_mthread 00:05:13.431 ************************************ 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:13.431 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dfYox0 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:13.431 [2024-07-15 21:41:28.302893] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:13.431 [2024-07-15 21:41:28.303097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:13.689 EAL: TSC is not safe to use in SMP mode 00:05:13.689 EAL: TSC is not invariant 00:05:13.689 [2024-07-15 21:41:28.875030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.948 [2024-07-15 21:41:28.973434] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
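One knob changes in the accel_decomp_mthread run now starting: the mcore tests above spread reactors over four cores with -m 0xf, while the mthread variant stays on a single core (-c 0x1 in its EAL parameters) and passes -T 2 instead. Side by side, as a sketch with meanings inferred from the log:

    # multi-core: one reactor per core in the 0xf mask (cores 0-3)
    accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf
    # multi-thread: default single core, two worker threads via -T 2
    accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2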
00:05:13.948 [2024-07-15 21:41:28.985204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:13.948 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:13.949 21:41:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.341 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.342 00:05:15.342 real 0m1.853s 00:05:15.342 user 0m1.250s 00:05:15.342 sys 0m0.613s 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:15.342 ************************************ 00:05:15.342 END TEST accel_decomp_mthread 00:05:15.342 ************************************ 00:05:15.342 21:41:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:15.342 21:41:30 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:15.342 21:41:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:15.342 21:41:30 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:05:15.342 21:41:30 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:15.342 21:41:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.342 ************************************ 00:05:15.342 START TEST accel_decomp_full_mthread 00:05:15.342 ************************************ 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:15.342 21:41:30 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:15.342 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DHMkd2 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:15.342 [2024-07-15 21:41:30.204998] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:15.342 [2024-07-15 21:41:30.205230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:15.601 EAL: TSC is not safe to use in SMP mode 00:05:15.601 EAL: TSC is not invariant 00:05:15.601 [2024-07-15 21:41:30.735626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.858 [2024-07-15 21:41:30.819522] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
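The IFS=: / read -r var val / case "$var" triplets that dominate this log are accel.sh walking accel_perf's key:value report one line at a time; the assignments visible at accel.sh@22 and @23 (accel_module=software, accel_opc=decompress) later feed the [[ -n software ]] and [[ -n decompress ]] checks at accel.sh@27. A rough sketch of the pattern (hypothetical input file and key names, not the verbatim accel.sh source):

    # parse "key:value" report lines into the two variables the checks need
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=$val ;;   # e.g. software
            *opc*) accel_opc=$val ;;         # e.g. decompress
        esac
    done < perf_report.txt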
00:05:15.858 [2024-07-15 21:41:30.826573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:15.858 21:41:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.231 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.232 00:05:17.232 real 0m1.811s 00:05:17.232 user 0m1.234s 00:05:17.232 sys 0m0.585s 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.232 21:41:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:17.232 ************************************ 00:05:17.232 END TEST accel_decomp_full_mthread 00:05:17.232 ************************************ 00:05:17.232 21:41:32 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:17.232 21:41:32 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:17.232 21:41:32 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.4q3CqL 00:05:17.232 21:41:32 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:17.232 21:41:32 accel -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.232 21:41:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.232 ************************************ 00:05:17.232 START TEST accel_dif_functional_tests 00:05:17.232 ************************************ 00:05:17.232 21:41:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.4q3CqL 00:05:17.232 [2024-07-15 21:41:32.057277] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:17.232 [2024-07-15 21:41:32.057452] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:17.491 EAL: TSC is not safe to use in SMP mode 00:05:17.491 EAL: TSC is not invariant 00:05:17.491 [2024-07-15 21:41:32.575958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.491 [2024-07-15 21:41:32.671373] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:17.491 [2024-07-15 21:41:32.671446] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:17.491 [2024-07-15 21:41:32.671458] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:17.491 21:41:32 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:17.491 21:41:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.491 21:41:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.491 21:41:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.491 21:41:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.491 21:41:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.491 21:41:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:17.491 21:41:32 accel -- accel/accel.sh@41 -- # jq -r . 
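The CUnit suite that follows exercises T10 DIF (Data Integrity Field) checking: the guard CRC, the application tag, and the reference tag. The *ERROR* lines interleaved with the passed results are expected, because the negative tests corrupt one field on purpose and assert that verification flags the mismatch (e.g. Guard: Expected=5a5a, Actual=7867). Reduced to its core, the standalone invocation logged above is just:

    # run the DIF functional tests directly; the harness generates the -c
    # config as a temp file (/tmp//sh-np.4q3CqL above), named here only
    # as a placeholder
    ./test/accel/dif/dif -c /tmp/dif.json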
00:05:17.750 [2024-07-15 21:41:32.682867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.750 [2024-07-15 21:41:32.682795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.750 [2024-07-15 21:41:32.682859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.750 00:05:17.750 00:05:17.750 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.750 http://cunit.sourceforge.net/ 00:05:17.750 00:05:17.750 00:05:17.750 Suite: accel_dif 00:05:17.750 Test: verify: DIF generated, GUARD check ...passed 00:05:17.750 Test: verify: DIF generated, APPTAG check ...passed 00:05:17.750 Test: verify: DIF generated, REFTAG check ...passed 00:05:17.750 Test: verify: DIF not generated, GUARD check ...passed 00:05:17.750 Test: verify: DIF not generated, APPTAG check ...passed 00:05:17.750 Test: verify: DIF not generated, REFTAG check ...passed 00:05:17.750 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:17.750 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 21:41:32.700172] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:17.750 [2024-07-15 21:41:32.700237] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:17.750 [2024-07-15 21:41:32.700265] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:17.750 [2024-07-15 21:41:32.700313] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:17.750 passed 00:05:17.750 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:17.750 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:17.750 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:17.750 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:17.750 Test: verify copy: DIF generated, GUARD check ...passed 00:05:17.750 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:17.750 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:17.750 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:17.750 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:17.750 Test: verify copy: DIF not generated, REFTAG check ...passed 00:05:17.750 Test: generate copy: DIF generated, GUARD check ...passed 00:05:17.750 Test: generate copy: DIF generated, APTTAG check ...[2024-07-15 21:41:32.700386] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:17.750 [2024-07-15 21:41:32.700472] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:17.750 [2024-07-15 21:41:32.700499] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:17.750 [2024-07-15 21:41:32.700524] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:17.750 passed 00:05:17.751 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:17.751 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:17.751 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:17.751 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:17.751 Test: generate copy: iovecs-len validate ...passed 00:05:17.751 Test: generate copy: buffer alignment validate ...passed 00:05:17.751 00:05:17.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.751 suites 
1 1 n/a 0 0 00:05:17.751 tests 26 26 26 0 0 00:05:17.751 asserts 115 115 115 0 n/a 00:05:17.751 00:05:17.751 Elapsed time = 0.000 seconds 00:05:17.751 [2024-07-15 21:41:32.700655] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:17.751 00:05:17.751 real 0m0.829s 00:05:17.751 user 0m0.400s 00:05:17.751 sys 0m0.579s 00:05:17.751 21:41:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.751 21:41:32 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:17.751 ************************************ 00:05:17.751 END TEST accel_dif_functional_tests 00:05:17.751 ************************************ 00:05:17.751 21:41:32 accel -- common/autotest_common.sh@1136 -- # return 0 00:05:17.751 00:05:17.751 real 0m41.147s 00:05:17.751 user 0m33.627s 00:05:17.751 sys 0m14.532s 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:17.751 21:41:32 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.751 ************************************ 00:05:17.751 END TEST accel 00:05:17.751 ************************************ 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:17.751 21:41:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:17.751 21:41:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
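The accel_rpc test that starts next is essentially RPC choreography: spdk_tgt is launched with --wait-for-rpc, the copy opcode is assigned while initialization is parked, and the assignment is read back after init completes. Condensed from the commands visible in the log (rpc.py talks to the /var/tmp/spdk.sock socket named in the waitforlisten line):

    ./build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py accel_assign_opc -o copy -m software    # pre-init assignment
    scripts/rpc.py framework_start_init                    # finish subsystem init
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expect: software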
00:05:18.009 21:41:32 -- common/autotest_common.sh@1136 -- # return 0 00:05:18.009 21:41:32 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:18.009 21:41:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:18.009 21:41:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:18.009 21:41:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.009 ************************************ 00:05:18.009 START TEST accel_rpc 00:05:18.009 ************************************ 00:05:18.009 21:41:32 accel_rpc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:18.009 * Looking for test storage... 00:05:18.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:18.009 21:41:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:18.009 21:41:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47488 00:05:18.009 21:41:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:18.009 21:41:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47488 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 47488 ']' 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:18.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:18.009 21:41:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.009 [2024-07-15 21:41:33.104111] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:18.009 [2024-07-15 21:41:33.104252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:18.576 EAL: TSC is not safe to use in SMP mode 00:05:18.576 EAL: TSC is not invariant 00:05:18.576 [2024-07-15 21:41:33.615505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.576 [2024-07-15 21:41:33.702845] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:18.576 [2024-07-15 21:41:33.704972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 ************************************ 00:05:19.141 START TEST accel_assign_opcode 00:05:19.141 ************************************ 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 [2024-07-15 21:41:34.165288] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 [2024-07-15 21:41:34.173273] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:19.141 software 00:05:19.141 00:05:19.141 real 0m0.068s 00:05:19.141 user 0m0.017s 00:05:19.141 sys 0m0.001s 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:05:19.141 ************************************ 00:05:19.141 END TEST accel_assign_opcode 00:05:19.141 ************************************ 00:05:19.141 21:41:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:19.141 21:41:34 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47488 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 47488 ']' 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 47488 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@950 -- # ps -c -o command 47488 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@950 -- # tail -1 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:05:19.141 killing process with pid 47488 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 47488' 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@961 -- # kill 47488 00:05:19.141 21:41:34 accel_rpc -- common/autotest_common.sh@966 -- # wait 47488 00:05:19.399 00:05:19.399 real 0m1.567s 00:05:19.399 user 0m1.446s 00:05:19.400 sys 0m0.768s 00:05:19.400 21:41:34 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:19.400 21:41:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.400 ************************************ 00:05:19.400 END TEST accel_rpc 00:05:19.400 ************************************ 00:05:19.400 21:41:34 -- common/autotest_common.sh@1136 -- # return 0 00:05:19.400 21:41:34 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:19.400 21:41:34 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:19.400 21:41:34 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:19.400 21:41:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.400 ************************************ 00:05:19.400 START TEST app_cmdline 00:05:19.400 ************************************ 00:05:19.400 21:41:34 app_cmdline -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:19.680 * Looking for test storage... 00:05:19.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:19.681 21:41:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:19.681 21:41:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47566 00:05:19.681 21:41:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47566 00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 47566 ']' 00:05:19.681 21:41:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:19.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
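The app_cmdline run launched above restricts the target to an RPC allowlist; any method outside it fails with JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats probe further down demonstrates. A sketch of the same behavior, using the paths from the trace:

    # Only two RPC methods are permitted for this target instance.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    ./scripts/rpc.py spdk_get_version         # allowed: returns the version object
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly those two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: 'Method not found' (-32601)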
00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:19.681 21:41:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.681 [2024-07-15 21:41:34.700344] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:19.681 [2024-07-15 21:41:34.700485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:20.247 EAL: TSC is not safe to use in SMP mode 00:05:20.247 EAL: TSC is not invariant 00:05:20.247 [2024-07-15 21:41:35.205303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.247 [2024-07-15 21:41:35.289440] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:20.247 [2024-07-15 21:41:35.291729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:20.813 { 00:05:20.813 "version": "SPDK v24.09-pre git sha1 a83ad116a", 00:05:20.813 "fields": { 00:05:20.813 "major": 24, 00:05:20.813 "minor": 9, 00:05:20.813 "patch": 0, 00:05:20.813 "suffix": "-pre", 00:05:20.813 "commit": "a83ad116a" 00:05:20.813 } 00:05:20.813 } 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:20.813 21:41:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:20.813 21:41:35 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@634 -- # case 
"$(type -t "$arg")" in 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:20.814 21:41:35 app_cmdline -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.380 request: 00:05:21.380 { 00:05:21.380 "method": "env_dpdk_get_mem_stats", 00:05:21.380 "req_id": 1 00:05:21.380 } 00:05:21.380 Got JSON-RPC error response 00:05:21.380 response: 00:05:21.380 { 00:05:21.380 "code": -32601, 00:05:21.380 "message": "Method not found" 00:05:21.380 } 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:21.380 21:41:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47566 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 47566 ']' 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 47566 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@950 -- # tail -1 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@950 -- # ps -c -o command 47566 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:05:21.380 killing process with pid 47566 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 47566' 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@961 -- # kill 47566 00:05:21.380 21:41:36 app_cmdline -- common/autotest_common.sh@966 -- # wait 47566 00:05:21.639 00:05:21.639 real 0m2.015s 00:05:21.639 user 0m2.446s 00:05:21.639 sys 0m0.695s 00:05:21.639 21:41:36 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:21.639 ************************************ 00:05:21.639 END TEST app_cmdline 00:05:21.639 ************************************ 00:05:21.639 21:41:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.639 21:41:36 -- common/autotest_common.sh@1136 -- # return 0 00:05:21.639 21:41:36 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:21.639 21:41:36 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.639 21:41:36 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.639 21:41:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.639 ************************************ 00:05:21.639 START TEST version 00:05:21.639 ************************************ 00:05:21.639 21:41:36 version -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:21.639 * Looking for test storage... 
00:05:21.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:21.639 21:41:36 version -- app/version.sh@17 -- # get_header_version major 00:05:21.639 21:41:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # cut -f2 00:05:21.639 21:41:36 version -- app/version.sh@17 -- # major=24 00:05:21.639 21:41:36 version -- app/version.sh@18 -- # get_header_version minor 00:05:21.639 21:41:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # cut -f2 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.639 21:41:36 version -- app/version.sh@18 -- # minor=9 00:05:21.639 21:41:36 version -- app/version.sh@19 -- # get_header_version patch 00:05:21.639 21:41:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # cut -f2 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.639 21:41:36 version -- app/version.sh@19 -- # patch=0 00:05:21.639 21:41:36 version -- app/version.sh@20 -- # get_header_version suffix 00:05:21.639 21:41:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # cut -f2 00:05:21.639 21:41:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.639 21:41:36 version -- app/version.sh@20 -- # suffix=-pre 00:05:21.639 21:41:36 version -- app/version.sh@22 -- # version=24.9 00:05:21.639 21:41:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:21.639 21:41:36 version -- app/version.sh@28 -- # version=24.9rc0 00:05:21.639 21:41:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:21.639 21:41:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:21.639 21:41:36 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:21.639 21:41:36 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:21.639 00:05:21.639 real 0m0.184s 00:05:21.639 user 0m0.116s 00:05:21.639 sys 0m0.144s 00:05:21.639 21:41:36 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:21.639 21:41:36 version -- common/autotest_common.sh@10 -- # set +x 00:05:21.639 ************************************ 00:05:21.639 END TEST version 00:05:21.639 ************************************ 00:05:21.898 21:41:36 -- common/autotest_common.sh@1136 -- # return 0 00:05:21.898 21:41:36 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:21.898 21:41:36 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:21.898 21:41:36 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.898 21:41:36 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.898 21:41:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.898 ************************************ 00:05:21.898 START TEST blockdev_general 00:05:21.898 
************************************ 00:05:21.898 21:41:36 blockdev_general -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:21.898 * Looking for test storage... 00:05:21.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:21.898 21:41:37 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:05:21.898 21:41:37 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47701 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47701 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@823 -- # '[' -z 47701 ']' 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:21.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
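Before launching the target, blockdev.sh configures itself as traced above: on this FreeBSD host it reserves 2048 MiB up front, and because the test type is plain bdev it starts spdk_tgt paused so the bdev tree can be built entirely over RPC. A condensed sketch of that branch logic (the Linux value of PRE_RESERVED_MEM does not appear in this run and is omitted):

    # FreeBSD path taken here: the '[' FreeBSD = Linux ']' test fails.
    [[ $(uname -s) != Linux ]] && PRE_RESERVED_MEM=2048

    test_type=bdev
    [[ $test_type == bdev ]] && wait_for_rpc=--wait-for-rpc

    # Matches the spdk_tgt invocation recorded just below.
    ./build/bin/spdk_tgt '' $wait_for_rpc &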
00:05:21.899 21:41:37 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:21.899 21:41:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 [2024-07-15 21:41:37.019119] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:21.899 [2024-07-15 21:41:37.019385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:22.467 EAL: TSC is not safe to use in SMP mode 00:05:22.467 EAL: TSC is not invariant 00:05:22.467 [2024-07-15 21:41:37.532198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.467 [2024-07-15 21:41:37.628285] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:22.467 [2024-07-15 21:41:37.630738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.034 21:41:38 blockdev_general -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:23.034 21:41:38 blockdev_general -- common/autotest_common.sh@856 -- # return 0 00:05:23.034 21:41:38 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:23.034 21:41:38 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:23.034 21:41:38 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:23.034 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.034 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.034 [2024-07-15 21:41:38.130541] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:23.034 [2024-07-15 21:41:38.130604] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:23.034 00:05:23.034 [2024-07-15 21:41:38.138533] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:23.034 [2024-07-15 21:41:38.138583] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:23.034 00:05:23.034 Malloc0 00:05:23.034 Malloc1 00:05:23.034 Malloc2 00:05:23.034 Malloc3 00:05:23.034 Malloc4 00:05:23.034 Malloc5 00:05:23.034 Malloc6 00:05:23.034 Malloc7 00:05:23.034 Malloc8 00:05:23.329 Malloc9 00:05:23.329 [2024-07-15 21:41:38.226539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:23.329 [2024-07-15 21:41:38.226591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.330 [2024-07-15 21:41:38.226623] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15fd5cc3a980 00:05:23.330 [2024-07-15 21:41:38.226632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.330 [2024-07-15 21:41:38.226973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.330 [2024-07-15 21:41:38.226997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:23.330 TestPT 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:23.330 5000+0 records in 00:05:23.330 5000+0 records out 00:05:23.330 10240000 bytes transferred in 0.023137 secs (442583634 bytes/sec) 00:05:23.330 
21:41:38 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.330 AIO0 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:23.330 21:41:38 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.330 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.609 21:41:38 blockdev_general -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:23.609 21:41:38 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:23.609 21:41:38 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:23.610 21:41:38 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3db57840-5e69-4c5a-9292-e6c2071d8d9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3db57840-5e69-4c5a-9292-e6c2071d8d9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2170fa05-1ec8-a05a-b9fd-232e1c5249cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2170fa05-1ec8-a05a-b9fd-232e1c5249cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aeccb26b-24fe-1956-9cdb-120dc3a48cd4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aeccb26b-24fe-1956-9cdb-120dc3a48cd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2a4331c9-4eab-2c55-a82b-d552d35ec239"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2a4331c9-4eab-2c55-a82b-d552d35ec239",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6b529c44-8499-ee56-b80c-8341ff9772c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b529c44-8499-ee56-b80c-8341ff9772c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e999e241-2ce6-e757-954e-5cfe14a9db6e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e999e241-2ce6-e757-954e-5cfe14a9db6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "556285cb-9196-3358-aef8-362aabf9da42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "556285cb-9196-3358-aef8-362aabf9da42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c5833a1f-83af-4f5a-b246-bfbc2de5cc85"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5833a1f-83af-4f5a-b246-bfbc2de5cc85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "6ef14009-b1d6-c155-82fd-85cdbc0f64d7"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ef14009-b1d6-c155-82fd-85cdbc0f64d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "03f06adc-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "03e7d33d-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "03e90baf-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "03f197dd-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "03ea443b-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "03eb7cb9-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "03f2d08b-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "03ecb535-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "03ededb3-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "03fac0c9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "03fac0c9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:23.610 21:41:38 blockdev_general -- 
bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:05:23.610 21:41:38 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:05:23.610 21:41:38 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:23.610 21:41:38 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47701 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@942 -- # '[' -z 47701 ']' 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@946 -- # kill -0 47701 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@947 -- # uname 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@950 -- # ps -c -o command 47701 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@950 -- # tail -1 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:05:23.610 killing process with pid 47701 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@960 -- # echo 'killing process with pid 47701' 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@961 -- # kill 47701 00:05:23.610 21:41:38 blockdev_general -- common/autotest_common.sh@966 -- # wait 47701 00:05:23.870 21:41:38 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:23.870 21:41:38 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:23.870 21:41:38 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:05:23.870 21:41:38 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:23.870 21:41:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:23.870 ************************************ 00:05:23.870 START TEST bdev_hello_world 00:05:23.870 ************************************ 00:05:23.870 21:41:38 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:23.870 [2024-07-15 21:41:38.918346] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:23.870 [2024-07-15 21:41:38.918493] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:24.438 EAL: TSC is not safe to use in SMP mode 00:05:24.438 EAL: TSC is not invariant 00:05:24.438 [2024-07-15 21:41:39.427548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.438 [2024-07-15 21:41:39.508956] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
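The bdev_hello_world stage beginning here reduces to one invocation of the prebuilt example binary, pointed at the JSON bdev configuration and at Malloc0 by name (paths abbreviated relative to the spdk_repo checkout in the trace):

    # Writes 'Hello World!' to Malloc0 through the bdev layer, reads it
    # back, and stops; the hello_bdev.c NOTICE lines that follow are its
    # expected output.
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0 ''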
00:05:24.438 [2024-07-15 21:41:39.511087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.438 [2024-07-15 21:41:39.569454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:24.438 [2024-07-15 21:41:39.569535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:24.438 [2024-07-15 21:41:39.577427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:24.438 [2024-07-15 21:41:39.577489] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:24.438 [2024-07-15 21:41:39.585444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:24.438 [2024-07-15 21:41:39.585493] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:24.438 [2024-07-15 21:41:39.585518] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:24.697 [2024-07-15 21:41:39.633467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:24.697 [2024-07-15 21:41:39.633545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.697 [2024-07-15 21:41:39.633574] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x925c2436800 00:05:24.697 [2024-07-15 21:41:39.633582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.697 [2024-07-15 21:41:39.633972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.697 [2024-07-15 21:41:39.633993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:24.697 [2024-07-15 21:41:39.733572] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:24.697 [2024-07-15 21:41:39.733624] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:24.697 [2024-07-15 21:41:39.733637] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:24.697 [2024-07-15 21:41:39.733651] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:24.697 [2024-07-15 21:41:39.733664] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:24.697 [2024-07-15 21:41:39.733673] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:24.697 [2024-07-15 21:41:39.733684] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:05:24.697 00:05:24.697 [2024-07-15 21:41:39.733700] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:24.955 00:05:24.955 real 0m1.049s 00:05:24.955 user 0m0.497s 00:05:24.955 sys 0m0.550s 00:05:24.955 ************************************ 00:05:24.955 END TEST bdev_hello_world 00:05:24.955 ************************************ 00:05:24.955 21:41:39 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:24.955 21:41:39 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:24.955 21:41:39 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:05:24.955 21:41:39 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:24.955 21:41:39 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:05:24.955 21:41:39 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:24.955 21:41:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:24.955 ************************************ 00:05:24.955 START TEST bdev_bounds 00:05:24.955 ************************************ 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@1117 -- # bdev_bounds '' 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47753 00:05:24.955 Process bdevio pid: 47753 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47753' 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47753 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@823 -- # '[' -z 47753 ']' 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:24.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:24.955 21:41:40 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:24.955 [2024-07-15 21:41:40.015489] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:24.955 [2024-07-15 21:41:40.015654] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:25.521 EAL: TSC is not safe to use in SMP mode 00:05:25.521 EAL: TSC is not invariant 00:05:25.521 [2024-07-15 21:41:40.565752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.521 [2024-07-15 21:41:40.647220] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:25.521 [2024-07-15 21:41:40.647294] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
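bdev_bounds drives the bdevio application across three reactors (-c 0x7 in the EAL parameters above) with 2048 MiB reserved, then triggers the suites over RPC; condensed from the two commands in the trace:

    # Start bdevio waiting for the RPC trigger (-w), then run every suite.
    ./test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json '' &
    ./test/bdev/bdevio/tests.py perform_tests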
00:05:25.521 [2024-07-15 21:41:40.647319] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:25.521 [2024-07-15 21:41:40.650662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.521 [2024-07-15 21:41:40.650607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.521 [2024-07-15 21:41:40.650656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.521 [2024-07-15 21:41:40.708517] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:25.521 [2024-07-15 21:41:40.708577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:25.780 [2024-07-15 21:41:40.716503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:25.780 [2024-07-15 21:41:40.716535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:25.780 [2024-07-15 21:41:40.724521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.780 [2024-07-15 21:41:40.724550] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:25.780 [2024-07-15 21:41:40.724559] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:25.780 [2024-07-15 21:41:40.772524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.780 [2024-07-15 21:41:40.772602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.780 [2024-07-15 21:41:40.772613] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1363b1236800 00:05:25.780 [2024-07-15 21:41:40.772621] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.780 [2024-07-15 21:41:40.772991] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.780 [2024-07-15 21:41:40.773018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:26.039 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:26.039 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # return 0 00:05:26.039 21:41:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:26.039 I/O targets: 00:05:26.039 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:26.039 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:26.039 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:26.039 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:26.039 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:26.039 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:26.039 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:26.039 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:26.039 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:26.039 00:05:26.039 00:05:26.039 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.039 http://cunit.sourceforge.net/ 00:05:26.039 00:05:26.039 00:05:26.039 Suite: bdevio tests on: 
AIO0 00:05:26.039 Test: blockdev write read block ...passed 00:05:26.039 Test: blockdev write zeroes read block ...passed 00:05:26.039 Test: blockdev write zeroes read no split ...passed 00:05:26.039 Test: blockdev write zeroes read split ...passed 00:05:26.299 Test: blockdev write zeroes read split partial ...passed 00:05:26.299 Test: blockdev reset ...passed 00:05:26.299 Test: blockdev write read 8 blocks ...passed 00:05:26.299 Test: blockdev write read size > 128k ...passed 00:05:26.299 Test: blockdev write read invalid size ...passed 00:05:26.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.299 Test: blockdev write read max offset ...passed 00:05:26.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.299 Test: blockdev writev readv 8 blocks ...passed 00:05:26.299 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.299 Test: blockdev writev readv block ...passed 00:05:26.299 Test: blockdev writev readv size > 128k ...passed 00:05:26.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.299 Test: blockdev comparev and writev ...passed 00:05:26.299 Test: blockdev nvme passthru rw ...passed 00:05:26.299 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.299 Test: blockdev nvme admin passthru ...passed 00:05:26.299 Test: blockdev copy ...passed 00:05:26.299 Suite: bdevio tests on: raid1 00:05:26.299 Test: blockdev write read block ...passed 00:05:26.299 Test: blockdev write zeroes read block ...passed 00:05:26.299 Test: blockdev write zeroes read no split ...passed 00:05:26.299 Test: blockdev write zeroes read split ...passed 00:05:26.299 Test: blockdev write zeroes read split partial ...passed 00:05:26.299 Test: blockdev reset ...passed 00:05:26.299 Test: blockdev write read 8 blocks ...passed 00:05:26.299 Test: blockdev write read size > 128k ...passed 00:05:26.299 Test: blockdev write read invalid size ...passed 00:05:26.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.299 Test: blockdev write read max offset ...passed 00:05:26.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.299 Test: blockdev writev readv 8 blocks ...passed 00:05:26.299 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.299 Test: blockdev writev readv block ...passed 00:05:26.299 Test: blockdev writev readv size > 128k ...passed 00:05:26.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.299 Test: blockdev comparev and writev ...passed 00:05:26.299 Test: blockdev nvme passthru rw ...passed 00:05:26.299 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.299 Test: blockdev nvme admin passthru ...passed 00:05:26.299 Test: blockdev copy ...passed 00:05:26.299 Suite: bdevio tests on: concat0 00:05:26.299 Test: blockdev write read block ...passed 00:05:26.299 Test: blockdev write zeroes read block ...passed 00:05:26.299 Test: blockdev write zeroes read no split ...passed 00:05:26.299 Test: blockdev write zeroes read split ...passed 00:05:26.299 Test: blockdev write zeroes read split partial ...passed 00:05:26.299 Test: blockdev reset ...passed 00:05:26.299 Test: blockdev write read 8 blocks ...passed 00:05:26.299 Test: blockdev write read size > 128k ...passed 00:05:26.299 Test: blockdev write read invalid size ...passed 00:05:26.299 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.299 Test: blockdev write read max offset ...passed 00:05:26.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.299 Test: blockdev writev readv 8 blocks ...passed 00:05:26.299 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.299 Test: blockdev writev readv block ...passed 00:05:26.299 Test: blockdev writev readv size > 128k ...passed 00:05:26.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.299 Test: blockdev comparev and writev ...passed 00:05:26.299 Test: blockdev nvme passthru rw ...passed 00:05:26.299 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.299 Test: blockdev nvme admin passthru ...passed 00:05:26.299 Test: blockdev copy ...passed 00:05:26.299 Suite: bdevio tests on: raid0 00:05:26.299 Test: blockdev write read block ...passed 00:05:26.299 Test: blockdev write zeroes read block ...passed 00:05:26.299 Test: blockdev write zeroes read no split ...passed 00:05:26.299 Test: blockdev write zeroes read split ...passed 00:05:26.299 Test: blockdev write zeroes read split partial ...passed 00:05:26.299 Test: blockdev reset ...passed 00:05:26.299 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: TestPT 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 
00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: Malloc2p7 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: Malloc2p6 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: Malloc2p5 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev 
write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: Malloc2p4 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.300 Test: blockdev writev readv size > 128k ...passed 00:05:26.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.300 Test: blockdev comparev and writev ...passed 00:05:26.300 Test: blockdev nvme passthru rw ...passed 00:05:26.300 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.300 Test: blockdev nvme admin passthru ...passed 00:05:26.300 Test: blockdev copy ...passed 00:05:26.300 Suite: bdevio tests on: Malloc2p3 00:05:26.300 Test: blockdev write read block ...passed 00:05:26.300 Test: blockdev write zeroes read block ...passed 00:05:26.300 Test: blockdev write zeroes read no split ...passed 00:05:26.300 Test: blockdev write zeroes read split ...passed 00:05:26.300 Test: blockdev write zeroes read split partial ...passed 00:05:26.300 Test: blockdev reset ...passed 00:05:26.300 Test: blockdev write read 8 blocks ...passed 00:05:26.300 Test: blockdev write read size > 128k ...passed 00:05:26.300 Test: blockdev write read invalid size ...passed 00:05:26.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.300 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:26.300 Test: blockdev write read max offset ...passed 00:05:26.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.300 Test: blockdev writev readv 8 blocks ...passed 00:05:26.300 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.300 Test: blockdev writev readv block ...passed 00:05:26.301 Test: blockdev writev readv size > 128k ...passed 00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.301 Test: blockdev comparev and writev ...passed 00:05:26.301 Test: blockdev nvme passthru rw ...passed 00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.301 Test: blockdev nvme admin passthru ...passed 00:05:26.301 Test: blockdev copy ...passed 00:05:26.301 Suite: bdevio tests on: Malloc2p2 00:05:26.301 Test: blockdev write read block ...passed 00:05:26.301 Test: blockdev write zeroes read block ...passed 00:05:26.301 Test: blockdev write zeroes read no split ...passed 00:05:26.301 Test: blockdev write zeroes read split ...passed 00:05:26.301 Test: blockdev write zeroes read split partial ...passed 00:05:26.301 Test: blockdev reset ...passed 00:05:26.301 Test: blockdev write read 8 blocks ...passed 00:05:26.301 Test: blockdev write read size > 128k ...passed 00:05:26.301 Test: blockdev write read invalid size ...passed 00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.301 Test: blockdev write read max offset ...passed 00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.301 Test: blockdev writev readv 8 blocks ...passed 00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.301 Test: blockdev writev readv block ...passed 00:05:26.301 Test: blockdev writev readv size > 128k ...passed 00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.301 Test: blockdev comparev and writev ...passed 00:05:26.301 Test: blockdev nvme passthru rw ...passed 00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.301 Test: blockdev nvme admin passthru ...passed 00:05:26.301 Test: blockdev copy ...passed 00:05:26.301 Suite: bdevio tests on: Malloc2p1 00:05:26.301 Test: blockdev write read block ...passed 00:05:26.301 Test: blockdev write zeroes read block ...passed 00:05:26.301 Test: blockdev write zeroes read no split ...passed 00:05:26.301 Test: blockdev write zeroes read split ...passed 00:05:26.301 Test: blockdev write zeroes read split partial ...passed 00:05:26.301 Test: blockdev reset ...passed 00:05:26.301 Test: blockdev write read 8 blocks ...passed 00:05:26.301 Test: blockdev write read size > 128k ...passed 00:05:26.301 Test: blockdev write read invalid size ...passed 00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.301 Test: blockdev write read max offset ...passed 00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.301 Test: blockdev writev readv 8 blocks ...passed 00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.301 Test: blockdev writev readv block ...passed 00:05:26.301 Test: blockdev writev readv size > 128k ...passed 00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.301 Test: blockdev comparev and writev ...passed 
00:05:26.301 Test: blockdev nvme passthru rw ...passed 00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.301 Test: blockdev nvme admin passthru ...passed 00:05:26.301 Test: blockdev copy ...passed 00:05:26.301 Suite: bdevio tests on: Malloc2p0 00:05:26.301 Test: blockdev write read block ...passed 00:05:26.301 Test: blockdev write zeroes read block ...passed 00:05:26.301 Test: blockdev write zeroes read no split ...passed 00:05:26.301 Test: blockdev write zeroes read split ...passed 00:05:26.301 Test: blockdev write zeroes read split partial ...passed 00:05:26.301 Test: blockdev reset ...passed 00:05:26.301 Test: blockdev write read 8 blocks ...passed 00:05:26.301 Test: blockdev write read size > 128k ...passed 00:05:26.301 Test: blockdev write read invalid size ...passed 00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.301 Test: blockdev write read max offset ...passed 00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.301 Test: blockdev writev readv 8 blocks ...passed 00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.301 Test: blockdev writev readv block ...passed 00:05:26.301 Test: blockdev writev readv size > 128k ...passed 00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.301 Test: blockdev comparev and writev ...passed 00:05:26.301 Test: blockdev nvme passthru rw ...passed 00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.301 Test: blockdev nvme admin passthru ...passed 00:05:26.301 Test: blockdev copy ...passed 00:05:26.301 Suite: bdevio tests on: Malloc1p1 00:05:26.301 Test: blockdev write read block ...passed 00:05:26.301 Test: blockdev write zeroes read block ...passed 00:05:26.301 Test: blockdev write zeroes read no split ...passed 00:05:26.301 Test: blockdev write zeroes read split ...passed 00:05:26.301 Test: blockdev write zeroes read split partial ...passed 00:05:26.301 Test: blockdev reset ...passed 00:05:26.301 Test: blockdev write read 8 blocks ...passed 00:05:26.301 Test: blockdev write read size > 128k ...passed 00:05:26.301 Test: blockdev write read invalid size ...passed 00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:26.301 Test: blockdev write read max offset ...passed 00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:26.301 Test: blockdev writev readv 8 blocks ...passed 00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed 00:05:26.301 Test: blockdev writev readv block ...passed 00:05:26.301 Test: blockdev writev readv size > 128k ...passed 00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:26.301 Test: blockdev comparev and writev ...passed 00:05:26.301 Test: blockdev nvme passthru rw ...passed 00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed 00:05:26.301 Test: blockdev nvme admin passthru ...passed 00:05:26.301 Test: blockdev copy ...passed 00:05:26.301 Suite: bdevio tests on: Malloc1p0 00:05:26.301 Test: blockdev write read block ...passed 00:05:26.301 Test: blockdev write zeroes read block ...passed 00:05:26.301 Test: blockdev write zeroes read no split ...passed 00:05:26.301 Test: blockdev write zeroes read split ...passed 00:05:26.301 Test: blockdev write 
zeroes read split partial ...passed
00:05:26.301 Test: blockdev reset ...passed
00:05:26.301 Test: blockdev write read 8 blocks ...passed
00:05:26.301 Test: blockdev write read size > 128k ...passed
00:05:26.301 Test: blockdev write read invalid size ...passed
00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:26.301 Test: blockdev write read max offset ...passed
00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:26.301 Test: blockdev writev readv 8 blocks ...passed
00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed
00:05:26.301 Test: blockdev writev readv block ...passed
00:05:26.301 Test: blockdev writev readv size > 128k ...passed
00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:26.301 Test: blockdev comparev and writev ...passed
00:05:26.301 Test: blockdev nvme passthru rw ...passed
00:05:26.301 Test: blockdev nvme passthru vendor specific ...passed
00:05:26.301 Test: blockdev nvme admin passthru ...passed
00:05:26.301 Test: blockdev copy ...passed
00:05:26.301 Suite: bdevio tests on: Malloc0
00:05:26.301 Test: blockdev write read block ...passed
00:05:26.301 Test: blockdev write zeroes read block ...passed
00:05:26.301 Test: blockdev write zeroes read no split ...passed
00:05:26.301 Test: blockdev write zeroes read split ...passed
00:05:26.301 Test: blockdev write zeroes read split partial ...passed
00:05:26.301 Test: blockdev reset ...passed
00:05:26.301 Test: blockdev write read 8 blocks ...passed
00:05:26.301 Test: blockdev write read size > 128k ...passed
00:05:26.301 Test: blockdev write read invalid size ...passed
00:05:26.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:26.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:26.301 Test: blockdev write read max offset ...passed
00:05:26.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:26.301 Test: blockdev writev readv 8 blocks ...passed
00:05:26.301 Test: blockdev writev readv 30 x 1block ...passed
00:05:26.301 Test: blockdev writev readv block ...passed
00:05:26.301 Test: blockdev writev readv size > 128k ...passed
00:05:26.301 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:26.301 Test: blockdev comparev and writev ...passed
00:05:26.301 Test: blockdev nvme passthru rw ...passed
00:05:26.302 Test: blockdev nvme passthru vendor specific ...passed
00:05:26.302 Test: blockdev nvme admin passthru ...passed
00:05:26.302 Test: blockdev copy ...passed
00:05:26.302
00:05:26.302 Run Summary: Type Total Ran Passed Failed Inactive
00:05:26.302 suites 16 16 n/a 0 0
00:05:26.302 tests 368 368 368 0 0
00:05:26.302 asserts 2224 2224 2224 0 n/a
00:05:26.302
00:05:26.302 Elapsed time = 0.539 seconds
00:05:26.302 0
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47753
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@942 -- # '[' -z 47753 ']'
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # kill -0 47753
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@947 -- # uname
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # ps -c -o
command 47753 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # tail -1 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # process_name=bdevio 00:05:26.302 killing process with pid 47753 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # '[' bdevio = sudo ']' 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # echo 'killing process with pid 47753' 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@961 -- # kill 47753 00:05:26.302 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # wait 47753 00:05:26.560 21:41:41 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:05:26.560 00:05:26.560 real 0m1.703s 00:05:26.560 user 0m3.292s 00:05:26.560 sys 0m0.767s 00:05:26.560 ************************************ 00:05:26.560 END TEST bdev_bounds 00:05:26.560 ************************************ 00:05:26.560 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:26.560 21:41:41 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:26.560 21:41:41 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:05:26.560 21:41:41 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:26.560 21:41:41 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:05:26.560 21:41:41 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:26.560 21:41:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:26.560 ************************************ 00:05:26.560 START TEST bdev_nbd 00:05:26.560 ************************************ 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@1117 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:05:26.818 00:05:26.818 real 0m0.004s 00:05:26.818 user 0m0.001s 00:05:26.818 sys 0m0.007s 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:26.818 ************************************ 00:05:26.818 END TEST bdev_nbd 00:05:26.818 ************************************ 00:05:26.818 21:41:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:26.818 21:41:41 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:05:26.818 21:41:41 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:05:26.818 21:41:41 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:05:26.818 21:41:41 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:05:26.818 21:41:41 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:05:26.818 21:41:41 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 
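The run summary above closes out the bdevio pass: 16 suites, one per bdev in the JSON config, for 368 tests and 2224 asserts, all passed in 0.539 seconds. The trace that follows it is the harness tearing down the bdevio app with its killprocess helper before skipping bdev_nbd (a Linux-only stage) and launching bdev_fio. Condensed into plain shell, the killprocess logic visible in the trace amounts to the sketch below; the function body is reconstructed from the traced commands rather than copied from the SPDK tree, and the Linux branch in particular is an assumption, since this FreeBSD run only exercises the ps -c path:

    # Reconstructed sketch of the killprocess pattern traced above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # no pid given, nothing to kill
        kill -0 "$pid" || return 0                 # signal 0 probes whether the process still exists
        if [ "$(uname)" = Linux ]; then
            process_name=$(cat /proc/$pid/comm)    # assumed Linux-side lookup, not exercised here
        else
            process_name=$(ps -c -o command "$pid" | tail -1)   # FreeBSD path seen in the trace
        fi
        [ "$process_name" = sudo ] && return 1     # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap the child so the next stage starts clean
    }

The bdev_nbd stage right after it is effectively a no-op on this bot: nbd_function_test checks uname -s, and on anything other than Linux it returns 0 immediately, which is why that test completes in a few milliseconds.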
00:05:26.818 21:41:41 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:26.818 21:41:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:26.818 ************************************ 00:05:26.818 START TEST bdev_fio 00:05:26.818 ************************************ 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1117 -- # fio_test_suite '' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:05:26.818 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1274 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1275 -- # local workload=verify 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local bdev_type=AIO 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local env_context= 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local fio_dir=/usr/src/fio 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -z verify ']' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -n '' ']' 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1293 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # cat 00:05:26.818 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1307 -- # '[' verify == verify ']' 00:05:26.819 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1308 -- # cat 00:05:26.819 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' AIO == AIO ']' 00:05:26.819 21:41:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # /usr/src/fio/fio --version 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # echo serialize_overlap=1 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:05:27.756 21:41:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:05:27.756 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:27.757 21:41:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:27.757 ************************************ 00:05:27.757 START TEST bdev_fio_rw_verify 00:05:27.757 ************************************ 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1117 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local sanitizers 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1334 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # shift 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local asan_lib= 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # grep libasan 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # asan_lib= 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # asan_lib= 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:27.757 21:41:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:27.757 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:27.757 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:27.757 fio-3.35
00:05:27.757 Starting 16 threads
00:05:28.323 EAL: TSC is not safe to use in SMP mode
00:05:28.323 EAL: TSC is not invariant
00:05:40.517
00:05:40.517 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101355: Mon Jul 15 21:41:55 2024
00:05:40.517 read: IOPS=232k, BW=907MiB/s (951MB/s)(9073MiB/10002msec)
00:05:40.517 slat (nsec): min=285, max=196861k, avg=3782.46, stdev=414386.20
00:05:40.517 clat (nsec): min=881, max=315168k, avg=52029.82, stdev=1631763.57
00:05:40.517 lat (usec): min=2, max=315170, avg=55.81, stdev=1683.56
00:05:40.517 clat percentiles (usec):
00:05:40.517 | 50.000th=[ 10], 99.000th=[ 717], 99.900th=[ 1090],
00:05:40.517 | 99.990th=[ 94897], 99.999th=[181404]
00:05:40.517 write: IOPS=396k, BW=1548MiB/s (1623MB/s)(15.0GiB/9917msec); 0 zone resets
00:05:40.517 slat (nsec): min=578, max=481893k, avg=20798.99, stdev=984651.92
00:05:40.517 clat (nsec): min=813, max=1231.8M, avg=99421.84, stdev=2700601.99
00:05:40.517 lat (usec): min=12, max=1231.9k, avg=120.22, stdev=2874.96
00:05:40.517 clat percentiles (usec):
00:05:40.517 | 50.000th=[ 51], 99.000th=[ 709], 99.900th=[ 2606],
00:05:40.517 | 99.990th=[ 94897], 99.999th=[252707]
00:05:40.517 bw ( MiB/s): min= 485, max= 2545, per=99.63%, avg=1541.84, stdev=42.50, samples=298
00:05:40.517 iops : min=124386, max=651570, avg=394711.67, stdev=10880.54, samples=298
00:05:40.517 lat (nsec) : 1000=0.01%
00:05:40.517 lat (usec) : 2=0.03%, 4=11.26%, 10=17.18%, 20=21.97%, 50=16.21%
00:05:40.517 lat (usec) : 100=29.69%, 250=2.00%, 500=0.14%, 750=0.72%, 1000=0.65%
00:05:40.517 lat (msec) : 2=0.05%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01%
00:05:40.517 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 2000=0.01%
00:05:40.517 cpu : usr=55.69%, sys=3.33%, ctx=1005750, majf=0, minf=620
00:05:40.517 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:05:40.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:40.517 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:40.517 issued rwts: total=2322662,3929086,0,0 short=0,0,0,0 dropped=0,0,0,0
00:05:40.517 latency : target=0, window=0, percentile=100.00%, depth=8
00:05:40.517
00:05:40.517 Run status group 0 (all jobs):
00:05:40.517 READ: bw=907MiB/s (951MB/s), 907MiB/s-907MiB/s (951MB/s-951MB/s), io=9073MiB (9514MB), run=10002-10002msec
00:05:40.517 WRITE: bw=1548MiB/s (1623MB/s), 1548MiB/s-1548MiB/s (1623MB/s-1623MB/s), io=15.0GiB (16.1GB), run=9917-9917msec
00:05:41.453
00:05:41.453 real 0m13.881s
00:05:41.453 user 1m35.249s
00:05:41.453 sys 0m8.955s
00:05:41.453 ************************************
00:05:41.453 END TEST bdev_fio_rw_verify
00:05:41.453 ************************************
00:05:41.453 21:41:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1118 -- # xtrace_disable
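Summed across the 16 jobs, the verify pass above read about 9.1 GB at 232k IOPS and wrote about 16.1 GB at 396k IOPS in roughly 10 seconds of runtime, all through fio's external SPDK bdev engine rather than a kernel block device. Stripped of the harness plumbing, the invocation that produced it boils down to the following; this is a condensed sketch of the command line assembled by the fio_bdev/fio_plugin wrappers traced above, with the plugin path as built in this particular workspace:

    # Run fio against SPDK bdevs via the external ioengine plugin.
    # LD_PRELOAD injects the plugin so fio can resolve ioengine=spdk_bdev;
    # --spdk_json_conf points the engine at the bdev configuration used all run long.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=2048 \
        --aux-path=/home/vagrant/spdk_repo/spdk/../output

The bdev.fio job file was generated earlier with the verify workload, so every randwrite is read back and checked. The two EAL lines are only warnings: on this VM the CPU timestamp counter is neither SMP-safe nor invariant, so the EAL notes it cannot rely on the TSC, and the run proceeds regardless.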
21:41:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1136 -- # return 0 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1274 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1275 -- # local workload=trim 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local bdev_type= 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local env_context= 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local fio_dir=/usr/src/fio 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -z trim ']' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1289 -- # '[' -n '' ']' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1293 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # cat 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1307 -- # '[' trim == verify ']' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # '[' trim == trim ']' 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # echo rw=trimwrite 00:05:41.453 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:41.454 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3db57840-5e69-4c5a-9292-e6c2071d8d9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"3db57840-5e69-4c5a-9292-e6c2071d8d9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2170fa05-1ec8-a05a-b9fd-232e1c5249cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2170fa05-1ec8-a05a-b9fd-232e1c5249cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aeccb26b-24fe-1956-9cdb-120dc3a48cd4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aeccb26b-24fe-1956-9cdb-120dc3a48cd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2a4331c9-4eab-2c55-a82b-d552d35ec239"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2a4331c9-4eab-2c55-a82b-d552d35ec239",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6b529c44-8499-ee56-b80c-8341ff9772c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b529c44-8499-ee56-b80c-8341ff9772c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e999e241-2ce6-e757-954e-5cfe14a9db6e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e999e241-2ce6-e757-954e-5cfe14a9db6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "556285cb-9196-3358-aef8-362aabf9da42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "556285cb-9196-3358-aef8-362aabf9da42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c5833a1f-83af-4f5a-b246-bfbc2de5cc85"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5833a1f-83af-4f5a-b246-bfbc2de5cc85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "6ef14009-b1d6-c155-82fd-85cdbc0f64d7"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ef14009-b1d6-c155-82fd-85cdbc0f64d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "03f06adc-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "03e7d33d-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "03e90baf-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "03f197dd-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "03ea443b-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"03eb7cb9-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "03f2d08b-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "03ecb535-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "03ededb3-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "03fac0c9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "03fac0c9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:41.454 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:05:41.454 Malloc1p0 00:05:41.454 Malloc1p1 00:05:41.454 Malloc2p0 00:05:41.454 Malloc2p1 00:05:41.454 Malloc2p2 00:05:41.454 Malloc2p3 00:05:41.454 Malloc2p4 00:05:41.454 Malloc2p5 00:05:41.454 Malloc2p6 00:05:41.454 Malloc2p7 00:05:41.454 TestPT 00:05:41.454 raid0 00:05:41.454 concat0 ]] 00:05:41.454 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:41.455 21:41:56 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03e2f1a9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3db57840-5e69-4c5a-9292-e6c2071d8d9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3db57840-5e69-4c5a-9292-e6c2071d8d9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2170fa05-1ec8-a05a-b9fd-232e1c5249cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2170fa05-1ec8-a05a-b9fd-232e1c5249cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "aeccb26b-24fe-1956-9cdb-120dc3a48cd4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aeccb26b-24fe-1956-9cdb-120dc3a48cd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "2a4331c9-4eab-2c55-a82b-d552d35ec239"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2a4331c9-4eab-2c55-a82b-d552d35ec239",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48ec0d9b-d30c-cf58-9091-d65e3e0db5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6b529c44-8499-ee56-b80c-8341ff9772c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b529c44-8499-ee56-b80c-8341ff9772c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e999e241-2ce6-e757-954e-5cfe14a9db6e"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e999e241-2ce6-e757-954e-5cfe14a9db6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5ef31410-1c8f-eb56-a0d6-eb34d2c850e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "556285cb-9196-3358-aef8-362aabf9da42"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "556285cb-9196-3358-aef8-362aabf9da42",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c5833a1f-83af-4f5a-b246-bfbc2de5cc85"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c5833a1f-83af-4f5a-b246-bfbc2de5cc85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "6ef14009-b1d6-c155-82fd-85cdbc0f64d7"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6ef14009-b1d6-c155-82fd-85cdbc0f64d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "03f06adc-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f06adc-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "03e7d33d-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "03e90baf-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "03f197dd-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f197dd-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "03ea443b-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "03eb7cb9-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "03f2d08b-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "03f2d08b-42f3-11ef-9f7f-e9a656123a8b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "03ecb535-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "03ededb3-42f3-11ef-9f7f-e9a656123a8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "03fac0c9-42f3-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "03fac0c9-42f3-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:41.455 21:41:56 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:41.455 ************************************ 00:05:41.455 START TEST bdev_fio_trim 00:05:41.456 ************************************ 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1117 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local sanitizers 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1334 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # shift 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local asan_lib= 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # grep libasan 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # asan_lib= 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # asan_lib= 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:41.456 21:41:56 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:41.715 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:41.715 fio-3.35 00:05:41.715 Starting 14 threads 00:05:42.279 EAL: TSC is not safe to use in SMP mode 00:05:42.279 EAL: TSC is not invariant 00:05:54.475 00:05:54.475 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101374: Mon Jul 15 21:42:07 2024 00:05:54.475 write: IOPS=2444k, BW=9546MiB/s (10.0GB/s)(93.2GiB/10002msec); 0 zone resets 00:05:54.475 slat (nsec): min=278, max=1401.6M, avg=1474.53, stdev=338124.51 00:05:54.475 clat (nsec): min=1400, max=1244.4M, avg=15551.71, stdev=1206034.14 00:05:54.475 lat (usec): min=2, max=1401.6k, avg=17.03, stdev=1252.54 00:05:54.475 clat percentiles (usec): 00:05:54.475 | 50.000th=[ 7], 99.000th=[ 17], 99.900th=[ 955], 99.990th=[ 963], 00:05:54.475 | 99.999th=[94897] 00:05:54.475 bw ( MiB/s): min= 2444, max=14730, per=100.00%, avg=9692.98, stdev=294.90, samples=261 00:05:54.475 iops : min=625876, max=3771020, avg=2481402.49, stdev=75493.28, samples=261 00:05:54.475 trim: IOPS=2444k, BW=9546MiB/s (10.0GB/s)(93.2GiB/10002msec); 0 zone resets 00:05:54.475 slat (nsec): min=573, max=1015.8M, avg=1557.76, stdev=397660.26 00:05:54.475 clat (nsec): min=400, max=1401.6M, avg=11195.92, stdev=882241.43 00:05:54.475 lat (nsec): min=1656, max=1401.6M, avg=12753.68, stdev=967728.30 00:05:54.475 clat percentiles (usec): 00:05:54.475 | 50.000th=[ 9], 99.000th=[ 16], 99.900th=[ 24], 99.990th=[ 50], 00:05:54.475 | 99.999th=[94897] 00:05:54.475 bw ( MiB/s): min= 2444, max=14730, per=100.00%, avg=9692.99, stdev=294.90, samples=261 00:05:54.475 iops : min=625876, max=3771036, avg=2481404.24, stdev=75493.26, samples=261 00:05:54.475 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:05:54.475 lat (usec) : 2=0.09%, 4=23.12%, 10=58.10%, 20=18.23%, 50=0.26% 00:05:54.475 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.17% 00:05:54.475 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 
50=0.01% 00:05:54.475 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 1000=0.01%, 2000=0.01% 00:05:54.475 cpu : usr=63.27%, sys=4.01%, ctx=1098144, majf=0, minf=0 00:05:54.475 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:54.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:54.475 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:54.475 issued rwts: total=0,24442240,24442247,0 short=0,0,0,0 dropped=0,0,0,0 00:05:54.475 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:54.475 00:05:54.475 Run status group 0 (all jobs): 00:05:54.475 WRITE: bw=9546MiB/s (10.0GB/s), 9546MiB/s-9546MiB/s (10.0GB/s-10.0GB/s), io=93.2GiB (100GB), run=10002-10002msec 00:05:54.475 TRIM: bw=9546MiB/s (10.0GB/s), 9546MiB/s-9546MiB/s (10.0GB/s-10.0GB/s), io=93.2GiB (100GB), run=10002-10002msec 00:05:54.475 00:05:54.475 real 0m12.424s 00:05:54.475 user 1m34.484s 00:05:54.475 sys 0m8.480s 00:05:54.475 21:42:09 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:54.475 21:42:09 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 ************************************ 00:05:54.475 END TEST bdev_fio_trim 00:05:54.475 ************************************ 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1136 -- # return 0 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:05:54.475 /home/vagrant/spdk_repo/spdk 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:05:54.475 00:05:54.475 real 0m27.263s 00:05:54.475 user 3m10.143s 00:05:54.475 sys 0m17.945s 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:54.475 21:42:09 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 ************************************ 00:05:54.475 END TEST bdev_fio 00:05:54.475 ************************************ 00:05:54.475 21:42:09 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:05:54.475 21:42:09 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:54.475 21:42:09 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:54.475 21:42:09 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 16 -le 1 ']' 00:05:54.475 21:42:09 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:54.475 21:42:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 ************************************ 00:05:54.475 START TEST bdev_verify 00:05:54.475 ************************************ 00:05:54.475 21:42:09 blockdev_general.bdev_verify -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:54.475 [2024-07-15 21:42:09.112467] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:54.475 [2024-07-15 21:42:09.112629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:54.475 EAL: TSC is not safe to use in SMP mode 00:05:54.475 EAL: TSC is not invariant 00:05:54.475 [2024-07-15 21:42:09.647107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.731 [2024-07-15 21:42:09.754963] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:54.731 [2024-07-15 21:42:09.755035] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:54.731 [2024-07-15 21:42:09.758260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.731 [2024-07-15 21:42:09.758250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.731 [2024-07-15 21:42:09.820063] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.731 [2024-07-15 21:42:09.820118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.731 [2024-07-15 21:42:09.828046] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.731 [2024-07-15 21:42:09.828093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.731 [2024-07-15 21:42:09.836056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.731 [2024-07-15 21:42:09.836099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:54.731 [2024-07-15 21:42:09.836110] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:54.731 [2024-07-15 21:42:09.884072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.731 [2024-07-15 21:42:09.884139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.731 [2024-07-15 21:42:09.884159] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x381690036800 00:05:54.731 [2024-07-15 21:42:09.884173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.731 [2024-07-15 21:42:09.884666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.731 [2024-07-15 21:42:09.884707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:54.988 Running I/O for 5 seconds... 
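Before the verify results below, two numbers from the trim stage above are worth tying together. The long xtrace dumps exist because blockdev.sh@356 pipes the bdev list through jq and keeps only bdevs that report "unmap": true, then emits one fio job section per survivor (blockdev.sh@357-358). That filter is why raid1 and AIO0, which both advertise "unmap": false in the JSON dumps above, get no job section, and why fio starts 14 threads rather than one per each of the 16 bdevs. A condensed sketch of that loop, with the output redirection assumed from the bdev.fio path handed to fio later in the log:

# Condensed from the bdev/blockdev.sh@356-358 xtrace above; the >> target is
# an assumption, based on the bdev.fio file the fio invocation consumes.
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"      # one fio job section per unmap-capable bdev
    echo "filename=$b"   # the spdk_bdev ioengine resolves this as a bdev name
done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

The trim throughput figures are also self-consistent: 9546 MiB/s of 4 KiB requests is 9546 * 256 = 2443776 IOPS, matching the reported 2444k for both the write and trim phases.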
00:06:00.249 00:06:00.249 Latency(us) 00:06:00.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:00.249 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x1000 00:06:00.249 Malloc0 : 5.02 6445.91 25.18 0.00 0.00 19849.46 63.30 51475.59 00:06:00.249 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x1000 length 0x1000 00:06:00.249 Malloc0 : 5.03 107.39 0.42 0.00 0.00 1191100.88 930.91 1723478.92 00:06:00.249 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x800 00:06:00.249 Malloc1p0 : 5.02 5916.81 23.11 0.00 0.00 21619.61 279.27 23712.13 00:06:00.249 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x800 length 0x800 00:06:00.249 Malloc1p0 : 5.02 6429.98 25.12 0.00 0.00 19894.32 275.55 22043.94 00:06:00.249 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x800 00:06:00.249 Malloc1p1 : 5.02 5916.44 23.11 0.00 0.00 21615.64 366.78 22997.20 00:06:00.249 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x800 length 0x800 00:06:00.249 Malloc1p1 : 5.02 6429.57 25.12 0.00 0.00 19891.99 268.10 21448.16 00:06:00.249 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p0 : 5.02 5916.10 23.11 0.00 0.00 21611.93 284.86 22639.73 00:06:00.249 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p0 : 5.02 6429.20 25.11 0.00 0.00 19889.59 361.19 20852.38 00:06:00.249 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p1 : 5.02 5915.76 23.11 0.00 0.00 21608.74 294.17 21209.85 00:06:00.249 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p1 : 5.02 6428.83 25.11 0.00 0.00 19886.59 271.83 19303.35 00:06:00.249 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p2 : 5.02 5915.43 23.11 0.00 0.00 21605.53 336.99 20494.91 00:06:00.249 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p2 : 5.02 6428.43 25.11 0.00 0.00 19883.59 288.58 18588.41 00:06:00.249 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p3 : 5.02 5915.12 23.11 0.00 0.00 21601.13 307.20 20137.44 00:06:00.249 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p3 : 5.02 6428.05 25.11 0.00 0.00 19880.56 269.96 18230.94 00:06:00.249 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p4 : 5.02 5914.79 23.10 0.00 0.00 21597.86 266.24 18826.72 
00:06:00.249 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p4 : 5.02 6427.69 25.11 0.00 0.00 19877.87 303.48 17396.84 00:06:00.249 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p5 : 5.02 5914.44 23.10 0.00 0.00 21595.05 275.55 19184.19 00:06:00.249 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p5 : 5.02 6426.76 25.10 0.00 0.00 19875.57 266.24 17992.62 00:06:00.249 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p6 : 5.02 5914.11 23.10 0.00 0.00 21591.93 290.44 19660.81 00:06:00.249 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p6 : 5.02 6426.30 25.10 0.00 0.00 19874.62 264.38 18469.25 00:06:00.249 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x200 00:06:00.249 Malloc2p7 : 5.02 5913.82 23.10 0.00 0.00 21587.99 297.89 20614.07 00:06:00.249 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x200 length 0x200 00:06:00.249 Malloc2p7 : 5.02 6425.87 25.10 0.00 0.00 19872.05 266.24 18945.88 00:06:00.249 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x1000 00:06:00.249 TestPT : 5.02 5913.50 23.10 0.00 0.00 21584.95 273.69 21329.00 00:06:00.249 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x1000 length 0x1000 00:06:00.249 TestPT : 5.02 5200.47 20.31 0.00 0.00 24546.67 726.11 64344.48 00:06:00.249 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x2000 00:06:00.249 raid0 : 5.02 5913.21 23.10 0.00 0.00 21581.76 284.86 22282.26 00:06:00.249 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x2000 length 0x2000 00:06:00.249 raid0 : 5.02 6425.13 25.10 0.00 0.00 19866.94 284.86 18469.25 00:06:00.249 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x2000 00:06:00.249 concat0 : 5.02 5912.92 23.10 0.00 0.00 21578.81 296.03 23116.35 00:06:00.249 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x2000 length 0x2000 00:06:00.249 concat0 : 5.02 6424.76 25.10 0.00 0.00 19864.19 275.55 19541.66 00:06:00.249 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x1000 00:06:00.249 raid1 : 5.02 5912.61 23.10 0.00 0.00 21575.20 381.67 24188.76 00:06:00.249 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x1000 length 0x1000 00:06:00.249 raid1 : 5.02 6424.29 25.09 0.00 0.00 19861.20 387.26 21686.47 00:06:00.249 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x0 length 0x4e2 00:06:00.249 
AIO0 : 5.16 769.92 3.01 0.00 0.00 163523.89 804.31 284068.98 00:06:00.249 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:00.249 Verification LBA range: start 0x4e2 length 0x4e2 00:06:00.249 AIO0 : 5.16 778.42 3.04 0.00 0.00 161371.99 13822.15 263097.45 00:06:00.249 =================================================================================================================== 00:06:00.249 Total : 179662.02 701.80 0.00 0.00 22769.15 63.30 1723478.92 00:06:00.249 00:06:00.249 real 0m6.317s 00:06:00.249 user 0m10.199s 00:06:00.249 sys 0m0.698s 00:06:00.249 ************************************ 00:06:00.249 END TEST bdev_verify 00:06:00.249 ************************************ 00:06:00.249 21:42:15 blockdev_general.bdev_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:00.249 21:42:15 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:00.507 21:42:15 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:06:00.507 21:42:15 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:00.507 21:42:15 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 16 -le 1 ']' 00:06:00.507 21:42:15 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:00.507 21:42:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:00.507 ************************************ 00:06:00.507 START TEST bdev_verify_big_io 00:06:00.507 ************************************ 00:06:00.507 21:42:15 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:00.507 [2024-07-15 21:42:15.475270] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:00.507 [2024-07-15 21:42:15.475517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:01.074 EAL: TSC is not safe to use in SMP mode 00:06:01.074 EAL: TSC is not invariant 00:06:01.074 [2024-07-15 21:42:16.031480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.074 [2024-07-15 21:42:16.119449] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:01.074 [2024-07-15 21:42:16.119513] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:01.074 [2024-07-15 21:42:16.122254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.074 [2024-07-15 21:42:16.122246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.074 [2024-07-15 21:42:16.180827] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:01.074 [2024-07-15 21:42:16.180876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:01.074 [2024-07-15 21:42:16.188811] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:01.074 [2024-07-15 21:42:16.188843] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:01.074 [2024-07-15 21:42:16.196827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:01.074 [2024-07-15 21:42:16.196859] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:01.074 [2024-07-15 21:42:16.196868] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:01.074 [2024-07-15 21:42:16.244833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:01.074 [2024-07-15 21:42:16.244889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.074 [2024-07-15 21:42:16.244901] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0781636800 00:06:01.074 [2024-07-15 21:42:16.244909] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.074 [2024-07-15 21:42:16.245289] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.074 [2024-07-15 21:42:16.245318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:01.333 [2024-07-15 21:42:16.346366] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.346592] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.346773] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.346959] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.347173] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.347455] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.347638] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.347825] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.347998] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.348176] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.348359] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.348533] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.348704] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.348882] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.349054] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.349247] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:01.333 [2024-07-15 21:42:16.351067] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:01.333 [2024-07-15 21:42:16.351288] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:01.333 Running I/O for 5 seconds... 
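The wall of bdevperf_construct_job warnings above is deterministic arithmetic rather than an error: with -o 65536, a verify job cannot keep more requests in flight than distinct 64 KiB regions it can check, so the effective queue depth is bounded by the bdev's capacity divided by the IO size. The printed caps carry an additional factor of two that is inferred here purely from this log's numbers, not taken from the bdevperf source: each 4 MiB split (8192 blocks * 512 B) holds 64 such regions and is capped at 32, while AIO0 (5000 blocks * 2048 B) holds 156 and is capped at 78. A minimal sketch that reproduces the printed limits under that assumption:

# Reproduces the queue-depth caps printed above. The trailing /2 is an
# assumption inferred from this log (64 -> 32, 156 -> 78), not a rule
# confirmed against the bdevperf source.
io_size=65536
for spec in Malloc2p0:8192:512 AIO0:5000:2048; do
    IFS=: read -r name blocks bs <<< "$spec"
    echo "$name: depth capped at $(( blocks * bs / io_size / 2 ))"
done

The same bound explains which bdevs escape the warning: Malloc0 (65536 blocks) and the Malloc1 splits (32768 blocks each) hold enough 64 KiB regions that the requested -q 128 already fits.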
00:06:06.600
00:06:06.600 Latency(us)
00:06:06.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:06.600 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x100
00:06:06.600 Malloc0 : 5.05 3702.05 231.38 0.00 0.00 34487.63 85.64 106764.18
00:06:06.600 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x100 length 0x100
00:06:06.600 Malloc0 : 5.06 4227.82 264.24 0.00 0.00 30186.29 86.57 113913.57
00:06:06.600 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x80
00:06:06.600 Malloc1p0 : 5.07 1753.11 109.57 0.00 0.00 72607.08 1094.75 136314.98
00:06:06.600 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x80 length 0x80
00:06:06.600 Malloc1p0 : 5.10 549.14 34.32 0.00 0.00 231740.79 443.11 280255.97
00:06:06.600 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x80
00:06:06.600 Malloc1p1 : 5.08 484.69 30.29 0.00 0.00 262424.75 383.53 301227.51
00:06:06.600 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x80 length 0x80
00:06:06.600 Malloc1p1 : 5.10 549.12 34.32 0.00 0.00 231284.41 431.94 270723.46
00:06:06.600 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p0 : 5.06 467.66 29.23 0.00 0.00 67943.78 247.62 98661.54
00:06:06.600 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p0 : 5.07 533.49 33.34 0.00 0.00 59501.24 275.55 96755.04
00:06:06.600 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p1 : 5.06 467.63 29.23 0.00 0.00 67915.73 258.79 97708.29
00:06:06.600 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p1 : 5.07 533.46 33.34 0.00 0.00 59473.94 273.69 95325.16
00:06:06.600 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p2 : 5.06 467.61 29.23 0.00 0.00 67887.43 243.90 96755.04
00:06:06.600 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p2 : 5.07 533.43 33.34 0.00 0.00 59451.00 277.41 93895.28
00:06:06.600 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p3 : 5.06 467.59 29.22 0.00 0.00 67849.84 245.76 95801.79
00:06:06.600 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p3 : 5.07 533.40 33.34 0.00 0.00 59423.48 279.27 92465.41
00:06:06.600 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p4 : 5.06 467.56 29.22 0.00 0.00 67821.64 258.79 94848.54
00:06:06.600 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p4 : 5.07 533.37 33.34 0.00 0.00 59393.37 273.69 91035.53
00:06:06.600 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p5 : 5.06 467.54 29.22 0.00 0.00 67808.56 247.62 93895.28
00:06:06.600 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.600 Malloc2p5 : 5.07 533.34 33.33 0.00 0.00 59360.21 277.41 90082.28
00:06:06.600 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x0 length 0x20
00:06:06.600 Malloc2p6 : 5.07 470.17 29.39 0.00 0.00 67447.58 251.35 92942.03
00:06:06.600 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.600 Verification LBA range: start 0x20 length 0x20
00:06:06.601 Malloc2p6 : 5.07 533.31 33.33 0.00 0.00 59334.42 271.83 88652.40
00:06:06.601 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x20
00:06:06.601 Malloc2p7 : 5.07 470.14 29.38 0.00 0.00 67417.01 249.48 92465.41
00:06:06.601 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x20 length 0x20
00:06:06.601 Malloc2p7 : 5.08 535.64 33.48 0.00 0.00 59064.41 281.13 87222.52
00:06:06.601 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x100
00:06:06.601 TestPT : 5.12 481.12 30.07 0.00 0.00 261879.49 6404.66 244032.41
00:06:06.601 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x100 length 0x100
00:06:06.601 TestPT : 5.21 231.54 14.47 0.00 0.00 542750.38 17992.62 537633.91
00:06:06.601 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x200
00:06:06.601 raid0 : 5.09 487.67 30.48 0.00 0.00 259273.36 376.09 282162.48
00:06:06.601 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x200 length 0x200
00:06:06.601 raid0 : 5.10 552.56 34.54 0.00 0.00 228123.89 409.60 244032.41
00:06:06.601 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x200
00:06:06.601 concat0 : 5.09 487.65 30.48 0.00 0.00 258877.62 396.57 274536.46
00:06:06.601 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x200 length 0x200
00:06:06.601 concat0 : 5.09 559.42 34.96 0.00 0.00 225243.54 353.75 237359.65
00:06:06.601 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x100
00:06:06.601 raid1 : 5.09 490.75 30.67 0.00 0.00 256872.83 422.63 266910.45
00:06:06.601 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x100 length 0x100
00:06:06.601 raid1 : 5.10 558.79 34.92 0.00 0.00 225122.71 469.18 226873.88
00:06:06.601 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x0 length 0x4e
00:06:06.601 AIO0 : 5.08 490.47 30.65 0.00 0.00 156459.22 297.89 161099.52
00:06:06.601 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:06:06.601 Verification LBA range: start 0x4e length 0x4e
00:06:06.601 AIO0 : 5.10 564.06 35.25 0.00 0.00 135743.10 323.96 136314.98
00:06:06.601 ===================================================================================================================
00:06:06.601 Total : 24185.29 1511.58 0.00 0.00 100947.50 85.64 537633.91
00:06:06.859
00:06:06.859 real 0m6.383s
00:06:06.859 user 0m11.278s
00:06:06.859 sys 0m0.708s
00:06:06.859 21:42:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:06.859 21:42:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:06.859 ************************************
00:06:06.859 END TEST bdev_verify_big_io
00:06:06.859 ************************************
00:06:06.859 21:42:21 blockdev_general -- common/autotest_common.sh@1136 -- # return 0
00:06:06.859 21:42:21 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:06.859 21:42:21 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']'
00:06:06.859 21:42:21 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:06.859 21:42:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:06.859 ************************************
00:06:06.859 START TEST bdev_write_zeroes
00:06:06.859 ************************************
00:06:06.859 21:42:21 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:06.859 [2024-07-15 21:42:21.902021] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
[2024-07-15 21:42:21.902301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:07.425 EAL: TSC is not safe to use in SMP mode
00:06:07.425 EAL: TSC is not invariant
00:06:07.425 [2024-07-15 21:42:22.418352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.425 [2024-07-15 21:42:22.516029] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
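
For anyone reproducing a run like the one the harness just launched, the bdevperf invocation is echoed verbatim in the trace. A minimal sketch, assuming the same repo layout as the paths above:

    $ cd /home/vagrant/spdk_repo/spdk
    $ ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1

Here --json supplies the bdev configuration, -q the per-job queue depth, -o the IO size in bytes, -w the workload type, and -t the run time in seconds; all of these values appear in the run_test line above.
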
00:06:07.425 [2024-07-15 21:42:22.518596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.425 [2024-07-15 21:42:22.579217] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:07.425 [2024-07-15 21:42:22.579279] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:07.425 [2024-07-15 21:42:22.587205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:07.425 [2024-07-15 21:42:22.587245] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:07.425 [2024-07-15 21:42:22.595222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:07.425 [2024-07-15 21:42:22.595259] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:06:07.425 [2024-07-15 21:42:22.595270] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:06:07.683 [2024-07-15 21:42:22.643234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:07.683 [2024-07-15 21:42:22.643291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:07.683 [2024-07-15 21:42:22.643305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29e6c8e36800
00:06:07.683 [2024-07-15 21:42:22.643315] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:07.683 [2024-07-15 21:42:22.643771] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:07.683 [2024-07-15 21:42:22.643798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:06:07.683 Running I/O for 1 seconds...
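
The "Match on Malloc3" / "created pt_bdev for: TestPT" sequence above is the passthru vbdev binding to its base bdev once Malloc3 arrives. The same two-bdev stack can be built by hand against a running SPDK application; a sketch over JSON-RPC (flag spellings as in current scripts/rpc.py — verify against your tree):

    # base malloc bdev: 128 MiB, 512-byte blocks
    $ scripts/rpc.py bdev_malloc_create -b Malloc3 128 512
    # passthru bdev "TestPT" layered on top of it
    $ scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
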
00:06:08.655
00:06:08.655 Latency(us)
00:06:08.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:08.655 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc0 : 1.00 31593.43 123.41 0.00 0.00 4050.73 181.53 8698.42
00:06:08.655 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc1p0 : 1.00 31589.72 123.40 0.00 0.00 4048.98 216.90 8519.69
00:06:08.655 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc1p1 : 1.00 31586.86 123.39 0.00 0.00 4047.66 213.18 8340.95
00:06:08.655 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p0 : 1.01 31603.12 123.45 0.00 0.00 4043.35 215.97 8102.64
00:06:08.655 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p1 : 1.01 31599.45 123.44 0.00 0.00 4042.51 215.04 7923.90
00:06:08.655 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p2 : 1.01 31596.73 123.42 0.00 0.00 4040.59 213.18 7745.17
00:06:08.655 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p3 : 1.01 31593.72 123.41 0.00 0.00 4039.02 211.32 7536.65
00:06:08.655 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p4 : 1.01 31588.02 123.39 0.00 0.00 4037.96 224.35 7328.12
00:06:08.655 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p5 : 1.01 31584.83 123.38 0.00 0.00 4036.15 213.18 7179.18
00:06:08.655 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p6 : 1.01 31581.98 123.37 0.00 0.00 4034.37 214.11 6970.65
00:06:08.655 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 Malloc2p7 : 1.01 31578.35 123.35 0.00 0.00 4033.90 226.21 6732.34
00:06:08.655 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 TestPT : 1.01 31575.44 123.34 0.00 0.00 4031.93 222.49 6523.82
00:06:08.655 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 raid0 : 1.01 31569.88 123.32 0.00 0.00 4030.24 333.27 6315.29
00:06:08.655 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 concat0 : 1.01 31564.53 123.30 0.00 0.00 4028.59 312.79 6047.19
00:06:08.655 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 raid1 : 1.01 31559.03 123.28 0.00 0.00 4026.09 528.76 5570.56
00:06:08.655 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:08.655 AIO0 : 1.05 2662.75 10.40 0.00 0.00 46597.21 759.62 178258.05
00:06:08.655 ===================================================================================================================
00:06:08.655 Total : 476427.84 1861.05 0.00 0.00 4286.57 181.53 178258.05
00:06:08.913
00:06:08.913 real 0m2.167s
00:06:08.913 user 0m1.387s
00:06:08.913 sys 0m0.557s
00:06:08.913 21:42:24 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:08.913 21:42:24 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:08.913 ************************************
00:06:08.913 END TEST bdev_write_zeroes
00:06:08.913 ************************************
00:06:08.913 21:42:24 blockdev_general -- common/autotest_common.sh@1136 -- # return 0
00:06:08.913 21:42:24 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:08.913 21:42:24 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']'
00:06:08.913 21:42:24 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:08.913 21:42:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:09.171 ************************************
00:06:09.171 START TEST bdev_json_nonenclosed
00:06:09.171 ************************************
00:06:09.171 21:42:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:09.171 [2024-07-15 21:42:24.114086] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
[2024-07-15 21:42:24.114322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:09.738 EAL: TSC is not safe to use in SMP mode
00:06:09.738 EAL: TSC is not invariant
00:06:09.738 [2024-07-15 21:42:24.629076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.738 [2024-07-15 21:42:24.716817] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:09.738 [2024-07-15 21:42:24.719031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.738 [2024-07-15 21:42:24.719073] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:09.738 [2024-07-15 21:42:24.719098] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:09.738 [2024-07-15 21:42:24.719107] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:09.738
00:06:09.738 real 0m0.732s
00:06:09.738 user 0m0.178s
00:06:09.738 sys 0m0.552s
00:06:09.738 21:42:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1117 -- # es=234
00:06:09.738 21:42:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:09.738 ************************************
00:06:09.738 END TEST bdev_json_nonenclosed
00:06:09.738 ************************************
00:06:09.738 21:42:24 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:09.738 21:42:24 blockdev_general -- common/autotest_common.sh@1136 -- # return 234
00:06:09.738 21:42:24 blockdev_general -- bdev/blockdev.sh@782 -- # true
00:06:09.738 21:42:24 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:09.738 21:42:24 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']'
00:06:09.738 21:42:24 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:09.738 21:42:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:09.738 ************************************
00:06:09.738 START TEST bdev_json_nonarray
00:06:09.738 ************************************
00:06:09.738 21:42:24 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:09.738 [2024-07-15 21:42:24.891586] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
[2024-07-15 21:42:24.891835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:10.304 EAL: TSC is not safe to use in SMP mode
00:06:10.304 EAL: TSC is not invariant
00:06:10.304 [2024-07-15 21:42:25.412902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.562 [2024-07-15 21:42:25.498854] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:10.562 [2024-07-15 21:42:25.500969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.562 [2024-07-15 21:42:25.501010] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
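
Both negative tests above feed bdevperf a deliberately malformed --json file: nonenclosed.json trips "not enclosed in {}" and nonarray.json trips "'subsystems' should be an array". For contrast, a minimal well-formed configuration of the shape json_config expects (a sketch — the malloc parameters are borrowed from the bdevs this log creates elsewhere, not from the actual bdev.json):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }

The two error paths map directly onto this shape: the top-level value must be a JSON object, and its "subsystems" member must be an array of subsystem objects.
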
00:06:10.562 [2024-07-15 21:42:25.501020] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:10.562 [2024-07-15 21:42:25.501029] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:10.562
00:06:10.562 real 0m0.728s
00:06:10.562 user 0m0.180s
00:06:10.562 sys 0m0.549s
00:06:10.562 21:42:25 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1117 -- # es=234
00:06:10.562 21:42:25 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:10.562 21:42:25 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:10.562 ************************************
00:06:10.562 END TEST bdev_json_nonarray
00:06:10.562 ************************************
00:06:10.562 21:42:25 blockdev_general -- common/autotest_common.sh@1136 -- # return 234
00:06:10.562 21:42:25 blockdev_general -- bdev/blockdev.sh@785 -- # true
00:06:10.562 21:42:25 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
00:06:10.562 21:42:25 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:06:10.562 21:42:25 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:06:10.562 21:42:25 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:10.562 21:42:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:10.562 ************************************
00:06:10.562 START TEST bdev_qos
00:06:10.562 ************************************
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1117 -- # qos_test_suite ''
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48166
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48166'
00:06:10.562 Process qos testing pid: 48166
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48166
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@823 -- # '[' -z 48166 ']'
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:10.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:10.562 21:42:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:10.562 [2024-07-15 21:42:25.667320] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
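
The QoS suite that starts here builds its fixture over RPC against the bdevperf app it just launched: a malloc bdev to throttle and a null bdev as an unthrottled control, both 128 MiB with 512-byte blocks (hence the "num_blocks": 262144 in the dumps below). Run by hand, the equivalent calls would be the following sketch (rpc_cmd in the trace is the test harness's wrapper around scripts/rpc.py):

    $ scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    $ scripts/rpc.py bdev_null_create Null_1 128 512
    $ scripts/rpc.py bdev_get_bdevs -b Malloc_0   # inspect the result
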
00:06:10.562 [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:11.128 EAL: TSC is not safe to use in SMP mode
00:06:11.128 EAL: TSC is not invariant
00:06:11.128 [2024-07-15 21:42:26.197809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.128 [2024-07-15 21:42:26.283534] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:11.128 [2024-07-15 21:42:26.285609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # return 0
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.695 Malloc_0
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@891 -- # local bdev_name=Malloc_0
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@893 -- # local i
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.695 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.696 [
00:06:11.696 {
00:06:11.696 "name": "Malloc_0",
00:06:11.696 "aliases": [
00:06:11.696 "20db4b32-42f3-11ef-9f7f-e9a656123a8b"
00:06:11.696 ],
00:06:11.696 "product_name": "Malloc disk",
00:06:11.696 "block_size": 512,
00:06:11.696 "num_blocks": 262144,
00:06:11.696 "uuid": "20db4b32-42f3-11ef-9f7f-e9a656123a8b",
00:06:11.696 "assigned_rate_limits": {
00:06:11.696 "rw_ios_per_sec": 0,
00:06:11.696 "rw_mbytes_per_sec": 0,
00:06:11.696 "r_mbytes_per_sec": 0,
00:06:11.696 "w_mbytes_per_sec": 0
00:06:11.696 },
00:06:11.696 "claimed": false,
00:06:11.696 "zoned": false,
00:06:11.696 "supported_io_types": {
00:06:11.696 "read": true,
00:06:11.696 "write": true,
00:06:11.696 "unmap": true,
00:06:11.696 "flush": true,
00:06:11.696 "reset": true,
00:06:11.696 "nvme_admin": false,
00:06:11.696 "nvme_io": false,
00:06:11.696 "nvme_io_md": false,
00:06:11.696 "write_zeroes": true,
00:06:11.696 "zcopy": true,
00:06:11.696 "get_zone_info": false,
00:06:11.696 "zone_management": false,
00:06:11.696 "zone_append": false,
00:06:11.696 "compare": false,
00:06:11.696 "compare_and_write": false,
00:06:11.696 "abort": true,
00:06:11.696 "seek_hole": false,
00:06:11.696 "seek_data": false,
00:06:11.696 "copy": true,
00:06:11.696 "nvme_iov_md": false
00:06:11.696 },
00:06:11.696 "memory_domains": [
00:06:11.696 {
00:06:11.696 "dma_device_id": "system",
00:06:11.696 "dma_device_type": 1
00:06:11.696 },
00:06:11.696 {
00:06:11.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:11.696 "dma_device_type": 2
00:06:11.696 }
00:06:11.696 ],
00:06:11.696 "driver_specific": {}
00:06:11.696 }
00:06:11.696 ]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # return 0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.696 Null_1
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@891 -- # local bdev_name=Null_1
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@893 -- # local i
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:11.696 [
00:06:11.696 {
00:06:11.696 "name": "Null_1",
00:06:11.696 "aliases": [
00:06:11.696 "20e02c96-42f3-11ef-9f7f-e9a656123a8b"
00:06:11.696 ],
00:06:11.696 "product_name": "Null disk",
00:06:11.696 "block_size": 512,
00:06:11.696 "num_blocks": 262144,
00:06:11.696 "uuid": "20e02c96-42f3-11ef-9f7f-e9a656123a8b",
00:06:11.696 "assigned_rate_limits": {
00:06:11.696 "rw_ios_per_sec": 0,
00:06:11.696 "rw_mbytes_per_sec": 0,
00:06:11.696 "r_mbytes_per_sec": 0,
00:06:11.696 "w_mbytes_per_sec": 0
00:06:11.696 },
00:06:11.696 "claimed": false,
00:06:11.696 "zoned": false,
00:06:11.696 "supported_io_types": {
00:06:11.696 "read": true,
00:06:11.696 "write": true,
00:06:11.696 "unmap": false,
00:06:11.696 "flush": false,
00:06:11.696 "reset": true,
00:06:11.696 "nvme_admin": false,
00:06:11.696 "nvme_io": false,
00:06:11.696 "nvme_io_md": false,
00:06:11.696 "write_zeroes": true,
00:06:11.696 "zcopy": false,
00:06:11.696 "get_zone_info": false,
00:06:11.696 "zone_management": false,
00:06:11.696 "zone_append": false,
00:06:11.696 "compare": false,
00:06:11.696 "compare_and_write": false,
00:06:11.696 "abort": true,
00:06:11.696 "seek_hole": false,
00:06:11.696 "seek_data": false,
00:06:11.696 "copy": false,
00:06:11.696 "nvme_iov_md": false
00:06:11.696 },
00:06:11.696 "driver_specific": {}
00:06:11.696 }
00:06:11.696 ]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # return 0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:11.696 21:42:26 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:06:11.955 Running I/O for 60 seconds...
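
Before throttling anything, the suite measures the unthrottled IOPS of Malloc_0 with iostat.py and derives a limit from it. The 148000 that appears below is consistent with quartering the measured 595692 IOPS and rounding down to a whole thousand; a sketch of that derivation and the RPC that applies it (the rounding rule is an assumption about the harness, the command itself is the one in the trace):

    # measured unthrottled rate, from "iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1"
    io_result=595692
    iops_limit=$(( io_result / 4 / 1000 * 1000 ))   # -> 148000
    scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0
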
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 595692.28 2382769.13 0.00 0.00 2551808.00 0.00 0.00 '
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=595692.28
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 595692
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=595692
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=148000
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 148000 -gt 1000 ']'
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 148000 Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 148000 IOPS Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:17.223 21:42:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:17.223 ************************************
00:06:17.223 START TEST bdev_qos_iops
00:06:17.223 ************************************
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1117 -- # run_qos_test 148000 IOPS Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=148000
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:17.223 21:42:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 148070.53 592282.13 0.00 0.00 623216.00 0.00 0.00 '
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=148070.53
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 148070
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=148070
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']'
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=133200
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=162800
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 148070 -lt 133200 ']'
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 148070 -gt 162800 ']'
00:06:23.781
00:06:23.781 real 0m5.358s
00:06:23.781 user 0m0.132s
00:06:23.781 sys 0m0.030s
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:23.781 21:42:37 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:06:23.781 ************************************
00:06:23.781 END TEST bdev_qos_iops
00:06:23.781 ************************************
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1136 -- # return 0
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1
00:06:23.781 21:42:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 384213.02 1536852.07 0.00 0.00 1659904.00 0.00 0.00 '
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1659904.00
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1659904
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1659904
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=162
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 162 -lt 2 ']'
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 162 Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 162 BANDWIDTH Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:29.047 21:42:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:29.047 ************************************
00:06:29.047 START TEST bdev_qos_bw
00:06:29.047 ************************************
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1117 -- # run_qos_test 162 BANDWIDTH Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=162
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1
00:06:29.047 21:42:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1
00:06:34.317 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 41480.55 165922.20 0.00 0.00 177664.00 0.00 0.00 '
00:06:34.317 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:34.317 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:34.317 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:34.317 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=177664.00
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 177664
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=177664
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=165888
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=149299
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=182476
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 177664 -lt 149299 ']'
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 177664 -gt 182476 ']'
00:06:34.318
00:06:34.318 real 0m5.506s
00:06:34.318 user 0m0.123s
00:06:34.318 sys 0m0.039s
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:06:34.318 ************************************
00:06:34.318 END TEST bdev_qos_bw
00:06:34.318 ************************************
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1136 -- # return 0
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:34.318 21:42:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:34.318 ************************************
00:06:34.318 START TEST bdev_qos_ro_bw
00:06:34.318 ************************************
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1117 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:34.318 21:42:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.51 2050.03 0.00 0.00 2216.00 0.00 0.00 '
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2216.00
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2216
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2216
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -lt 1843 ']'
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -gt 2252 ']'
00:06:39.621
00:06:39.621 real 0m5.478s
00:06:39.621 user 0m0.157s
00:06:39.621 sys 0m0.008s
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:39.621 21:42:54 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:06:39.621 ************************************
00:06:39.621 END TEST bdev_qos_ro_bw
00:06:39.621 ************************************
00:06:39.621 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@1136 -- # return 0
00:06:39.621 21:42:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:06:39.621 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:39.621 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:39.879
00:06:39.879 Latency(us)
00:06:39.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:39.879 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:39.879 Malloc_0 : 27.95 203868.21 796.36 0.00 0.00 1244.50 361.19 503316.85
00:06:39.879 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:39.879 Null_1 : 27.99 282741.01 1104.46 0.00 0.00 905.06 71.21 29550.80
00:06:39.879 ===================================================================================================================
00:06:39.879 Total : 486609.22 1900.82 0.00 0.00 1047.17 71.21 503316.85
00:06:39.879 0
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48166
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@942 -- # '[' -z 48166 ']'
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # kill -0 48166
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@947 -- # uname
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # ps -c -o command 48166
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # tail -1
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # process_name=bdevperf
00:06:39.879 killing process with pid 48166
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']'
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48166'
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@961 -- # kill 48166
00:06:39.879 Received shutdown signal, test time was about 28.001877 seconds
00:06:39.879
00:06:39.879 Latency(us)
00:06:39.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:39.879 ===================================================================================================================
00:06:39.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:39.879 21:42:54 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # wait 48166
00:06:40.137 21:42:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT
00:06:40.137
00:06:40.137 real 0m29.449s
00:06:40.137 user 0m30.141s
00:06:40.137 sys 0m0.925s
00:06:40.137 ************************************
00:06:40.137 END TEST bdev_qos
00:06:40.137 ************************************
00:06:40.137 21:42:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:40.137 21:42:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:40.137 21:42:55 blockdev_general -- common/autotest_common.sh@1136 -- # return 0
00:06:40.137 21:42:55 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:06:40.137 21:42:55 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:06:40.137 21:42:55 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:40.137 21:42:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:40.137 ************************************
00:06:40.137 START TEST bdev_qd_sampling
00:06:40.137 ************************************
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1117 -- # qd_sampling_test_suite ''
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48391
00:06:40.137 Process bdev QD sampling period testing pid: 48391
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48391'
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48391
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@823 -- # '[' -z 48391 ']'
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:40.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:40.137 21:42:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:40.137 [2024-07-15 21:42:55.156681] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
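
The sampling test that starts here exercises two RPCs that also work standalone: one enables queue-depth polling on a bdev, the other reads back the counters it maintains. A sketch against a live target, using the names and values from this log:

    # enable queue-depth sampling on Malloc_QD with a period of 10,
    # matching the harness's bdev_set_qd_sampling_period call below
    $ scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    # read back iostat; as the dump below suggests, queue_depth, io_time and
    # weighted_io_time are reported while sampling is enabled
    $ scripts/rpc.py bdev_get_iostat -b Malloc_QD
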
00:06:40.137 [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:40.704 EAL: TSC is not safe to use in SMP mode
00:06:40.704 EAL: TSC is not invariant
00:06:40.704 [2024-07-15 21:42:55.718878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:40.704 [2024-07-15 21:42:55.817882] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:40.704 [2024-07-15 21:42:55.817961] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:40.704 [2024-07-15 21:42:55.821108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.704 [2024-07-15 21:42:55.821097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # return 0
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:41.271 Malloc_QD
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@891 -- # local bdev_name=Malloc_QD
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@893 -- # local i
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:41.271 [
00:06:41.271 {
00:06:41.271 "name": "Malloc_QD",
00:06:41.271 "aliases": [
00:06:41.271 "3275bdfb-42f3-11ef-9f7f-e9a656123a8b"
00:06:41.271 ],
00:06:41.271 "product_name": "Malloc disk",
00:06:41.271 "block_size": 512,
00:06:41.271 "num_blocks": 262144,
00:06:41.271 "uuid": "3275bdfb-42f3-11ef-9f7f-e9a656123a8b",
00:06:41.271 "assigned_rate_limits": {
00:06:41.271 "rw_ios_per_sec": 0,
00:06:41.271 "rw_mbytes_per_sec": 0,
00:06:41.271 "r_mbytes_per_sec": 0,
00:06:41.271 "w_mbytes_per_sec": 0
00:06:41.271 },
00:06:41.271 "claimed": false,
00:06:41.271 "zoned": false,
00:06:41.271 "supported_io_types": {
00:06:41.271 "read": true,
00:06:41.271 "write": true,
00:06:41.271 "unmap": true,
00:06:41.271 "flush": true,
00:06:41.271 "reset": true,
00:06:41.271 "nvme_admin": false,
00:06:41.271 "nvme_io": false,
00:06:41.271 "nvme_io_md": false,
00:06:41.271 "write_zeroes": true,
00:06:41.271 "zcopy": true,
00:06:41.271 "get_zone_info": false,
00:06:41.271 "zone_management": false,
00:06:41.271 "zone_append": false,
00:06:41.271 "compare": false,
00:06:41.271 "compare_and_write": false,
00:06:41.271 "abort": true,
00:06:41.271 "seek_hole": false,
00:06:41.271 "seek_data": false,
00:06:41.271 "copy": true,
00:06:41.271 "nvme_iov_md": false
00:06:41.271 },
00:06:41.271 "memory_domains": [
00:06:41.271 {
00:06:41.271 "dma_device_id": "system",
00:06:41.271 "dma_device_type": 1
00:06:41.271 },
00:06:41.271 {
00:06:41.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:41.271 "dma_device_type": 2
00:06:41.271 }
00:06:41.271 ],
00:06:41.271 "driver_specific": {}
00:06:41.271 }
00:06:41.271 ]
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # return 0
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2
00:06:41.271 21:42:56 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:41.271 Running I/O for 5 seconds...
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:43.198 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{
00:06:43.457 "tick_rate": 2199998373,
00:06:43.457 "ticks": 761134317580,
00:06:43.457 "bdevs": [
00:06:43.457 {
00:06:43.457 "name": "Malloc_QD",
00:06:43.457 "bytes_read": 12123673088,
00:06:43.457 "num_read_ops": 2959875,
00:06:43.457 "bytes_written": 0,
00:06:43.457 "num_write_ops": 0,
00:06:43.457 "bytes_unmapped": 0,
00:06:43.457 "num_unmap_ops": 0,
00:06:43.457 "bytes_copied": 0,
00:06:43.457 "num_copy_ops": 0,
00:06:43.457 "read_latency_ticks": 2206502375946,
00:06:43.457 "max_read_latency_ticks": 1231233,
00:06:43.457 "min_read_latency_ticks": 40942,
00:06:43.457 "write_latency_ticks": 0,
00:06:43.457 "max_write_latency_ticks": 0,
00:06:43.457 "min_write_latency_ticks": 0,
00:06:43.457 "unmap_latency_ticks": 0,
00:06:43.457 "max_unmap_latency_ticks": 0,
00:06:43.457 "min_unmap_latency_ticks": 0,
00:06:43.457 "copy_latency_ticks": 0,
00:06:43.457 "max_copy_latency_ticks": 0,
00:06:43.457 "min_copy_latency_ticks": 0,
00:06:43.457 "io_error": {},
00:06:43.457 "queue_depth_polling_period": 10,
00:06:43.457 "queue_depth": 512,
00:06:43.457 "io_time": 240,
00:06:43.457 "weighted_io_time": 122880
00:06:43.457 }
00:06:43.457 ]
00:06:43.457 }'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:43.457
00:06:43.457 Latency(us)
00:06:43.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:43.457 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:06:43.457 Malloc_QD : 1.99 752853.96 2940.84 0.00 0.00 339.77 57.25 448.70
00:06:43.457 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:43.457 Malloc_QD : 1.99 757082.90 2957.36 0.00 0.00 337.87 55.16 562.27
00:06:43.457 ===================================================================================================================
00:06:43.457 Total : 1509936.85 5898.19 0.00 0.00 338.82 55.16 562.27
00:06:43.457 0
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48391
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@942 -- # '[' -z 48391 ']'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # kill -0 48391
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@947 -- # uname
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # ps -c -o command 48391
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # tail -1
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # process_name=bdevperf
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']'
00:06:43.457 killing process with pid 48391
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48391'
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@961 -- # kill 48391
00:06:43.457 Received shutdown signal, test time was about 2.019844 seconds
00:06:43.457
00:06:43.457 Latency(us)
00:06:43.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:43.457 ===================================================================================================================
00:06:43.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # wait 48391
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT
00:06:43.457
00:06:43.457 real 0m3.459s
00:06:43.457 user 0m6.215s
00:06:43.457 sys 0m0.667s
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:43.457 21:42:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:43.457 ************************************
00:06:43.457 END TEST bdev_qd_sampling
00:06:43.457 ************************************
00:06:43.457 21:42:58 blockdev_general -- common/autotest_common.sh@1136 -- # return 0
00:06:43.457 21:42:58 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite ''
00:06:43.457 21:42:58 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:06:43.457 21:42:58 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:43.457 21:42:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:43.716 ************************************
00:06:43.716 START TEST bdev_error
00:06:43.716 ************************************
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@1117 -- # error_test_suite ''
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48434
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48434'
00:06:43.716 Process error testing pid: 48434
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48434
00:06:43.716 21:42:58 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@823 -- # '[' -z 48434 ']'
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:43.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:43.716 21:42:58 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:43.716 [2024-07-15 21:42:58.660279] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:06:43.716 [2024-07-15 21:42:58.660548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:44.282 EAL: TSC is not safe to use in SMP mode 00:06:44.282 EAL: TSC is not invariant 00:06:44.282 [2024-07-15 21:42:59.178512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.282 [2024-07-15 21:42:59.279017] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:44.282 [2024-07-15 21:42:59.281503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # return 0 00:06:44.849 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 Dev_1 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@891 -- # local bdev_name=Dev_1 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@893 -- # local i 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 [ 00:06:44.849 { 00:06:44.849 "name": "Dev_1", 00:06:44.849 "aliases": [ 00:06:44.849 "34882b84-42f3-11ef-9f7f-e9a656123a8b" 00:06:44.849 ], 00:06:44.849 "product_name": "Malloc disk", 00:06:44.849 "block_size": 512, 00:06:44.849 "num_blocks": 262144, 00:06:44.849 "uuid": "34882b84-42f3-11ef-9f7f-e9a656123a8b", 00:06:44.849 "assigned_rate_limits": { 00:06:44.849 "rw_ios_per_sec": 0, 00:06:44.849 "rw_mbytes_per_sec": 0, 00:06:44.849 "r_mbytes_per_sec": 0, 00:06:44.849 "w_mbytes_per_sec": 0 00:06:44.849 }, 00:06:44.849 "claimed": false, 00:06:44.849 "zoned": false, 00:06:44.849 "supported_io_types": { 00:06:44.849 "read": true, 00:06:44.849 "write": true, 00:06:44.849 "unmap": true, 00:06:44.849 "flush": true, 00:06:44.849 "reset": true, 00:06:44.849 "nvme_admin": false, 00:06:44.849 "nvme_io": false, 00:06:44.849 "nvme_io_md": false, 00:06:44.849 "write_zeroes": true, 00:06:44.849 "zcopy": true, 
00:06:44.849 "get_zone_info": false, 00:06:44.849 "zone_management": false, 00:06:44.849 "zone_append": false, 00:06:44.849 "compare": false, 00:06:44.849 "compare_and_write": false, 00:06:44.849 "abort": true, 00:06:44.849 "seek_hole": false, 00:06:44.849 "seek_data": false, 00:06:44.849 "copy": true, 00:06:44.849 "nvme_iov_md": false 00:06:44.849 }, 00:06:44.849 "memory_domains": [ 00:06:44.849 { 00:06:44.849 "dma_device_id": "system", 00:06:44.849 "dma_device_type": 1 00:06:44.849 }, 00:06:44.849 { 00:06:44.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.849 "dma_device_type": 2 00:06:44.849 } 00:06:44.849 ], 00:06:44.849 "driver_specific": {} 00:06:44.849 } 00:06:44.849 ] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # return 0 00:06:44.849 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 true 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 Dev_2 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@891 -- # local bdev_name=Dev_2 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@893 -- # local i 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.849 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.849 [ 00:06:44.849 { 00:06:44.849 "name": "Dev_2", 00:06:44.849 "aliases": [ 00:06:44.849 "348e4569-42f3-11ef-9f7f-e9a656123a8b" 00:06:44.849 ], 00:06:44.849 "product_name": "Malloc disk", 00:06:44.850 "block_size": 512, 00:06:44.850 "num_blocks": 262144, 00:06:44.850 "uuid": "348e4569-42f3-11ef-9f7f-e9a656123a8b", 00:06:44.850 "assigned_rate_limits": { 00:06:44.850 "rw_ios_per_sec": 0, 00:06:44.850 "rw_mbytes_per_sec": 0, 
00:06:44.850 "r_mbytes_per_sec": 0, 00:06:44.850 "w_mbytes_per_sec": 0 00:06:44.850 }, 00:06:44.850 "claimed": false, 00:06:44.850 "zoned": false, 00:06:44.850 "supported_io_types": { 00:06:44.850 "read": true, 00:06:44.850 "write": true, 00:06:44.850 "unmap": true, 00:06:44.850 "flush": true, 00:06:44.850 "reset": true, 00:06:44.850 "nvme_admin": false, 00:06:44.850 "nvme_io": false, 00:06:44.850 "nvme_io_md": false, 00:06:44.850 "write_zeroes": true, 00:06:44.850 "zcopy": true, 00:06:44.850 "get_zone_info": false, 00:06:44.850 "zone_management": false, 00:06:44.850 "zone_append": false, 00:06:44.850 "compare": false, 00:06:44.850 "compare_and_write": false, 00:06:44.850 "abort": true, 00:06:44.850 "seek_hole": false, 00:06:44.850 "seek_data": false, 00:06:44.850 "copy": true, 00:06:44.850 "nvme_iov_md": false 00:06:44.850 }, 00:06:44.850 "memory_domains": [ 00:06:44.850 { 00:06:44.850 "dma_device_id": "system", 00:06:44.850 "dma_device_type": 1 00:06:44.850 }, 00:06:44.850 { 00:06:44.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.850 "dma_device_type": 2 00:06:44.850 } 00:06:44.850 ], 00:06:44.850 "driver_specific": {} 00:06:44.850 } 00:06:44.850 ] 00:06:44.850 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.850 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # return 0 00:06:44.850 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:44.850 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:44.850 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:44.850 21:42:59 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:44.850 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:06:44.850 21:42:59 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:44.850 Running I/O for 5 seconds... 00:06:45.785 21:43:00 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48434 00:06:45.785 Process is existed as continue on error is set. Pid: 48434 00:06:45.785 21:43:00 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48434' 00:06:45.785 21:43:00 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:45.785 21:43:00 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:45.785 21:43:00 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:45.785 21:43:00 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:06:45.785 Timeout while waiting for response: 00:06:45.785 00:06:45.785 00:06:50.035 00:06:50.035 Latency(us) 00:06:50.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.035 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:50.035 EE_Dev_1 : 0.97 299773.25 1170.99 5.16 0.00 53.14 24.20 123.81 00:06:50.035 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:50.035 Dev_2 : 5.00 670504.32 2619.16 0.00 0.00 23.64 5.41 24307.92 00:06:50.035 =================================================================================================================== 00:06:50.035 Total : 970277.57 3790.15 5.16 0.00 25.99 5.41 24307.92 00:06:50.967 21:43:05 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48434 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@942 -- # '[' -z 48434 ']' 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # kill -0 48434 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@947 -- # uname 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # ps -c -o command 48434 00:06:50.967 21:43:05 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # tail -1 00:06:50.967 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:06:50.967 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:06:50.967 killing process with pid 48434 00:06:50.967 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48434' 00:06:50.967 Received shutdown signal, test time was about 5.000000 seconds 00:06:50.967 00:06:50.967 Latency(us) 00:06:50.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.967 =================================================================================================================== 00:06:50.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:50.967 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@961 -- # kill 48434 00:06:50.967 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # wait 48434 00:06:51.225 21:43:06 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48474 00:06:51.225 21:43:06 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:51.225 Process error testing pid: 48474 00:06:51.225 21:43:06 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48474' 00:06:51.225 21:43:06 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48474 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@823 -- # '[' -z 48474 ']' 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:51.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:51.225 21:43:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:51.225 [2024-07-15 21:43:06.211496] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:51.225 [2024-07-15 21:43:06.211760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:51.790 EAL: TSC is not safe to use in SMP mode 00:06:51.790 EAL: TSC is not invariant 00:06:51.790 [2024-07-15 21:43:06.758268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.790 [2024-07-15 21:43:06.846428] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:51.790 [2024-07-15 21:43:06.848510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # return 0 00:06:52.356 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.356 Dev_1 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.356 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@891 -- # local bdev_name=Dev_1 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@893 -- # local i 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.356 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.356 [ 00:06:52.356 { 00:06:52.356 "name": "Dev_1", 00:06:52.356 "aliases": [ 00:06:52.356 "3900f60f-42f3-11ef-9f7f-e9a656123a8b" 00:06:52.356 ], 00:06:52.356 "product_name": "Malloc disk", 00:06:52.356 "block_size": 512, 00:06:52.356 "num_blocks": 262144, 00:06:52.356 "uuid": "3900f60f-42f3-11ef-9f7f-e9a656123a8b", 00:06:52.356 "assigned_rate_limits": { 00:06:52.356 "rw_ios_per_sec": 0, 00:06:52.356 "rw_mbytes_per_sec": 0, 00:06:52.356 "r_mbytes_per_sec": 0, 00:06:52.356 "w_mbytes_per_sec": 0 00:06:52.356 }, 00:06:52.356 "claimed": false, 00:06:52.356 "zoned": false, 00:06:52.356 "supported_io_types": { 00:06:52.356 "read": true, 00:06:52.356 "write": true, 00:06:52.356 "unmap": true, 00:06:52.356 "flush": true, 00:06:52.356 "reset": true, 00:06:52.356 "nvme_admin": false, 00:06:52.356 "nvme_io": false, 00:06:52.356 "nvme_io_md": false, 00:06:52.356 "write_zeroes": true, 00:06:52.356 "zcopy": true, 00:06:52.356 "get_zone_info": false, 00:06:52.356 "zone_management": false, 00:06:52.356 "zone_append": false, 00:06:52.356 "compare": false, 00:06:52.356 "compare_and_write": false, 00:06:52.356 "abort": true, 00:06:52.356 "seek_hole": false, 00:06:52.356 "seek_data": false, 00:06:52.356 "copy": true, 00:06:52.356 "nvme_iov_md": false 00:06:52.356 }, 00:06:52.356 "memory_domains": [ 00:06:52.356 { 00:06:52.356 "dma_device_id": "system", 00:06:52.356 "dma_device_type": 1 00:06:52.356 }, 00:06:52.356 { 00:06:52.356 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.356 "dma_device_type": 2 00:06:52.357 } 00:06:52.357 ], 00:06:52.357 "driver_specific": {} 00:06:52.357 } 00:06:52.357 ] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # return 0 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.357 true 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.357 Dev_2 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@891 -- # local bdev_name=Dev_2 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@893 -- # local i 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.357 [ 00:06:52.357 { 00:06:52.357 "name": "Dev_2", 00:06:52.357 "aliases": [ 00:06:52.357 "39070f76-42f3-11ef-9f7f-e9a656123a8b" 00:06:52.357 ], 00:06:52.357 "product_name": "Malloc disk", 00:06:52.357 "block_size": 512, 00:06:52.357 "num_blocks": 262144, 00:06:52.357 "uuid": "39070f76-42f3-11ef-9f7f-e9a656123a8b", 00:06:52.357 "assigned_rate_limits": { 00:06:52.357 "rw_ios_per_sec": 0, 00:06:52.357 "rw_mbytes_per_sec": 0, 00:06:52.357 "r_mbytes_per_sec": 0, 00:06:52.357 "w_mbytes_per_sec": 0 00:06:52.357 }, 00:06:52.357 "claimed": false, 00:06:52.357 "zoned": false, 00:06:52.357 "supported_io_types": { 00:06:52.357 "read": true, 00:06:52.357 "write": true, 00:06:52.357 "unmap": true, 00:06:52.357 "flush": true, 00:06:52.357 "reset": true, 00:06:52.357 "nvme_admin": false, 00:06:52.357 "nvme_io": false, 00:06:52.357 "nvme_io_md": false, 00:06:52.357 "write_zeroes": true, 00:06:52.357 "zcopy": true, 00:06:52.357 "get_zone_info": false, 
00:06:52.357 "zone_management": false, 00:06:52.357 "zone_append": false, 00:06:52.357 "compare": false, 00:06:52.357 "compare_and_write": false, 00:06:52.357 "abort": true, 00:06:52.357 "seek_hole": false, 00:06:52.357 "seek_data": false, 00:06:52.357 "copy": true, 00:06:52.357 "nvme_iov_md": false 00:06:52.357 }, 00:06:52.357 "memory_domains": [ 00:06:52.357 { 00:06:52.357 "dma_device_id": "system", 00:06:52.357 "dma_device_type": 1 00:06:52.357 }, 00:06:52.357 { 00:06:52.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.357 "dma_device_type": 2 00:06:52.357 } 00:06:52.357 ], 00:06:52.357 "driver_specific": {} 00:06:52.357 } 00:06:52.357 ] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # return 0 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48474 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # local es=0 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # valid_exec_arg wait 48474 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@630 -- # local arg=wait 00:06:52.357 21:43:07 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@634 -- # type -t wait 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:52.357 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@645 -- # wait 48474 00:06:52.357 Running I/O for 5 seconds... 
00:06:52.357 task offset: 147192 on job bdev=EE_Dev_1 fails 00:06:52.357 00:06:52.357 Latency(us) 00:06:52.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.357 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:52.357 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:52.357 EE_Dev_1 : 0.00 170542.64 666.18 38759.69 0.00 62.55 24.90 118.23 00:06:52.357 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:52.357 Dev_2 : 0.00 203821.66 796.18 0.00 0.00 36.59 23.74 55.16 00:06:52.357 =================================================================================================================== 00:06:52.357 Total : 374364.29 1462.36 38759.69 0.00 48.47 23.74 118.23 00:06:52.357 [2024-07-15 21:43:07.468853] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.357 request: 00:06:52.357 { 00:06:52.357 "method": "perform_tests", 00:06:52.357 "req_id": 1 00:06:52.357 } 00:06:52.357 Got JSON-RPC error response 00:06:52.357 response: 00:06:52.357 { 00:06:52.357 "code": -32603, 00:06:52.357 "message": "bdevperf failed with error Operation not permitted" 00:06:52.357 } 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@645 -- # es=255 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # es=127 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # case "$es" in 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=1 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:52.615 00:06:52.615 real 0m9.045s 00:06:52.615 user 0m9.126s 00:06:52.615 sys 0m1.309s 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:52.615 21:43:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 ************************************ 00:06:52.615 END TEST bdev_error 00:06:52.615 ************************************ 00:06:52.615 21:43:07 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:06:52.615 21:43:07 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:06:52.615 21:43:07 blockdev_general -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:52.615 21:43:07 blockdev_general -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:52.615 21:43:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 ************************************ 00:06:52.615 START TEST bdev_stat 00:06:52.615 ************************************ 00:06:52.615 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@1117 -- # stat_test_suite '' 00:06:52.615 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48501 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48501' 00:06:52.616 Process Bdev IO statistics testing pid: 48501 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- 
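The failing run above exercises SPDK's error-injection bdev entirely through RPCs that appear verbatim in the trace. As a minimal sketch — assuming a bdevperf target started with the same flags is already listening on the default /var/tmp/spdk.sock, as in this run — the same failure can be reproduced by hand:

    # 128 MiB malloc bdev with 512-byte blocks (262144 blocks, matching the dump above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
    # wrap the base bdev in an error bdev; the wrapper is exposed as EE_Dev_1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Dev_1
    # fail the next 5 I/Os of any type submitted to EE_Dev_1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
    # run the workload; without continue-on-error the app stops and perform_tests
    # returns the -32603 "Operation not permitted" JSON-RPC error shown above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests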
# trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48501 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@823 -- # '[' -z 48501 ']' 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:52.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:52.616 21:43:07 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:52.616 [2024-07-15 21:43:07.751913] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:52.616 [2024-07-15 21:43:07.752148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:53.182 EAL: TSC is not safe to use in SMP mode 00:06:53.182 EAL: TSC is not invariant 00:06:53.182 [2024-07-15 21:43:08.277106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.503 [2024-07-15 21:43:08.377441] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:53.503 [2024-07-15 21:43:08.377530] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:53.503 [2024-07-15 21:43:08.380764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.503 [2024-07-15 21:43:08.380752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # return 0 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:53.762 Malloc_STAT 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@891 -- # local bdev_name=Malloc_STAT 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@893 -- # local i 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # rpc_cmd bdev_wait_for_examine 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:53.762 
21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:53.762 [ 00:06:53.762 { 00:06:53.762 "name": "Malloc_STAT", 00:06:53.762 "aliases": [ 00:06:53.762 "39f8defe-42f3-11ef-9f7f-e9a656123a8b" 00:06:53.762 ], 00:06:53.762 "product_name": "Malloc disk", 00:06:53.762 "block_size": 512, 00:06:53.762 "num_blocks": 262144, 00:06:53.762 "uuid": "39f8defe-42f3-11ef-9f7f-e9a656123a8b", 00:06:53.762 "assigned_rate_limits": { 00:06:53.762 "rw_ios_per_sec": 0, 00:06:53.762 "rw_mbytes_per_sec": 0, 00:06:53.762 "r_mbytes_per_sec": 0, 00:06:53.762 "w_mbytes_per_sec": 0 00:06:53.762 }, 00:06:53.762 "claimed": false, 00:06:53.762 "zoned": false, 00:06:53.762 "supported_io_types": { 00:06:53.762 "read": true, 00:06:53.762 "write": true, 00:06:53.762 "unmap": true, 00:06:53.762 "flush": true, 00:06:53.762 "reset": true, 00:06:53.762 "nvme_admin": false, 00:06:53.762 "nvme_io": false, 00:06:53.762 "nvme_io_md": false, 00:06:53.762 "write_zeroes": true, 00:06:53.762 "zcopy": true, 00:06:53.762 "get_zone_info": false, 00:06:53.762 "zone_management": false, 00:06:53.762 "zone_append": false, 00:06:53.762 "compare": false, 00:06:53.762 "compare_and_write": false, 00:06:53.762 "abort": true, 00:06:53.762 "seek_hole": false, 00:06:53.762 "seek_data": false, 00:06:53.762 "copy": true, 00:06:53.762 "nvme_iov_md": false 00:06:53.762 }, 00:06:53.762 "memory_domains": [ 00:06:53.762 { 00:06:53.762 "dma_device_id": "system", 00:06:53.762 "dma_device_type": 1 00:06:53.762 }, 00:06:53.762 { 00:06:53.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.762 "dma_device_type": 2 00:06:53.762 } 00:06:53.762 ], 00:06:53.762 "driver_specific": {} 00:06:53.762 } 00:06:53.762 ] 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # return 0 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:06:53.762 21:43:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:54.021 Running I/O for 10 seconds... 
00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:06:55.921 "tick_rate": 2199998373, 00:06:55.921 "ticks": 788801096176, 00:06:55.921 "bdevs": [ 00:06:55.921 { 00:06:55.921 "name": "Malloc_STAT", 00:06:55.921 "bytes_read": 11882500608, 00:06:55.921 "num_read_ops": 2900995, 00:06:55.921 "bytes_written": 0, 00:06:55.921 "num_write_ops": 0, 00:06:55.921 "bytes_unmapped": 0, 00:06:55.921 "num_unmap_ops": 0, 00:06:55.921 "bytes_copied": 0, 00:06:55.921 "num_copy_ops": 0, 00:06:55.921 "read_latency_ticks": 2152740013221, 00:06:55.921 "max_read_latency_ticks": 1168021, 00:06:55.921 "min_read_latency_ticks": 41184, 00:06:55.921 "write_latency_ticks": 0, 00:06:55.921 "max_write_latency_ticks": 0, 00:06:55.921 "min_write_latency_ticks": 0, 00:06:55.921 "unmap_latency_ticks": 0, 00:06:55.921 "max_unmap_latency_ticks": 0, 00:06:55.921 "min_unmap_latency_ticks": 0, 00:06:55.921 "copy_latency_ticks": 0, 00:06:55.921 "max_copy_latency_ticks": 0, 00:06:55.921 "min_copy_latency_ticks": 0, 00:06:55.921 "io_error": {} 00:06:55.921 } 00:06:55.921 ] 00:06:55.921 }' 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=2900995 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:06:55.921 "tick_rate": 2199998373, 00:06:55.921 "ticks": 788855231815, 00:06:55.921 "name": "Malloc_STAT", 00:06:55.921 "channels": [ 00:06:55.921 { 00:06:55.921 "thread_id": 2, 00:06:55.921 "bytes_read": 6041894912, 00:06:55.921 "num_read_ops": 1475072, 00:06:55.921 "bytes_written": 0, 00:06:55.921 "num_write_ops": 0, 00:06:55.921 "bytes_unmapped": 0, 00:06:55.921 "num_unmap_ops": 0, 
00:06:55.921 "bytes_copied": 0, 00:06:55.921 "num_copy_ops": 0, 00:06:55.921 "read_latency_ticks": 1090129291109, 00:06:55.921 "max_read_latency_ticks": 1131504, 00:06:55.921 "min_read_latency_ticks": 690900, 00:06:55.921 "write_latency_ticks": 0, 00:06:55.921 "max_write_latency_ticks": 0, 00:06:55.921 "min_write_latency_ticks": 0, 00:06:55.921 "unmap_latency_ticks": 0, 00:06:55.921 "max_unmap_latency_ticks": 0, 00:06:55.921 "min_unmap_latency_ticks": 0, 00:06:55.921 "copy_latency_ticks": 0, 00:06:55.921 "max_copy_latency_ticks": 0, 00:06:55.921 "min_copy_latency_ticks": 0 00:06:55.921 }, 00:06:55.921 { 00:06:55.921 "thread_id": 3, 00:06:55.921 "bytes_read": 5991563264, 00:06:55.921 "num_read_ops": 1462784, 00:06:55.921 "bytes_written": 0, 00:06:55.921 "num_write_ops": 0, 00:06:55.921 "bytes_unmapped": 0, 00:06:55.921 "num_unmap_ops": 0, 00:06:55.921 "bytes_copied": 0, 00:06:55.921 "num_copy_ops": 0, 00:06:55.921 "read_latency_ticks": 1090338272109, 00:06:55.921 "max_read_latency_ticks": 1168021, 00:06:55.921 "min_read_latency_ticks": 696655, 00:06:55.921 "write_latency_ticks": 0, 00:06:55.921 "max_write_latency_ticks": 0, 00:06:55.921 "min_write_latency_ticks": 0, 00:06:55.921 "unmap_latency_ticks": 0, 00:06:55.921 "max_unmap_latency_ticks": 0, 00:06:55.921 "min_unmap_latency_ticks": 0, 00:06:55.921 "copy_latency_ticks": 0, 00:06:55.921 "max_copy_latency_ticks": 0, 00:06:55.921 "min_copy_latency_ticks": 0 00:06:55.921 } 00:06:55.921 ] 00:06:55.921 }' 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1475072 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1475072 00:06:55.921 21:43:10 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1462784 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=2937856 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:06:55.921 "tick_rate": 2199998373, 00:06:55.921 "ticks": 788929565704, 00:06:55.921 "bdevs": [ 00:06:55.921 { 00:06:55.921 "name": "Malloc_STAT", 00:06:55.921 "bytes_read": 12230627840, 00:06:55.921 "num_read_ops": 2985987, 00:06:55.921 "bytes_written": 0, 00:06:55.921 "num_write_ops": 0, 00:06:55.921 "bytes_unmapped": 0, 00:06:55.921 "num_unmap_ops": 0, 00:06:55.921 "bytes_copied": 0, 00:06:55.921 "num_copy_ops": 0, 00:06:55.921 "read_latency_ticks": 2218389845784, 00:06:55.921 "max_read_latency_ticks": 1182214, 00:06:55.921 "min_read_latency_ticks": 41184, 00:06:55.921 "write_latency_ticks": 0, 00:06:55.921 "max_write_latency_ticks": 0, 00:06:55.921 "min_write_latency_ticks": 0, 00:06:55.921 "unmap_latency_ticks": 0, 00:06:55.921 "max_unmap_latency_ticks": 0, 00:06:55.921 "min_unmap_latency_ticks": 0, 00:06:55.921 "copy_latency_ticks": 0, 00:06:55.921 "max_copy_latency_ticks": 0, 00:06:55.921 
"min_copy_latency_ticks": 0, 00:06:55.921 "io_error": {} 00:06:55.921 } 00:06:55.921 ] 00:06:55.921 }' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=2985987 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 2937856 -lt 2900995 ']' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 2937856 -gt 2985987 ']' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:55.921 00:06:55.921 Latency(us) 00:06:55.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.921 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:55.921 Malloc_STAT : 1.99 760829.95 2971.99 0.00 0.00 336.20 57.25 517.59 00:06:55.921 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:55.921 Malloc_STAT : 1.99 754050.83 2945.51 0.00 0.00 339.22 62.84 539.93 00:06:55.921 =================================================================================================================== 00:06:55.921 Total : 1514880.78 5917.50 0.00 0.00 337.71 57.25 539.93 00:06:55.921 0 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48501 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@942 -- # '[' -z 48501 ']' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # kill -0 48501 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@947 -- # uname 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # ps -c -o command 48501 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # tail -1 00:06:55.921 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:06:55.922 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:06:55.922 killing process with pid 48501 00:06:55.922 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48501' 00:06:55.922 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@961 -- # kill 48501 00:06:55.922 Received shutdown signal, test time was about 2.024562 seconds 00:06:55.922 00:06:55.922 Latency(us) 00:06:55.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.922 =================================================================================================================== 00:06:55.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:55.922 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # wait 48501 00:06:56.180 21:43:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:06:56.180 00:06:56.180 real 0m3.500s 00:06:56.180 user 0m6.422s 00:06:56.180 sys 0m0.632s 00:06:56.180 21:43:11 blockdev_general.bdev_stat -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:06:56.180 ************************************ 00:06:56.180 21:43:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 END TEST bdev_stat 00:06:56.180 ************************************ 00:06:56.180 21:43:11 blockdev_general -- common/autotest_common.sh@1136 -- # return 0 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:06:56.180 21:43:11 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:06:56.180 00:06:56.180 real 1m34.428s 00:06:56.180 user 4m31.234s 00:06:56.180 sys 0m27.019s 00:06:56.180 21:43:11 blockdev_general -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:56.180 21:43:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 ************************************ 00:06:56.180 END TEST blockdev_general 00:06:56.180 ************************************ 00:06:56.180 21:43:11 -- common/autotest_common.sh@1136 -- # return 0 00:06:56.180 21:43:11 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:56.180 21:43:11 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:56.180 21:43:11 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:56.180 21:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:56.180 ************************************ 00:06:56.180 START TEST bdev_raid 00:06:56.180 ************************************ 00:06:56.180 21:43:11 bdev_raid -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:56.439 * Looking for test storage... 
00:06:56.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.439 21:43:11 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:06:56.439 21:43:11 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:06:56.439 21:43:11 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:56.439 21:43:11 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:56.439 21:43:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.439 ************************************ 00:06:56.439 START TEST raid0_resize_test 00:06:56.439 ************************************ 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1117 -- # raid0_resize_test 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48606 00:06:56.439 Process raid pid: 48606 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48606' 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48606 /var/tmp/spdk-raid.sock 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@823 -- # '[' -z 48606 ']' 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:56.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:56.439 21:43:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.439 [2024-07-15 21:43:11.507842] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:56.439 [2024-07-15 21:43:11.508078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:57.028 EAL: TSC is not safe to use in SMP mode 00:06:57.028 EAL: TSC is not invariant 00:06:57.028 [2024-07-15 21:43:12.052079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.028 [2024-07-15 21:43:12.161239] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:57.028 [2024-07-15 21:43:12.164310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.028 [2024-07-15 21:43:12.165685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.028 [2024-07-15 21:43:12.165710] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.594 21:43:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:57.594 21:43:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # return 0 00:06:57.594 21:43:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:57.858 Base_1 00:06:57.858 21:43:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:58.152 Base_2 00:06:58.152 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:58.152 [2024-07-15 21:43:13.330243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:58.152 [2024-07-15 21:43:13.330821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:58.152 [2024-07-15 21:43:13.330846] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x164423a34a00 00:06:58.152 [2024-07-15 21:43:13.330851] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.153 [2024-07-15 21:43:13.330885] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x164423a97e20 00:06:58.153 [2024-07-15 21:43:13.330951] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x164423a34a00 00:06:58.153 [2024-07-15 21:43:13.330956] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x164423a34a00 00:06:58.153 [2024-07-15 21:43:13.330991] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.409 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:58.409 [2024-07-15 21:43:13.594232] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.409 [2024-07-15 21:43:13.594258] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:58.667 true 00:06:58.667 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:06:58.667 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:58.924 [2024-07-15 21:43:13.878262] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.924 
21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:06:58.924 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:06:58.924 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:06:58.924 21:43:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:59.182 [2024-07-15 21:43:14.118239] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.182 [2024-07-15 21:43:14.118267] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:59.182 [2024-07-15 21:43:14.118298] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:59.182 true 00:06:59.182 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:59.182 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:06:59.182 [2024-07-15 21:43:14.370268] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.440 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48606 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@942 -- # '[' -z 48606 ']' 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # kill -0 48606 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@947 -- # uname 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # ps -c -o command 48606 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # tail -1 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:06:59.441 killing process with pid 48606 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48606' 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@961 -- # kill 48606 00:06:59.441 [2024-07-15 21:43:14.399755] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # wait 48606 00:06:59.441 [2024-07-15 21:43:14.399782] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.441 [2024-07-15 21:43:14.399793] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.441 [2024-07-15 21:43:14.399797] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x164423a34a00 name Raid, state offline 00:06:59.441 [2024-07-15 21:43:14.399925] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.441 
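The raid0_resize_test that just ran boils down to a short RPC sequence against the bdev_svc app on /var/tmp/spdk-raid.sock. A minimal sketch of that sequence, reconstructed from the xtrace above rather than copied from the test script (names, sizes, and the jq checks all match the logged values):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # Two 32 MiB null bdevs with 512-byte blocks serve as the base devices.
  $rpc bdev_null_create Base_1 32 512
  $rpc bdev_null_create Base_2 32 512
  # Stripe them into a raid0 volume (64 KiB strip): 2 x 65536 = 131072 blocks.
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  # Growing only one base bdev must not change the raid size...
  $rpc bdev_null_resize Base_1 64
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072
  # ...but once both bases are 64 MiB the raid doubles to 262144 blocks.
  $rpc bdev_null_resize Base_2 64
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144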
21:43:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:06:59.441 00:06:59.441 real 0m3.075s 00:06:59.441 user 0m4.587s 00:06:59.441 sys 0m0.805s 00:06:59.441 ************************************ 00:06:59.441 END TEST raid0_resize_test 00:06:59.441 ************************************ 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:59.441 21:43:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.441 21:43:14 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:06:59.441 21:43:14 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:06:59.441 21:43:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:06:59.441 21:43:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:59.441 21:43:14 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:06:59.441 21:43:14 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:59.441 21:43:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.441 ************************************ 00:06:59.441 START TEST raid_state_function_test 00:06:59.441 ************************************ 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid0 2 false 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48656 00:06:59.441 Process raid pid: 48656 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48656' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48656 /var/tmp/spdk-raid.sock 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 48656 ']' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:59.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:59.441 21:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.699 [2024-07-15 21:43:14.632357] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:59.699 [2024-07-15 21:43:14.632606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:59.957 EAL: TSC is not safe to use in SMP mode 00:06:59.957 EAL: TSC is not invariant 00:07:00.216 [2024-07-15 21:43:15.148334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.216 [2024-07-15 21:43:15.229525] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
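For readers following the xtrace, the positional arguments in 'run_test raid_state_function_test raid_state_function_test raid0 2 false' map directly onto the locals dumped above; a sketch of that parameterization with the values used in this run:

  # raid_state_function_test <raid_level> <num_base_bdevs> <superblock>
  raid_level=raid0    # becomes -r raid0 on bdev_raid_create
  num_base_bdevs=2    # BaseBdev1 and BaseBdev2
  superblock=false    # no -s flag, so the info dumps show "superblock": false
  strip_size=64       # derived: raid0 is striped, hence '-z 64'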
00:07:00.216 [2024-07-15 21:43:15.231665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.216 [2024-07-15 21:43:15.232489] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.216 [2024-07-15 21:43:15.232505] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:00.797 [2024-07-15 21:43:15.912925] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.797 [2024-07-15 21:43:15.913001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.797 [2024-07-15 21:43:15.913007] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.797 [2024-07-15 21:43:15.913016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:00.797 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.798 21:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.067 21:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:01.067 "name": "Existed_Raid", 00:07:01.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.067 "strip_size_kb": 64, 00:07:01.067 "state": "configuring", 00:07:01.067 "raid_level": "raid0", 00:07:01.067 "superblock": false, 00:07:01.067 "num_base_bdevs": 2, 00:07:01.067 "num_base_bdevs_discovered": 0, 00:07:01.067 "num_base_bdevs_operational": 2, 00:07:01.067 "base_bdevs_list": [ 00:07:01.067 { 00:07:01.067 "name": "BaseBdev1", 00:07:01.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.067 "is_configured": false, 00:07:01.067 "data_offset": 0, 00:07:01.067 "data_size": 0 00:07:01.067 }, 00:07:01.067 { 00:07:01.067 "name": "BaseBdev2", 
00:07:01.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.067 "is_configured": false, 00:07:01.067 "data_offset": 0, 00:07:01.067 "data_size": 0 00:07:01.067 } 00:07:01.067 ] 00:07:01.067 }' 00:07:01.067 21:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:01.067 21:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.325 21:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:01.584 [2024-07-15 21:43:16.768924] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:01.584 [2024-07-15 21:43:16.768962] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x670c0034500 name Existed_Raid, state configuring 00:07:01.843 21:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:02.102 [2024-07-15 21:43:17.056927] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.102 [2024-07-15 21:43:17.056993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.102 [2024-07-15 21:43:17.056998] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.102 [2024-07-15 21:43:17.057007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.102 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.361 [2024-07-15 21:43:17.358126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.361 BaseBdev1 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:02.361 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:02.653 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.912 [ 00:07:02.912 { 00:07:02.912 "name": "BaseBdev1", 00:07:02.912 "aliases": [ 00:07:02.912 "3f053fcc-42f3-11ef-9f7f-e9a656123a8b" 00:07:02.912 ], 00:07:02.912 "product_name": "Malloc disk", 00:07:02.912 "block_size": 512, 00:07:02.912 "num_blocks": 65536, 00:07:02.912 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:02.912 "assigned_rate_limits": { 00:07:02.912 "rw_ios_per_sec": 0, 00:07:02.912 "rw_mbytes_per_sec": 0, 00:07:02.912 "r_mbytes_per_sec": 0, 00:07:02.912 "w_mbytes_per_sec": 0 00:07:02.912 }, 
00:07:02.912 "claimed": true, 00:07:02.912 "claim_type": "exclusive_write", 00:07:02.912 "zoned": false, 00:07:02.912 "supported_io_types": { 00:07:02.912 "read": true, 00:07:02.912 "write": true, 00:07:02.912 "unmap": true, 00:07:02.912 "flush": true, 00:07:02.912 "reset": true, 00:07:02.912 "nvme_admin": false, 00:07:02.912 "nvme_io": false, 00:07:02.912 "nvme_io_md": false, 00:07:02.912 "write_zeroes": true, 00:07:02.912 "zcopy": true, 00:07:02.912 "get_zone_info": false, 00:07:02.912 "zone_management": false, 00:07:02.912 "zone_append": false, 00:07:02.912 "compare": false, 00:07:02.912 "compare_and_write": false, 00:07:02.912 "abort": true, 00:07:02.912 "seek_hole": false, 00:07:02.912 "seek_data": false, 00:07:02.912 "copy": true, 00:07:02.912 "nvme_iov_md": false 00:07:02.912 }, 00:07:02.912 "memory_domains": [ 00:07:02.912 { 00:07:02.912 "dma_device_id": "system", 00:07:02.912 "dma_device_type": 1 00:07:02.912 }, 00:07:02.912 { 00:07:02.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.912 "dma_device_type": 2 00:07:02.912 } 00:07:02.912 ], 00:07:02.912 "driver_specific": {} 00:07:02.912 } 00:07:02.912 ] 00:07:02.912 21:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:07:02.912 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.913 21:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:03.171 21:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:03.171 "name": "Existed_Raid", 00:07:03.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.171 "strip_size_kb": 64, 00:07:03.171 "state": "configuring", 00:07:03.171 "raid_level": "raid0", 00:07:03.171 "superblock": false, 00:07:03.171 "num_base_bdevs": 2, 00:07:03.171 "num_base_bdevs_discovered": 1, 00:07:03.171 "num_base_bdevs_operational": 2, 00:07:03.171 "base_bdevs_list": [ 00:07:03.171 { 00:07:03.171 "name": "BaseBdev1", 00:07:03.171 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:03.171 "is_configured": true, 00:07:03.171 "data_offset": 0, 00:07:03.171 "data_size": 65536 00:07:03.171 }, 00:07:03.171 { 00:07:03.171 "name": "BaseBdev2", 00:07:03.171 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:03.171 "is_configured": false, 00:07:03.171 "data_offset": 0, 00:07:03.171 "data_size": 0 00:07:03.171 } 00:07:03.171 ] 00:07:03.171 }' 00:07:03.171 21:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:03.171 21:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.429 21:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:03.994 [2024-07-15 21:43:18.884967] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.994 [2024-07-15 21:43:18.885009] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x670c0034500 name Existed_Raid, state configuring 00:07:03.994 21:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:04.253 [2024-07-15 21:43:19.196990] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.253 [2024-07-15 21:43:19.197968] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.253 [2024-07-15 21:43:19.198018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.253 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.511 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.512 "name": "Existed_Raid", 00:07:04.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.512 "strip_size_kb": 64, 00:07:04.512 "state": "configuring", 00:07:04.512 "raid_level": "raid0", 00:07:04.512 "superblock": false, 00:07:04.512 "num_base_bdevs": 2, 00:07:04.512 "num_base_bdevs_discovered": 1, 00:07:04.512 
"num_base_bdevs_operational": 2, 00:07:04.512 "base_bdevs_list": [ 00:07:04.512 { 00:07:04.512 "name": "BaseBdev1", 00:07:04.512 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:04.512 "is_configured": true, 00:07:04.512 "data_offset": 0, 00:07:04.512 "data_size": 65536 00:07:04.512 }, 00:07:04.512 { 00:07:04.512 "name": "BaseBdev2", 00:07:04.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.512 "is_configured": false, 00:07:04.512 "data_offset": 0, 00:07:04.512 "data_size": 0 00:07:04.512 } 00:07:04.512 ] 00:07:04.512 }' 00:07:04.512 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.512 21:43:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.770 21:43:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:05.028 [2024-07-15 21:43:20.113192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.028 [2024-07-15 21:43:20.113229] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x670c0034a00 00:07:05.028 [2024-07-15 21:43:20.113234] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:05.028 [2024-07-15 21:43:20.113287] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x670c0097e20 00:07:05.028 [2024-07-15 21:43:20.113393] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x670c0034a00 00:07:05.028 [2024-07-15 21:43:20.113398] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x670c0034a00 00:07:05.028 [2024-07-15 21:43:20.113437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.028 BaseBdev2 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:05.028 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:05.285 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.544 [ 00:07:05.544 { 00:07:05.544 "name": "BaseBdev2", 00:07:05.544 "aliases": [ 00:07:05.544 "40a9caa8-42f3-11ef-9f7f-e9a656123a8b" 00:07:05.544 ], 00:07:05.544 "product_name": "Malloc disk", 00:07:05.544 "block_size": 512, 00:07:05.544 "num_blocks": 65536, 00:07:05.544 "uuid": "40a9caa8-42f3-11ef-9f7f-e9a656123a8b", 00:07:05.544 "assigned_rate_limits": { 00:07:05.544 "rw_ios_per_sec": 0, 00:07:05.544 "rw_mbytes_per_sec": 0, 00:07:05.544 "r_mbytes_per_sec": 0, 00:07:05.544 "w_mbytes_per_sec": 0 00:07:05.544 }, 00:07:05.544 "claimed": true, 00:07:05.544 "claim_type": "exclusive_write", 00:07:05.544 "zoned": 
false, 00:07:05.544 "supported_io_types": { 00:07:05.544 "read": true, 00:07:05.544 "write": true, 00:07:05.544 "unmap": true, 00:07:05.544 "flush": true, 00:07:05.544 "reset": true, 00:07:05.544 "nvme_admin": false, 00:07:05.544 "nvme_io": false, 00:07:05.544 "nvme_io_md": false, 00:07:05.544 "write_zeroes": true, 00:07:05.544 "zcopy": true, 00:07:05.544 "get_zone_info": false, 00:07:05.544 "zone_management": false, 00:07:05.544 "zone_append": false, 00:07:05.544 "compare": false, 00:07:05.544 "compare_and_write": false, 00:07:05.544 "abort": true, 00:07:05.544 "seek_hole": false, 00:07:05.544 "seek_data": false, 00:07:05.544 "copy": true, 00:07:05.544 "nvme_iov_md": false 00:07:05.544 }, 00:07:05.544 "memory_domains": [ 00:07:05.544 { 00:07:05.544 "dma_device_id": "system", 00:07:05.544 "dma_device_type": 1 00:07:05.544 }, 00:07:05.544 { 00:07:05.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.544 "dma_device_type": 2 00:07:05.544 } 00:07:05.544 ], 00:07:05.544 "driver_specific": {} 00:07:05.544 } 00:07:05.544 ] 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:05.544 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.802 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:05.802 "name": "Existed_Raid", 00:07:05.802 "uuid": "40a9d405-42f3-11ef-9f7f-e9a656123a8b", 00:07:05.802 "strip_size_kb": 64, 00:07:05.803 "state": "online", 00:07:05.803 "raid_level": "raid0", 00:07:05.803 "superblock": false, 00:07:05.803 "num_base_bdevs": 2, 00:07:05.803 "num_base_bdevs_discovered": 2, 00:07:05.803 "num_base_bdevs_operational": 2, 00:07:05.803 "base_bdevs_list": [ 00:07:05.803 { 00:07:05.803 "name": "BaseBdev1", 00:07:05.803 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:05.803 "is_configured": true, 00:07:05.803 "data_offset": 0, 00:07:05.803 "data_size": 65536 00:07:05.803 }, 00:07:05.803 { 
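At this point the test has walked Existed_Raid through its configuring-to-online transition: the raid was registered before its members existed, and each bdev_malloc_create let the pending raid claim the new device. A condensed sketch of that flow, assuming the same bdev_svc socket (the actual run also deletes and re-creates Existed_Raid between checks, which is omitted here):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # Registering the raid before its members exist leaves it "configuring".
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # Each new malloc bdev is claimed immediately; after the second one the
  # array reports "online" with num_base_bdevs_discovered == 2.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # raid0 has no redundancy, so deleting either member (the
  # bdev_malloc_delete BaseBdev1 further down) drops the array to "offline".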
00:07:05.803 "name": "BaseBdev2", 00:07:05.803 "uuid": "40a9caa8-42f3-11ef-9f7f-e9a656123a8b", 00:07:05.803 "is_configured": true, 00:07:05.803 "data_offset": 0, 00:07:05.803 "data_size": 65536 00:07:05.803 } 00:07:05.803 ] 00:07:05.803 }' 00:07:05.803 21:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:05.803 21:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:06.369 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:06.627 [2024-07-15 21:43:21.577109] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:06.627 "name": "Existed_Raid", 00:07:06.627 "aliases": [ 00:07:06.627 "40a9d405-42f3-11ef-9f7f-e9a656123a8b" 00:07:06.627 ], 00:07:06.627 "product_name": "Raid Volume", 00:07:06.627 "block_size": 512, 00:07:06.627 "num_blocks": 131072, 00:07:06.627 "uuid": "40a9d405-42f3-11ef-9f7f-e9a656123a8b", 00:07:06.627 "assigned_rate_limits": { 00:07:06.627 "rw_ios_per_sec": 0, 00:07:06.627 "rw_mbytes_per_sec": 0, 00:07:06.627 "r_mbytes_per_sec": 0, 00:07:06.627 "w_mbytes_per_sec": 0 00:07:06.627 }, 00:07:06.627 "claimed": false, 00:07:06.627 "zoned": false, 00:07:06.627 "supported_io_types": { 00:07:06.627 "read": true, 00:07:06.627 "write": true, 00:07:06.627 "unmap": true, 00:07:06.627 "flush": true, 00:07:06.627 "reset": true, 00:07:06.627 "nvme_admin": false, 00:07:06.627 "nvme_io": false, 00:07:06.627 "nvme_io_md": false, 00:07:06.627 "write_zeroes": true, 00:07:06.627 "zcopy": false, 00:07:06.627 "get_zone_info": false, 00:07:06.627 "zone_management": false, 00:07:06.627 "zone_append": false, 00:07:06.627 "compare": false, 00:07:06.627 "compare_and_write": false, 00:07:06.627 "abort": false, 00:07:06.627 "seek_hole": false, 00:07:06.627 "seek_data": false, 00:07:06.627 "copy": false, 00:07:06.627 "nvme_iov_md": false 00:07:06.627 }, 00:07:06.627 "memory_domains": [ 00:07:06.627 { 00:07:06.627 "dma_device_id": "system", 00:07:06.627 "dma_device_type": 1 00:07:06.627 }, 00:07:06.627 { 00:07:06.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.627 "dma_device_type": 2 00:07:06.627 }, 00:07:06.627 { 00:07:06.627 "dma_device_id": "system", 00:07:06.627 "dma_device_type": 1 00:07:06.627 }, 00:07:06.627 { 00:07:06.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.627 "dma_device_type": 2 00:07:06.627 } 00:07:06.627 ], 00:07:06.627 "driver_specific": { 00:07:06.627 "raid": { 00:07:06.627 "uuid": "40a9d405-42f3-11ef-9f7f-e9a656123a8b", 00:07:06.627 "strip_size_kb": 64, 00:07:06.627 "state": 
"online", 00:07:06.627 "raid_level": "raid0", 00:07:06.627 "superblock": false, 00:07:06.627 "num_base_bdevs": 2, 00:07:06.627 "num_base_bdevs_discovered": 2, 00:07:06.627 "num_base_bdevs_operational": 2, 00:07:06.627 "base_bdevs_list": [ 00:07:06.627 { 00:07:06.627 "name": "BaseBdev1", 00:07:06.627 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:06.627 "is_configured": true, 00:07:06.627 "data_offset": 0, 00:07:06.627 "data_size": 65536 00:07:06.627 }, 00:07:06.627 { 00:07:06.627 "name": "BaseBdev2", 00:07:06.627 "uuid": "40a9caa8-42f3-11ef-9f7f-e9a656123a8b", 00:07:06.627 "is_configured": true, 00:07:06.627 "data_offset": 0, 00:07:06.627 "data_size": 65536 00:07:06.627 } 00:07:06.627 ] 00:07:06.627 } 00:07:06.627 } 00:07:06.627 }' 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:06.627 BaseBdev2' 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:06.627 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:06.886 "name": "BaseBdev1", 00:07:06.886 "aliases": [ 00:07:06.886 "3f053fcc-42f3-11ef-9f7f-e9a656123a8b" 00:07:06.886 ], 00:07:06.886 "product_name": "Malloc disk", 00:07:06.886 "block_size": 512, 00:07:06.886 "num_blocks": 65536, 00:07:06.886 "uuid": "3f053fcc-42f3-11ef-9f7f-e9a656123a8b", 00:07:06.886 "assigned_rate_limits": { 00:07:06.886 "rw_ios_per_sec": 0, 00:07:06.886 "rw_mbytes_per_sec": 0, 00:07:06.886 "r_mbytes_per_sec": 0, 00:07:06.886 "w_mbytes_per_sec": 0 00:07:06.886 }, 00:07:06.886 "claimed": true, 00:07:06.886 "claim_type": "exclusive_write", 00:07:06.886 "zoned": false, 00:07:06.886 "supported_io_types": { 00:07:06.886 "read": true, 00:07:06.886 "write": true, 00:07:06.886 "unmap": true, 00:07:06.886 "flush": true, 00:07:06.886 "reset": true, 00:07:06.886 "nvme_admin": false, 00:07:06.886 "nvme_io": false, 00:07:06.886 "nvme_io_md": false, 00:07:06.886 "write_zeroes": true, 00:07:06.886 "zcopy": true, 00:07:06.886 "get_zone_info": false, 00:07:06.886 "zone_management": false, 00:07:06.886 "zone_append": false, 00:07:06.886 "compare": false, 00:07:06.886 "compare_and_write": false, 00:07:06.886 "abort": true, 00:07:06.886 "seek_hole": false, 00:07:06.886 "seek_data": false, 00:07:06.886 "copy": true, 00:07:06.886 "nvme_iov_md": false 00:07:06.886 }, 00:07:06.886 "memory_domains": [ 00:07:06.886 { 00:07:06.886 "dma_device_id": "system", 00:07:06.886 "dma_device_type": 1 00:07:06.886 }, 00:07:06.886 { 00:07:06.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.886 "dma_device_type": 2 00:07:06.886 } 00:07:06.886 ], 00:07:06.886 "driver_specific": {} 00:07:06.886 }' 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:06.886 21:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:07.144 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:07.144 "name": "BaseBdev2", 00:07:07.144 "aliases": [ 00:07:07.144 "40a9caa8-42f3-11ef-9f7f-e9a656123a8b" 00:07:07.144 ], 00:07:07.144 "product_name": "Malloc disk", 00:07:07.144 "block_size": 512, 00:07:07.144 "num_blocks": 65536, 00:07:07.144 "uuid": "40a9caa8-42f3-11ef-9f7f-e9a656123a8b", 00:07:07.144 "assigned_rate_limits": { 00:07:07.144 "rw_ios_per_sec": 0, 00:07:07.144 "rw_mbytes_per_sec": 0, 00:07:07.144 "r_mbytes_per_sec": 0, 00:07:07.144 "w_mbytes_per_sec": 0 00:07:07.144 }, 00:07:07.144 "claimed": true, 00:07:07.144 "claim_type": "exclusive_write", 00:07:07.144 "zoned": false, 00:07:07.144 "supported_io_types": { 00:07:07.144 "read": true, 00:07:07.144 "write": true, 00:07:07.144 "unmap": true, 00:07:07.144 "flush": true, 00:07:07.144 "reset": true, 00:07:07.144 "nvme_admin": false, 00:07:07.144 "nvme_io": false, 00:07:07.144 "nvme_io_md": false, 00:07:07.144 "write_zeroes": true, 00:07:07.144 "zcopy": true, 00:07:07.144 "get_zone_info": false, 00:07:07.144 "zone_management": false, 00:07:07.144 "zone_append": false, 00:07:07.144 "compare": false, 00:07:07.144 "compare_and_write": false, 00:07:07.144 "abort": true, 00:07:07.144 "seek_hole": false, 00:07:07.144 "seek_data": false, 00:07:07.144 "copy": true, 00:07:07.144 "nvme_iov_md": false 00:07:07.144 }, 00:07:07.144 "memory_domains": [ 00:07:07.144 { 00:07:07.144 "dma_device_id": "system", 00:07:07.144 "dma_device_type": 1 00:07:07.144 }, 00:07:07.144 { 00:07:07.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.144 "dma_device_type": 2 00:07:07.145 } 00:07:07.145 ], 00:07:07.145 "driver_specific": {} 00:07:07.145 }' 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:07.145 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:07.403 [2024-07-15 21:43:22.541096] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:07.403 [2024-07-15 21:43:22.541130] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.403 [2024-07-15 21:43:22.541147] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.403 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.662 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:07.662 "name": "Existed_Raid", 00:07:07.662 "uuid": "40a9d405-42f3-11ef-9f7f-e9a656123a8b", 00:07:07.662 "strip_size_kb": 64, 00:07:07.662 "state": "offline", 00:07:07.662 "raid_level": "raid0", 00:07:07.662 "superblock": false, 00:07:07.662 
"num_base_bdevs": 2, 00:07:07.662 "num_base_bdevs_discovered": 1, 00:07:07.662 "num_base_bdevs_operational": 1, 00:07:07.662 "base_bdevs_list": [ 00:07:07.662 { 00:07:07.662 "name": null, 00:07:07.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.662 "is_configured": false, 00:07:07.662 "data_offset": 0, 00:07:07.662 "data_size": 65536 00:07:07.662 }, 00:07:07.662 { 00:07:07.662 "name": "BaseBdev2", 00:07:07.662 "uuid": "40a9caa8-42f3-11ef-9f7f-e9a656123a8b", 00:07:07.662 "is_configured": true, 00:07:07.662 "data_offset": 0, 00:07:07.662 "data_size": 65536 00:07:07.662 } 00:07:07.662 ] 00:07:07.662 }' 00:07:07.662 21:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:07.662 21:43:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.227 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:08.485 [2024-07-15 21:43:23.609379] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.485 [2024-07-15 21:43:23.609424] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x670c0034a00 name Existed_Raid, state offline 00:07:08.485 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:08.485 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:08.485 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.485 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48656 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 48656 ']' 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 48656 00:07:08.742 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 48656 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # tail -1 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:07:09.001 killing process with pid 48656 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48656' 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 48656 00:07:09.001 [2024-07-15 21:43:23.938530] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.001 [2024-07-15 21:43:23.938575] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.001 21:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 48656 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:09.001 00:07:09.001 real 0m9.525s 00:07:09.001 user 0m16.760s 00:07:09.001 sys 0m1.495s 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.001 ************************************ 00:07:09.001 END TEST raid_state_function_test 00:07:09.001 ************************************ 00:07:09.001 21:43:24 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:09.001 21:43:24 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:09.001 21:43:24 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:07:09.001 21:43:24 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:09.001 21:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.001 ************************************ 00:07:09.001 START TEST raid_state_function_test_sb 00:07:09.001 ************************************ 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test raid0 2 true 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:09.001 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48931 00:07:09.259 Process raid pid: 48931 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48931' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48931 /var/tmp/spdk-raid.sock 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 48931 ']' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:09.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:09.259 21:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.259 [2024-07-15 21:43:24.201911] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:09.259 [2024-07-15 21:43:24.202178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:09.828 EAL: TSC is not safe to use in SMP mode 00:07:09.828 EAL: TSC is not invariant 00:07:09.828 [2024-07-15 21:43:24.763908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.828 [2024-07-15 21:43:24.888163] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
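The _sb variant now starting repeats the same state walk with superblock=true; the only difference visible in the log is the extra -s on bdev_raid_create and "superblock": true in the dumped raid info. A one-line sketch of the changed call, again assuming the bdev_svc socket from the log:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # -s requests an on-disk superblock; the info dump then shows "superblock": true.
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid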
00:07:09.828 [2024-07-15 21:43:24.891462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.828 [2024-07-15 21:43:24.892913] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.828 [2024-07-15 21:43:24.892939] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.394 21:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:10.394 21:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:07:10.394 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:10.653 [2024-07-15 21:43:25.584081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.653 [2024-07-15 21:43:25.584157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.653 [2024-07-15 21:43:25.584163] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.653 [2024-07-15 21:43:25.584172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.653 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.912 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:10.912 "name": "Existed_Raid", 00:07:10.912 "uuid": "43ec9be4-42f3-11ef-9f7f-e9a656123a8b", 00:07:10.912 "strip_size_kb": 64, 00:07:10.912 "state": "configuring", 00:07:10.912 "raid_level": "raid0", 00:07:10.912 "superblock": true, 00:07:10.912 "num_base_bdevs": 2, 00:07:10.912 "num_base_bdevs_discovered": 0, 00:07:10.912 "num_base_bdevs_operational": 2, 00:07:10.912 "base_bdevs_list": [ 00:07:10.912 { 00:07:10.912 "name": "BaseBdev1", 00:07:10.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.912 "is_configured": false, 00:07:10.912 "data_offset": 0, 00:07:10.912 "data_size": 0 00:07:10.912 }, 
00:07:10.912 { 00:07:10.912 "name": "BaseBdev2", 00:07:10.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.912 "is_configured": false, 00:07:10.912 "data_offset": 0, 00:07:10.912 "data_size": 0 00:07:10.912 } 00:07:10.912 ] 00:07:10.912 }' 00:07:10.912 21:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:10.912 21:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.170 21:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:11.428 [2024-07-15 21:43:26.496089] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.428 [2024-07-15 21:43:26.496116] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x780a9a34500 name Existed_Raid, state configuring 00:07:11.428 21:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:11.687 [2024-07-15 21:43:26.760166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.687 [2024-07-15 21:43:26.760240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.687 [2024-07-15 21:43:26.760247] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.687 [2024-07-15 21:43:26.760258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.687 21:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:11.944 [2024-07-15 21:43:27.113305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.944 BaseBdev1 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:12.202 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.460 [ 00:07:12.460 { 00:07:12.460 "name": "BaseBdev1", 00:07:12.460 "aliases": [ 00:07:12.460 "44d5c63b-42f3-11ef-9f7f-e9a656123a8b" 00:07:12.460 ], 00:07:12.460 "product_name": "Malloc disk", 00:07:12.460 "block_size": 512, 00:07:12.460 "num_blocks": 65536, 00:07:12.460 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:12.460 "assigned_rate_limits": { 00:07:12.460 "rw_ios_per_sec": 0, 00:07:12.460 "rw_mbytes_per_sec": 
0, 00:07:12.460 "r_mbytes_per_sec": 0, 00:07:12.460 "w_mbytes_per_sec": 0 00:07:12.460 }, 00:07:12.460 "claimed": true, 00:07:12.460 "claim_type": "exclusive_write", 00:07:12.460 "zoned": false, 00:07:12.460 "supported_io_types": { 00:07:12.460 "read": true, 00:07:12.460 "write": true, 00:07:12.460 "unmap": true, 00:07:12.460 "flush": true, 00:07:12.460 "reset": true, 00:07:12.460 "nvme_admin": false, 00:07:12.460 "nvme_io": false, 00:07:12.460 "nvme_io_md": false, 00:07:12.460 "write_zeroes": true, 00:07:12.460 "zcopy": true, 00:07:12.460 "get_zone_info": false, 00:07:12.460 "zone_management": false, 00:07:12.460 "zone_append": false, 00:07:12.460 "compare": false, 00:07:12.460 "compare_and_write": false, 00:07:12.460 "abort": true, 00:07:12.460 "seek_hole": false, 00:07:12.460 "seek_data": false, 00:07:12.460 "copy": true, 00:07:12.460 "nvme_iov_md": false 00:07:12.460 }, 00:07:12.460 "memory_domains": [ 00:07:12.460 { 00:07:12.460 "dma_device_id": "system", 00:07:12.460 "dma_device_type": 1 00:07:12.460 }, 00:07:12.460 { 00:07:12.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.460 "dma_device_type": 2 00:07:12.460 } 00:07:12.460 ], 00:07:12.460 "driver_specific": {} 00:07:12.460 } 00:07:12.460 ] 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.460 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.718 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:12.718 "name": "Existed_Raid", 00:07:12.718 "uuid": "44a01093-42f3-11ef-9f7f-e9a656123a8b", 00:07:12.718 "strip_size_kb": 64, 00:07:12.718 "state": "configuring", 00:07:12.718 "raid_level": "raid0", 00:07:12.718 "superblock": true, 00:07:12.718 "num_base_bdevs": 2, 00:07:12.718 "num_base_bdevs_discovered": 1, 00:07:12.718 "num_base_bdevs_operational": 2, 00:07:12.718 "base_bdevs_list": [ 00:07:12.718 { 00:07:12.718 "name": "BaseBdev1", 00:07:12.718 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:12.718 "is_configured": true, 00:07:12.718 "data_offset": 2048, 00:07:12.718 "data_size": 
63488 00:07:12.718 }, 00:07:12.718 { 00:07:12.718 "name": "BaseBdev2", 00:07:12.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.718 "is_configured": false, 00:07:12.718 "data_offset": 0, 00:07:12.718 "data_size": 0 00:07:12.718 } 00:07:12.718 ] 00:07:12.718 }' 00:07:12.718 21:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:12.718 21:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.284 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:13.284 [2024-07-15 21:43:28.396156] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.284 [2024-07-15 21:43:28.396200] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x780a9a34500 name Existed_Raid, state configuring 00:07:13.284 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:13.543 [2024-07-15 21:43:28.676217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.543 [2024-07-15 21:43:28.677179] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.543 [2024-07-15 21:43:28.677224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:13.543 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.111 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:14.111 "name": "Existed_Raid", 00:07:14.111 "uuid": "45c46e2d-42f3-11ef-9f7f-e9a656123a8b", 00:07:14.111 "strip_size_kb": 64, 00:07:14.111 
"state": "configuring", 00:07:14.111 "raid_level": "raid0", 00:07:14.111 "superblock": true, 00:07:14.111 "num_base_bdevs": 2, 00:07:14.111 "num_base_bdevs_discovered": 1, 00:07:14.111 "num_base_bdevs_operational": 2, 00:07:14.111 "base_bdevs_list": [ 00:07:14.111 { 00:07:14.111 "name": "BaseBdev1", 00:07:14.111 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:14.111 "is_configured": true, 00:07:14.111 "data_offset": 2048, 00:07:14.111 "data_size": 63488 00:07:14.111 }, 00:07:14.111 { 00:07:14.111 "name": "BaseBdev2", 00:07:14.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.111 "is_configured": false, 00:07:14.111 "data_offset": 0, 00:07:14.111 "data_size": 0 00:07:14.111 } 00:07:14.111 ] 00:07:14.111 }' 00:07:14.111 21:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:14.111 21:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.370 21:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.629 [2024-07-15 21:43:29.560357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.629 [2024-07-15 21:43:29.560439] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x780a9a34a00 00:07:14.629 [2024-07-15 21:43:29.560447] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.629 [2024-07-15 21:43:29.560469] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x780a9a97e20 00:07:14.629 [2024-07-15 21:43:29.560515] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x780a9a34a00 00:07:14.629 [2024-07-15 21:43:29.560536] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x780a9a34a00 00:07:14.629 [2024-07-15 21:43:29.560558] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.629 BaseBdev2 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:14.629 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:14.887 21:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.145 [ 00:07:15.145 { 00:07:15.145 "name": "BaseBdev2", 00:07:15.145 "aliases": [ 00:07:15.145 "464b51bc-42f3-11ef-9f7f-e9a656123a8b" 00:07:15.145 ], 00:07:15.145 "product_name": "Malloc disk", 00:07:15.145 "block_size": 512, 00:07:15.145 "num_blocks": 65536, 00:07:15.145 "uuid": "464b51bc-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.145 "assigned_rate_limits": { 00:07:15.145 "rw_ios_per_sec": 0, 
00:07:15.145 "rw_mbytes_per_sec": 0, 00:07:15.145 "r_mbytes_per_sec": 0, 00:07:15.145 "w_mbytes_per_sec": 0 00:07:15.145 }, 00:07:15.145 "claimed": true, 00:07:15.145 "claim_type": "exclusive_write", 00:07:15.145 "zoned": false, 00:07:15.145 "supported_io_types": { 00:07:15.145 "read": true, 00:07:15.145 "write": true, 00:07:15.145 "unmap": true, 00:07:15.145 "flush": true, 00:07:15.145 "reset": true, 00:07:15.145 "nvme_admin": false, 00:07:15.145 "nvme_io": false, 00:07:15.145 "nvme_io_md": false, 00:07:15.145 "write_zeroes": true, 00:07:15.145 "zcopy": true, 00:07:15.145 "get_zone_info": false, 00:07:15.145 "zone_management": false, 00:07:15.145 "zone_append": false, 00:07:15.145 "compare": false, 00:07:15.145 "compare_and_write": false, 00:07:15.145 "abort": true, 00:07:15.145 "seek_hole": false, 00:07:15.145 "seek_data": false, 00:07:15.145 "copy": true, 00:07:15.145 "nvme_iov_md": false 00:07:15.145 }, 00:07:15.145 "memory_domains": [ 00:07:15.145 { 00:07:15.145 "dma_device_id": "system", 00:07:15.145 "dma_device_type": 1 00:07:15.145 }, 00:07:15.145 { 00:07:15.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.145 "dma_device_type": 2 00:07:15.145 } 00:07:15.145 ], 00:07:15.145 "driver_specific": {} 00:07:15.145 } 00:07:15.145 ] 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.145 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.403 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:15.404 "name": "Existed_Raid", 00:07:15.404 "uuid": "45c46e2d-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.404 "strip_size_kb": 64, 00:07:15.404 "state": "online", 00:07:15.404 "raid_level": "raid0", 00:07:15.404 "superblock": true, 00:07:15.404 "num_base_bdevs": 2, 00:07:15.404 "num_base_bdevs_discovered": 2, 00:07:15.404 "num_base_bdevs_operational": 2, 
00:07:15.404 "base_bdevs_list": [ 00:07:15.404 { 00:07:15.404 "name": "BaseBdev1", 00:07:15.404 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.404 "is_configured": true, 00:07:15.404 "data_offset": 2048, 00:07:15.404 "data_size": 63488 00:07:15.404 }, 00:07:15.404 { 00:07:15.404 "name": "BaseBdev2", 00:07:15.404 "uuid": "464b51bc-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.404 "is_configured": true, 00:07:15.404 "data_offset": 2048, 00:07:15.404 "data_size": 63488 00:07:15.404 } 00:07:15.404 ] 00:07:15.404 }' 00:07:15.404 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:15.404 21:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:15.661 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:15.662 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:15.662 21:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:15.920 [2024-07-15 21:43:30.996266] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:15.920 "name": "Existed_Raid", 00:07:15.920 "aliases": [ 00:07:15.920 "45c46e2d-42f3-11ef-9f7f-e9a656123a8b" 00:07:15.920 ], 00:07:15.920 "product_name": "Raid Volume", 00:07:15.920 "block_size": 512, 00:07:15.920 "num_blocks": 126976, 00:07:15.920 "uuid": "45c46e2d-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.920 "assigned_rate_limits": { 00:07:15.920 "rw_ios_per_sec": 0, 00:07:15.920 "rw_mbytes_per_sec": 0, 00:07:15.920 "r_mbytes_per_sec": 0, 00:07:15.920 "w_mbytes_per_sec": 0 00:07:15.920 }, 00:07:15.920 "claimed": false, 00:07:15.920 "zoned": false, 00:07:15.920 "supported_io_types": { 00:07:15.920 "read": true, 00:07:15.920 "write": true, 00:07:15.920 "unmap": true, 00:07:15.920 "flush": true, 00:07:15.920 "reset": true, 00:07:15.920 "nvme_admin": false, 00:07:15.920 "nvme_io": false, 00:07:15.920 "nvme_io_md": false, 00:07:15.920 "write_zeroes": true, 00:07:15.920 "zcopy": false, 00:07:15.920 "get_zone_info": false, 00:07:15.920 "zone_management": false, 00:07:15.920 "zone_append": false, 00:07:15.920 "compare": false, 00:07:15.920 "compare_and_write": false, 00:07:15.920 "abort": false, 00:07:15.920 "seek_hole": false, 00:07:15.920 "seek_data": false, 00:07:15.920 "copy": false, 00:07:15.920 "nvme_iov_md": false 00:07:15.920 }, 00:07:15.920 "memory_domains": [ 00:07:15.920 { 00:07:15.920 "dma_device_id": "system", 00:07:15.920 "dma_device_type": 1 00:07:15.920 }, 00:07:15.920 { 00:07:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.920 "dma_device_type": 2 00:07:15.920 }, 00:07:15.920 { 00:07:15.920 "dma_device_id": "system", 00:07:15.920 "dma_device_type": 1 00:07:15.920 
}, 00:07:15.920 { 00:07:15.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.920 "dma_device_type": 2 00:07:15.920 } 00:07:15.920 ], 00:07:15.920 "driver_specific": { 00:07:15.920 "raid": { 00:07:15.920 "uuid": "45c46e2d-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.920 "strip_size_kb": 64, 00:07:15.920 "state": "online", 00:07:15.920 "raid_level": "raid0", 00:07:15.920 "superblock": true, 00:07:15.920 "num_base_bdevs": 2, 00:07:15.920 "num_base_bdevs_discovered": 2, 00:07:15.920 "num_base_bdevs_operational": 2, 00:07:15.920 "base_bdevs_list": [ 00:07:15.920 { 00:07:15.920 "name": "BaseBdev1", 00:07:15.920 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.920 "is_configured": true, 00:07:15.920 "data_offset": 2048, 00:07:15.920 "data_size": 63488 00:07:15.920 }, 00:07:15.920 { 00:07:15.920 "name": "BaseBdev2", 00:07:15.920 "uuid": "464b51bc-42f3-11ef-9f7f-e9a656123a8b", 00:07:15.920 "is_configured": true, 00:07:15.920 "data_offset": 2048, 00:07:15.920 "data_size": 63488 00:07:15.920 } 00:07:15.920 ] 00:07:15.920 } 00:07:15.920 } 00:07:15.920 }' 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:15.920 BaseBdev2' 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:15.920 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:16.179 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:16.179 "name": "BaseBdev1", 00:07:16.179 "aliases": [ 00:07:16.179 "44d5c63b-42f3-11ef-9f7f-e9a656123a8b" 00:07:16.179 ], 00:07:16.179 "product_name": "Malloc disk", 00:07:16.179 "block_size": 512, 00:07:16.179 "num_blocks": 65536, 00:07:16.179 "uuid": "44d5c63b-42f3-11ef-9f7f-e9a656123a8b", 00:07:16.179 "assigned_rate_limits": { 00:07:16.179 "rw_ios_per_sec": 0, 00:07:16.179 "rw_mbytes_per_sec": 0, 00:07:16.179 "r_mbytes_per_sec": 0, 00:07:16.179 "w_mbytes_per_sec": 0 00:07:16.179 }, 00:07:16.179 "claimed": true, 00:07:16.179 "claim_type": "exclusive_write", 00:07:16.179 "zoned": false, 00:07:16.179 "supported_io_types": { 00:07:16.179 "read": true, 00:07:16.179 "write": true, 00:07:16.179 "unmap": true, 00:07:16.179 "flush": true, 00:07:16.179 "reset": true, 00:07:16.179 "nvme_admin": false, 00:07:16.179 "nvme_io": false, 00:07:16.179 "nvme_io_md": false, 00:07:16.179 "write_zeroes": true, 00:07:16.179 "zcopy": true, 00:07:16.179 "get_zone_info": false, 00:07:16.179 "zone_management": false, 00:07:16.179 "zone_append": false, 00:07:16.179 "compare": false, 00:07:16.179 "compare_and_write": false, 00:07:16.179 "abort": true, 00:07:16.179 "seek_hole": false, 00:07:16.179 "seek_data": false, 00:07:16.179 "copy": true, 00:07:16.179 "nvme_iov_md": false 00:07:16.179 }, 00:07:16.179 "memory_domains": [ 00:07:16.179 { 00:07:16.179 "dma_device_id": "system", 00:07:16.179 "dma_device_type": 1 00:07:16.179 }, 00:07:16.179 { 00:07:16.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.179 "dma_device_type": 2 00:07:16.179 } 00:07:16.179 ], 00:07:16.180 "driver_specific": {} 00:07:16.180 }' 00:07:16.180 21:43:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:16.180 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:16.438 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:16.438 "name": "BaseBdev2", 00:07:16.438 "aliases": [ 00:07:16.438 "464b51bc-42f3-11ef-9f7f-e9a656123a8b" 00:07:16.438 ], 00:07:16.438 "product_name": "Malloc disk", 00:07:16.438 "block_size": 512, 00:07:16.438 "num_blocks": 65536, 00:07:16.438 "uuid": "464b51bc-42f3-11ef-9f7f-e9a656123a8b", 00:07:16.438 "assigned_rate_limits": { 00:07:16.438 "rw_ios_per_sec": 0, 00:07:16.438 "rw_mbytes_per_sec": 0, 00:07:16.438 "r_mbytes_per_sec": 0, 00:07:16.438 "w_mbytes_per_sec": 0 00:07:16.438 }, 00:07:16.438 "claimed": true, 00:07:16.438 "claim_type": "exclusive_write", 00:07:16.438 "zoned": false, 00:07:16.438 "supported_io_types": { 00:07:16.438 "read": true, 00:07:16.438 "write": true, 00:07:16.438 "unmap": true, 00:07:16.438 "flush": true, 00:07:16.438 "reset": true, 00:07:16.438 "nvme_admin": false, 00:07:16.438 "nvme_io": false, 00:07:16.438 "nvme_io_md": false, 00:07:16.438 "write_zeroes": true, 00:07:16.438 "zcopy": true, 00:07:16.438 "get_zone_info": false, 00:07:16.438 "zone_management": false, 00:07:16.438 "zone_append": false, 00:07:16.438 "compare": false, 00:07:16.438 "compare_and_write": false, 00:07:16.438 "abort": true, 00:07:16.438 "seek_hole": false, 00:07:16.438 "seek_data": false, 00:07:16.438 "copy": true, 00:07:16.438 "nvme_iov_md": false 00:07:16.438 }, 00:07:16.438 "memory_domains": [ 00:07:16.438 { 00:07:16.438 "dma_device_id": "system", 00:07:16.438 "dma_device_type": 1 00:07:16.438 }, 00:07:16.438 { 00:07:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.438 "dma_device_type": 2 00:07:16.438 } 00:07:16.438 ], 00:07:16.438 "driver_specific": {} 00:07:16.439 }' 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.439 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:16.697 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.697 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.697 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:16.697 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:16.955 [2024-07-15 21:43:31.924255] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:16.955 [2024-07-15 21:43:31.924280] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.955 [2024-07-15 21:43:31.924294] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
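raid0 provides no redundancy, so deleting a single base bdev must take the whole array offline; the deconfigure messages above show exactly that transition after BaseBdev1 is removed. A condensed sketch of the check performed next, assuming the same socket (the trailing .state projection is shorthand, not the script's exact jq filter):

    # Remove one leg of the online raid0 array
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_delete BaseBdev1

    # The array must now report "offline", with one discovered and one
    # operational base bdev left in base_bdevs_list
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: offline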
00:07:16.955 21:43:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.213 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:17.214 "name": "Existed_Raid", 00:07:17.214 "uuid": "45c46e2d-42f3-11ef-9f7f-e9a656123a8b", 00:07:17.214 "strip_size_kb": 64, 00:07:17.214 "state": "offline", 00:07:17.214 "raid_level": "raid0", 00:07:17.214 "superblock": true, 00:07:17.214 "num_base_bdevs": 2, 00:07:17.214 "num_base_bdevs_discovered": 1, 00:07:17.214 "num_base_bdevs_operational": 1, 00:07:17.214 "base_bdevs_list": [ 00:07:17.214 { 00:07:17.214 "name": null, 00:07:17.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.214 "is_configured": false, 00:07:17.214 "data_offset": 2048, 00:07:17.214 "data_size": 63488 00:07:17.214 }, 00:07:17.214 { 00:07:17.214 "name": "BaseBdev2", 00:07:17.214 "uuid": "464b51bc-42f3-11ef-9f7f-e9a656123a8b", 00:07:17.214 "is_configured": true, 00:07:17.214 "data_offset": 2048, 00:07:17.214 "data_size": 63488 00:07:17.214 } 00:07:17.214 ] 00:07:17.214 }' 00:07:17.214 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:17.214 21:43:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.472 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:17.472 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:17.472 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:17.472 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.729 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:17.729 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.730 21:43:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:17.988 [2024-07-15 21:43:33.053932] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.988 [2024-07-15 21:43:33.053967] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x780a9a34a00 name Existed_Raid, state offline 00:07:17.988 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:17.988 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:17.988 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.988 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48931 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@942 -- # '[' -z 48931 ']' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 48931 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 48931 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:07:18.255 killing process with pid 48931 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 48931' 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 48931 00:07:18.255 [2024-07-15 21:43:33.323853] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.255 [2024-07-15 21:43:33.323895] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.255 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 48931 00:07:18.511 21:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:18.511 00:07:18.511 real 0m9.311s 00:07:18.511 user 0m16.230s 00:07:18.511 sys 0m1.633s 00:07:18.511 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:18.511 21:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.511 ************************************ 00:07:18.511 END TEST raid_state_function_test_sb 00:07:18.511 ************************************ 00:07:18.511 21:43:33 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:18.511 21:43:33 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:18.511 21:43:33 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:18.511 21:43:33 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:18.511 21:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.511 ************************************ 00:07:18.511 START TEST raid_superblock_test 00:07:18.511 ************************************ 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid0 2 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49205 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49205 /var/tmp/spdk-raid.sock 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 49205 ']' 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:18.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:18.511 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:18.512 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:18.512 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:18.512 21:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.512 [2024-07-15 21:43:33.557947] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:18.512 [2024-07-15 21:43:33.558114] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:19.078 EAL: TSC is not safe to use in SMP mode 00:07:19.078 EAL: TSC is not invariant 00:07:19.078 [2024-07-15 21:43:34.068943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.078 [2024-07-15 21:43:34.157002] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
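Unlike the state-function test, raid_superblock_test builds each leg as a passthru bdev (pt1, pt2) layered on a malloc disk, which pins a fixed, predictable UUID on each leg. A minimal sketch of that construction for the first leg plus the final assembly, using the arguments that appear later in this trace:

    # One leg: a 32 MB, 512-byte-block malloc disk wrapped by a passthru bdev
    # carrying a fixed UUID
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

    # Assemble the striped volume over both passthru legs with a superblock (-s)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

Those fixed passthru UUIDs are exactly what appears in the base_bdevs_list dumps that follow, whereas an all-zero UUID otherwise marks a missing base bdev.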
00:07:19.078 [2024-07-15 21:43:34.159103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.078 [2024-07-15 21:43:34.159851] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.078 [2024-07-15 21:43:34.159864] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.642 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:19.899 malloc1 00:07:19.899 21:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.156 [2024-07-15 21:43:35.208094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.156 [2024-07-15 21:43:35.208175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.157 [2024-07-15 21:43:35.208187] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b6f51c34780 00:07:20.157 [2024-07-15 21:43:35.208196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.157 [2024-07-15 21:43:35.209070] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.157 [2024-07-15 21:43:35.209095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.157 pt1 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.157 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.157 21:43:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:20.414 malloc2 00:07:20.414 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.671 [2024-07-15 21:43:35.784073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.671 [2024-07-15 21:43:35.784139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.671 [2024-07-15 21:43:35.784151] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b6f51c34c80 00:07:20.671 [2024-07-15 21:43:35.784159] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.671 [2024-07-15 21:43:35.784804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.671 [2024-07-15 21:43:35.784828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.671 pt2 00:07:20.671 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:20.671 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:20.671 21:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:20.927 [2024-07-15 21:43:36.060071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.927 [2024-07-15 21:43:36.060624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.928 [2024-07-15 21:43:36.060689] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b6f51c34f00 00:07:20.928 [2024-07-15 21:43:36.060697] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.928 [2024-07-15 21:43:36.060728] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b6f51c97e20 00:07:20.928 [2024-07-15 21:43:36.060804] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b6f51c34f00 00:07:20.928 [2024-07-15 21:43:36.060809] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b6f51c34f00 00:07:20.928 [2024-07-15 21:43:36.060841] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.928 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.185 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:21.185 "name": "raid_bdev1", 00:07:21.185 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:21.185 "strip_size_kb": 64, 00:07:21.185 "state": "online", 00:07:21.185 "raid_level": "raid0", 00:07:21.185 "superblock": true, 00:07:21.185 "num_base_bdevs": 2, 00:07:21.185 "num_base_bdevs_discovered": 2, 00:07:21.185 "num_base_bdevs_operational": 2, 00:07:21.185 "base_bdevs_list": [ 00:07:21.185 { 00:07:21.185 "name": "pt1", 00:07:21.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.185 "is_configured": true, 00:07:21.185 "data_offset": 2048, 00:07:21.185 "data_size": 63488 00:07:21.185 }, 00:07:21.185 { 00:07:21.185 "name": "pt2", 00:07:21.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.185 "is_configured": true, 00:07:21.185 "data_offset": 2048, 00:07:21.185 "data_size": 63488 00:07:21.185 } 00:07:21.185 ] 00:07:21.185 }' 00:07:21.185 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:21.185 21:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:21.751 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:22.007 [2024-07-15 21:43:36.964093] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:22.008 "name": "raid_bdev1", 00:07:22.008 "aliases": [ 00:07:22.008 "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b" 00:07:22.008 ], 00:07:22.008 "product_name": "Raid Volume", 00:07:22.008 "block_size": 512, 00:07:22.008 "num_blocks": 126976, 00:07:22.008 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:22.008 "assigned_rate_limits": { 00:07:22.008 "rw_ios_per_sec": 0, 00:07:22.008 "rw_mbytes_per_sec": 0, 00:07:22.008 "r_mbytes_per_sec": 0, 00:07:22.008 "w_mbytes_per_sec": 0 00:07:22.008 }, 00:07:22.008 "claimed": false, 00:07:22.008 "zoned": false, 00:07:22.008 "supported_io_types": { 00:07:22.008 "read": true, 00:07:22.008 "write": true, 00:07:22.008 "unmap": true, 00:07:22.008 "flush": true, 00:07:22.008 "reset": true, 00:07:22.008 "nvme_admin": false, 00:07:22.008 "nvme_io": 
false, 00:07:22.008 "nvme_io_md": false, 00:07:22.008 "write_zeroes": true, 00:07:22.008 "zcopy": false, 00:07:22.008 "get_zone_info": false, 00:07:22.008 "zone_management": false, 00:07:22.008 "zone_append": false, 00:07:22.008 "compare": false, 00:07:22.008 "compare_and_write": false, 00:07:22.008 "abort": false, 00:07:22.008 "seek_hole": false, 00:07:22.008 "seek_data": false, 00:07:22.008 "copy": false, 00:07:22.008 "nvme_iov_md": false 00:07:22.008 }, 00:07:22.008 "memory_domains": [ 00:07:22.008 { 00:07:22.008 "dma_device_id": "system", 00:07:22.008 "dma_device_type": 1 00:07:22.008 }, 00:07:22.008 { 00:07:22.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.008 "dma_device_type": 2 00:07:22.008 }, 00:07:22.008 { 00:07:22.008 "dma_device_id": "system", 00:07:22.008 "dma_device_type": 1 00:07:22.008 }, 00:07:22.008 { 00:07:22.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.008 "dma_device_type": 2 00:07:22.008 } 00:07:22.008 ], 00:07:22.008 "driver_specific": { 00:07:22.008 "raid": { 00:07:22.008 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:22.008 "strip_size_kb": 64, 00:07:22.008 "state": "online", 00:07:22.008 "raid_level": "raid0", 00:07:22.008 "superblock": true, 00:07:22.008 "num_base_bdevs": 2, 00:07:22.008 "num_base_bdevs_discovered": 2, 00:07:22.008 "num_base_bdevs_operational": 2, 00:07:22.008 "base_bdevs_list": [ 00:07:22.008 { 00:07:22.008 "name": "pt1", 00:07:22.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.008 "is_configured": true, 00:07:22.008 "data_offset": 2048, 00:07:22.008 "data_size": 63488 00:07:22.008 }, 00:07:22.008 { 00:07:22.008 "name": "pt2", 00:07:22.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.008 "is_configured": true, 00:07:22.008 "data_offset": 2048, 00:07:22.008 "data_size": 63488 00:07:22.008 } 00:07:22.008 ] 00:07:22.008 } 00:07:22.008 } 00:07:22.008 }' 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:22.008 pt2' 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:22.008 21:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:22.265 "name": "pt1", 00:07:22.265 "aliases": [ 00:07:22.265 "00000000-0000-0000-0000-000000000001" 00:07:22.265 ], 00:07:22.265 "product_name": "passthru", 00:07:22.265 "block_size": 512, 00:07:22.265 "num_blocks": 65536, 00:07:22.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.265 "assigned_rate_limits": { 00:07:22.265 "rw_ios_per_sec": 0, 00:07:22.265 "rw_mbytes_per_sec": 0, 00:07:22.265 "r_mbytes_per_sec": 0, 00:07:22.265 "w_mbytes_per_sec": 0 00:07:22.265 }, 00:07:22.265 "claimed": true, 00:07:22.265 "claim_type": "exclusive_write", 00:07:22.265 "zoned": false, 00:07:22.265 "supported_io_types": { 00:07:22.265 "read": true, 00:07:22.265 "write": true, 00:07:22.265 "unmap": true, 00:07:22.265 "flush": true, 00:07:22.265 "reset": true, 00:07:22.265 "nvme_admin": false, 00:07:22.265 "nvme_io": false, 00:07:22.265 "nvme_io_md": false, 00:07:22.265 "write_zeroes": true, 
00:07:22.265 "zcopy": true, 00:07:22.265 "get_zone_info": false, 00:07:22.265 "zone_management": false, 00:07:22.265 "zone_append": false, 00:07:22.265 "compare": false, 00:07:22.265 "compare_and_write": false, 00:07:22.265 "abort": true, 00:07:22.265 "seek_hole": false, 00:07:22.265 "seek_data": false, 00:07:22.265 "copy": true, 00:07:22.265 "nvme_iov_md": false 00:07:22.265 }, 00:07:22.265 "memory_domains": [ 00:07:22.265 { 00:07:22.265 "dma_device_id": "system", 00:07:22.265 "dma_device_type": 1 00:07:22.265 }, 00:07:22.265 { 00:07:22.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.265 "dma_device_type": 2 00:07:22.265 } 00:07:22.265 ], 00:07:22.265 "driver_specific": { 00:07:22.265 "passthru": { 00:07:22.265 "name": "pt1", 00:07:22.265 "base_bdev_name": "malloc1" 00:07:22.265 } 00:07:22.265 } 00:07:22.265 }' 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:22.265 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:22.523 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:22.523 "name": "pt2", 00:07:22.523 "aliases": [ 00:07:22.523 "00000000-0000-0000-0000-000000000002" 00:07:22.523 ], 00:07:22.523 "product_name": "passthru", 00:07:22.523 "block_size": 512, 00:07:22.523 "num_blocks": 65536, 00:07:22.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.523 "assigned_rate_limits": { 00:07:22.523 "rw_ios_per_sec": 0, 00:07:22.523 "rw_mbytes_per_sec": 0, 00:07:22.523 "r_mbytes_per_sec": 0, 00:07:22.523 "w_mbytes_per_sec": 0 00:07:22.523 }, 00:07:22.523 "claimed": true, 00:07:22.523 "claim_type": "exclusive_write", 00:07:22.523 "zoned": false, 00:07:22.523 "supported_io_types": { 00:07:22.523 "read": true, 00:07:22.523 "write": true, 00:07:22.523 "unmap": true, 00:07:22.523 "flush": true, 00:07:22.523 "reset": true, 00:07:22.523 "nvme_admin": false, 00:07:22.523 "nvme_io": false, 00:07:22.523 "nvme_io_md": false, 00:07:22.523 "write_zeroes": true, 00:07:22.523 "zcopy": true, 00:07:22.523 "get_zone_info": false, 00:07:22.523 "zone_management": false, 00:07:22.523 "zone_append": false, 00:07:22.523 
"compare": false, 00:07:22.523 "compare_and_write": false, 00:07:22.523 "abort": true, 00:07:22.523 "seek_hole": false, 00:07:22.523 "seek_data": false, 00:07:22.523 "copy": true, 00:07:22.523 "nvme_iov_md": false 00:07:22.523 }, 00:07:22.524 "memory_domains": [ 00:07:22.524 { 00:07:22.524 "dma_device_id": "system", 00:07:22.524 "dma_device_type": 1 00:07:22.524 }, 00:07:22.524 { 00:07:22.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.524 "dma_device_type": 2 00:07:22.524 } 00:07:22.524 ], 00:07:22.524 "driver_specific": { 00:07:22.524 "passthru": { 00:07:22.524 "name": "pt2", 00:07:22.524 "base_bdev_name": "malloc2" 00:07:22.524 } 00:07:22.524 } 00:07:22.524 }' 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:22.524 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:22.781 [2024-07-15 21:43:37.912075] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.781 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4a2b1e27-42f3-11ef-9f7f-e9a656123a8b 00:07:22.781 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4a2b1e27-42f3-11ef-9f7f-e9a656123a8b ']' 00:07:22.781 21:43:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:23.043 [2024-07-15 21:43:38.184032] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.043 [2024-07-15 21:43:38.184057] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.043 [2024-07-15 21:43:38.184081] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.043 [2024-07-15 21:43:38.184092] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.043 [2024-07-15 21:43:38.184097] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b6f51c34f00 name raid_bdev1, state offline 00:07:23.043 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:23.043 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:23.301 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:23.301 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:23.301 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:23.301 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:23.557 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:23.557 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:23.815 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:23.815 21:43:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.072 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:24.330 [2024-07-15 21:43:39.444034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:24.330 [2024-07-15 21:43:39.444595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:24.330 [2024-07-15 21:43:39.444611] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:24.330 [2024-07-15 21:43:39.444653] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:24.330 [2024-07-15 21:43:39.444663] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:24.330 [2024-07-15 21:43:39.444668] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b6f51c34c80 name raid_bdev1, state configuring 00:07:24.330 request: 00:07:24.330 { 00:07:24.330 "name": "raid_bdev1", 00:07:24.330 "raid_level": "raid0", 00:07:24.330 "base_bdevs": [ 00:07:24.330 "malloc1", 00:07:24.330 "malloc2" 00:07:24.330 ], 00:07:24.330 "strip_size_kb": 64, 00:07:24.330 "superblock": false, 00:07:24.330 "method": "bdev_raid_create", 00:07:24.330 "req_id": 1 00:07:24.330 } 00:07:24.330 Got JSON-RPC error response 00:07:24.330 response: 00:07:24.330 { 00:07:24.330 "code": -17, 00:07:24.330 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:24.330 } 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.330 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:24.589 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:24.589 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:24.589 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.847 [2024-07-15 21:43:39.940016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.847 [2024-07-15 21:43:39.940076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.847 [2024-07-15 21:43:39.940089] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b6f51c34780 00:07:24.847 [2024-07-15 21:43:39.940097] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.847 [2024-07-15 21:43:39.940719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.847 [2024-07-15 21:43:39.940740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.847 [2024-07-15 21:43:39.940765] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:24.847 [2024-07-15 21:43:39.940777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:24.847 pt1 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.847 21:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.105 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.105 "name": "raid_bdev1", 00:07:25.105 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:25.105 "strip_size_kb": 64, 00:07:25.105 "state": "configuring", 00:07:25.105 "raid_level": "raid0", 00:07:25.105 "superblock": true, 00:07:25.105 "num_base_bdevs": 2, 00:07:25.105 "num_base_bdevs_discovered": 1, 00:07:25.105 "num_base_bdevs_operational": 2, 00:07:25.105 "base_bdevs_list": [ 00:07:25.105 { 00:07:25.105 "name": "pt1", 00:07:25.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.105 "is_configured": true, 00:07:25.105 "data_offset": 2048, 00:07:25.105 "data_size": 63488 00:07:25.105 }, 00:07:25.105 { 00:07:25.105 "name": null, 00:07:25.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.105 "is_configured": false, 00:07:25.105 "data_offset": 2048, 00:07:25.105 "data_size": 63488 00:07:25.105 } 00:07:25.105 ] 00:07:25.105 }' 00:07:25.105 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:25.105 21:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.361 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:25.361 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:25.361 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:25.361 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:25.926 [2024-07-15 21:43:40.811995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:25.926 [2024-07-15 21:43:40.812051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.926 [2024-07-15 21:43:40.812063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b6f51c34f00 00:07:25.926 [2024-07-15 21:43:40.812071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.926 [2024-07-15 21:43:40.812183] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.926 [2024-07-15 21:43:40.812194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:25.926 [2024-07-15 21:43:40.812216] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:25.926 
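(A minimal sketch of the check verify_raid_bdev_state ran just above, after only pt1 had been recreated; it assumes the app is still listening on /var/tmp/spdk-raid.sock as traced, and while the rpc.py and jq calls are taken from the trace, the [[ ]] comparisons are an assumed simplification of the helper's internals, not its verbatim code.)

tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# After recreating only pt1, the superblock examine path should leave raid_bdev1
# half-assembled: state "configuring" with 1 of 2 base bdevs discovered.
[[ $(jq -r '.state' <<< "$tmp") == configuring ]]
[[ $(jq -r '.raid_level' <<< "$tmp") == raid0 ]]
[[ $(jq -r '.strip_size_kb' <<< "$tmp") == 64 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 1 ]]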
[2024-07-15 21:43:40.812226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:25.926 [2024-07-15 21:43:40.812252] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b6f51c35180 00:07:25.926 [2024-07-15 21:43:40.812256] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.926 [2024-07-15 21:43:40.812275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b6f51c97e20 00:07:25.926 [2024-07-15 21:43:40.812335] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b6f51c35180 00:07:25.926 [2024-07-15 21:43:40.812340] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b6f51c35180 00:07:25.926 [2024-07-15 21:43:40.812362] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.926 pt2 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.926 21:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.926 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.926 "name": "raid_bdev1", 00:07:25.926 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:25.926 "strip_size_kb": 64, 00:07:25.926 "state": "online", 00:07:25.926 "raid_level": "raid0", 00:07:25.926 "superblock": true, 00:07:25.926 "num_base_bdevs": 2, 00:07:25.926 "num_base_bdevs_discovered": 2, 00:07:25.926 "num_base_bdevs_operational": 2, 00:07:25.926 "base_bdevs_list": [ 00:07:25.926 { 00:07:25.926 "name": "pt1", 00:07:25.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.926 "is_configured": true, 00:07:25.926 "data_offset": 2048, 00:07:25.926 "data_size": 63488 00:07:25.926 }, 00:07:25.926 { 00:07:25.926 "name": "pt2", 00:07:25.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.926 "is_configured": true, 00:07:25.926 "data_offset": 2048, 00:07:25.926 "data_size": 63488 00:07:25.926 } 00:07:25.926 ] 00:07:25.926 }' 00:07:25.926 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:07:25.926 21:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:26.491 [2024-07-15 21:43:41.624037] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.491 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:26.491 "name": "raid_bdev1", 00:07:26.491 "aliases": [ 00:07:26.491 "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b" 00:07:26.491 ], 00:07:26.491 "product_name": "Raid Volume", 00:07:26.491 "block_size": 512, 00:07:26.491 "num_blocks": 126976, 00:07:26.491 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:26.491 "assigned_rate_limits": { 00:07:26.491 "rw_ios_per_sec": 0, 00:07:26.491 "rw_mbytes_per_sec": 0, 00:07:26.491 "r_mbytes_per_sec": 0, 00:07:26.491 "w_mbytes_per_sec": 0 00:07:26.491 }, 00:07:26.491 "claimed": false, 00:07:26.491 "zoned": false, 00:07:26.491 "supported_io_types": { 00:07:26.491 "read": true, 00:07:26.491 "write": true, 00:07:26.491 "unmap": true, 00:07:26.491 "flush": true, 00:07:26.491 "reset": true, 00:07:26.491 "nvme_admin": false, 00:07:26.491 "nvme_io": false, 00:07:26.491 "nvme_io_md": false, 00:07:26.491 "write_zeroes": true, 00:07:26.491 "zcopy": false, 00:07:26.491 "get_zone_info": false, 00:07:26.491 "zone_management": false, 00:07:26.491 "zone_append": false, 00:07:26.491 "compare": false, 00:07:26.491 "compare_and_write": false, 00:07:26.491 "abort": false, 00:07:26.491 "seek_hole": false, 00:07:26.491 "seek_data": false, 00:07:26.491 "copy": false, 00:07:26.491 "nvme_iov_md": false 00:07:26.491 }, 00:07:26.491 "memory_domains": [ 00:07:26.491 { 00:07:26.491 "dma_device_id": "system", 00:07:26.491 "dma_device_type": 1 00:07:26.491 }, 00:07:26.491 { 00:07:26.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.491 "dma_device_type": 2 00:07:26.491 }, 00:07:26.491 { 00:07:26.491 "dma_device_id": "system", 00:07:26.492 "dma_device_type": 1 00:07:26.492 }, 00:07:26.492 { 00:07:26.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.492 "dma_device_type": 2 00:07:26.492 } 00:07:26.492 ], 00:07:26.492 "driver_specific": { 00:07:26.492 "raid": { 00:07:26.492 "uuid": "4a2b1e27-42f3-11ef-9f7f-e9a656123a8b", 00:07:26.492 "strip_size_kb": 64, 00:07:26.492 "state": "online", 00:07:26.492 "raid_level": "raid0", 00:07:26.492 "superblock": true, 00:07:26.492 "num_base_bdevs": 2, 00:07:26.492 "num_base_bdevs_discovered": 2, 00:07:26.492 "num_base_bdevs_operational": 2, 00:07:26.492 "base_bdevs_list": [ 00:07:26.492 { 00:07:26.492 "name": "pt1", 00:07:26.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.492 "is_configured": 
true, 00:07:26.492 "data_offset": 2048, 00:07:26.492 "data_size": 63488 00:07:26.492 }, 00:07:26.492 { 00:07:26.492 "name": "pt2", 00:07:26.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.492 "is_configured": true, 00:07:26.492 "data_offset": 2048, 00:07:26.492 "data_size": 63488 00:07:26.492 } 00:07:26.492 ] 00:07:26.492 } 00:07:26.492 } 00:07:26.492 }' 00:07:26.492 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.492 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:26.492 pt2' 00:07:26.492 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:26.492 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:26.492 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:26.750 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:26.750 "name": "pt1", 00:07:26.750 "aliases": [ 00:07:26.750 "00000000-0000-0000-0000-000000000001" 00:07:26.750 ], 00:07:26.750 "product_name": "passthru", 00:07:26.750 "block_size": 512, 00:07:26.750 "num_blocks": 65536, 00:07:26.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.750 "assigned_rate_limits": { 00:07:26.750 "rw_ios_per_sec": 0, 00:07:26.750 "rw_mbytes_per_sec": 0, 00:07:26.750 "r_mbytes_per_sec": 0, 00:07:26.750 "w_mbytes_per_sec": 0 00:07:26.750 }, 00:07:26.750 "claimed": true, 00:07:26.750 "claim_type": "exclusive_write", 00:07:26.750 "zoned": false, 00:07:26.750 "supported_io_types": { 00:07:26.750 "read": true, 00:07:26.750 "write": true, 00:07:26.750 "unmap": true, 00:07:26.750 "flush": true, 00:07:26.750 "reset": true, 00:07:26.750 "nvme_admin": false, 00:07:26.750 "nvme_io": false, 00:07:26.750 "nvme_io_md": false, 00:07:26.750 "write_zeroes": true, 00:07:26.750 "zcopy": true, 00:07:26.750 "get_zone_info": false, 00:07:26.750 "zone_management": false, 00:07:26.750 "zone_append": false, 00:07:26.750 "compare": false, 00:07:26.750 "compare_and_write": false, 00:07:26.750 "abort": true, 00:07:26.750 "seek_hole": false, 00:07:26.750 "seek_data": false, 00:07:26.750 "copy": true, 00:07:26.750 "nvme_iov_md": false 00:07:26.750 }, 00:07:26.750 "memory_domains": [ 00:07:26.750 { 00:07:26.750 "dma_device_id": "system", 00:07:26.750 "dma_device_type": 1 00:07:26.750 }, 00:07:26.750 { 00:07:26.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.750 "dma_device_type": 2 00:07:26.750 } 00:07:26.750 ], 00:07:26.750 "driver_specific": { 00:07:26.750 "passthru": { 00:07:26.750 "name": "pt1", 00:07:26.750 "base_bdev_name": "malloc1" 00:07:26.750 } 00:07:26.750 } 00:07:26.750 }' 00:07:26.750 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:26.750 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:27.008 21:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:27.266 "name": "pt2", 00:07:27.266 "aliases": [ 00:07:27.266 "00000000-0000-0000-0000-000000000002" 00:07:27.266 ], 00:07:27.266 "product_name": "passthru", 00:07:27.266 "block_size": 512, 00:07:27.266 "num_blocks": 65536, 00:07:27.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.266 "assigned_rate_limits": { 00:07:27.266 "rw_ios_per_sec": 0, 00:07:27.266 "rw_mbytes_per_sec": 0, 00:07:27.266 "r_mbytes_per_sec": 0, 00:07:27.266 "w_mbytes_per_sec": 0 00:07:27.266 }, 00:07:27.266 "claimed": true, 00:07:27.266 "claim_type": "exclusive_write", 00:07:27.266 "zoned": false, 00:07:27.266 "supported_io_types": { 00:07:27.266 "read": true, 00:07:27.266 "write": true, 00:07:27.266 "unmap": true, 00:07:27.266 "flush": true, 00:07:27.266 "reset": true, 00:07:27.266 "nvme_admin": false, 00:07:27.266 "nvme_io": false, 00:07:27.266 "nvme_io_md": false, 00:07:27.266 "write_zeroes": true, 00:07:27.266 "zcopy": true, 00:07:27.266 "get_zone_info": false, 00:07:27.266 "zone_management": false, 00:07:27.266 "zone_append": false, 00:07:27.266 "compare": false, 00:07:27.266 "compare_and_write": false, 00:07:27.266 "abort": true, 00:07:27.266 "seek_hole": false, 00:07:27.266 "seek_data": false, 00:07:27.266 "copy": true, 00:07:27.266 "nvme_iov_md": false 00:07:27.266 }, 00:07:27.266 "memory_domains": [ 00:07:27.266 { 00:07:27.266 "dma_device_id": "system", 00:07:27.266 "dma_device_type": 1 00:07:27.266 }, 00:07:27.266 { 00:07:27.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.266 "dma_device_type": 2 00:07:27.266 } 00:07:27.266 ], 00:07:27.266 "driver_specific": { 00:07:27.266 "passthru": { 00:07:27.266 "name": "pt2", 00:07:27.266 "base_bdev_name": "malloc2" 00:07:27.266 } 00:07:27.266 } 00:07:27.266 }' 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:27.266 21:43:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:27.266 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:27.523 [2024-07-15 21:43:42.576024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.523 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4a2b1e27-42f3-11ef-9f7f-e9a656123a8b '!=' 4a2b1e27-42f3-11ef-9f7f-e9a656123a8b ']' 00:07:27.523 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49205 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 49205 ']' 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 49205 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 49205 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:07:27.524 killing process with pid 49205 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 49205' 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 49205 00:07:27.524 [2024-07-15 21:43:42.609664] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.524 [2024-07-15 21:43:42.609684] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.524 [2024-07-15 21:43:42.609696] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.524 [2024-07-15 21:43:42.609700] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b6f51c35180 name raid_bdev1, state offline 00:07:27.524 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 49205 00:07:27.524 [2024-07-15 21:43:42.620876] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.782 21:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:27.782 00:07:27.782 real 0m9.240s 00:07:27.782 user 0m16.095s 00:07:27.782 sys 0m1.621s 00:07:27.782 21:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:27.782 21:43:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.782 ************************************ 00:07:27.782 END TEST raid_superblock_test 00:07:27.782 ************************************ 00:07:27.782 21:43:42 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:27.782 21:43:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:27.782 21:43:42 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:07:27.782 21:43:42 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:27.782 21:43:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.782 ************************************ 00:07:27.782 START TEST raid_read_error_test 00:07:27.782 ************************************ 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 2 read 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wNK5L3TQ7f 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49474 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49474 
/var/tmp/spdk-raid.sock 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 49474 ']' 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:27.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:27.782 21:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.782 [2024-07-15 21:43:42.857433] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:27.782 [2024-07-15 21:43:42.857661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:28.714 EAL: TSC is not safe to use in SMP mode 00:07:28.714 EAL: TSC is not invariant 00:07:28.714 [2024-07-15 21:43:43.583421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.714 [2024-07-15 21:43:43.666461] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:28.714 [2024-07-15 21:43:43.668536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.714 [2024-07-15 21:43:43.669286] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.714 [2024-07-15 21:43:43.669300] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.714 21:43:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:28.714 21:43:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:07:28.714 21:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:28.714 21:43:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.971 BaseBdev1_malloc 00:07:28.972 21:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:29.230 true 00:07:29.230 21:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.488 [2024-07-15 21:43:44.669145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.488 [2024-07-15 21:43:44.669216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.488 [2024-07-15 21:43:44.669243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6f6e8834780 00:07:29.488 [2024-07-15 21:43:44.669253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
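(For context, the three-RPC stack this test builds under each raid member can be reproduced as below; the commands are the ones traced above, with the error bdev exposing the EE_-prefixed name that the passthru then wraps. A sketch of the setup flow, not the helper's verbatim code.)

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_malloc_create 32 512 -b BaseBdev1_malloc        # 32 MiB backing store, 512-byte blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_error_create BaseBdev1_malloc                   # injectable wrapper, exposed as EE_BaseBdev1_malloc
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1   # the bdev the raid volume claims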
00:07:29.488 [2024-07-15 21:43:44.669900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.488 [2024-07-15 21:43:44.669925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.488 BaseBdev1 00:07:29.746 21:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:29.746 21:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:30.004 BaseBdev2_malloc 00:07:30.004 21:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:30.263 true 00:07:30.263 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:30.521 [2024-07-15 21:43:45.465138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:30.521 [2024-07-15 21:43:45.465195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.521 [2024-07-15 21:43:45.465231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6f6e8834c80 00:07:30.521 [2024-07-15 21:43:45.465239] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.521 [2024-07-15 21:43:45.465893] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.521 [2024-07-15 21:43:45.465919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:30.521 BaseBdev2 00:07:30.521 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:30.780 [2024-07-15 21:43:45.713159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.780 [2024-07-15 21:43:45.713766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.780 [2024-07-15 21:43:45.713857] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x6f6e8834f00 00:07:30.780 [2024-07-15 21:43:45.713865] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.780 [2024-07-15 21:43:45.713898] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6f6e88a0e20 00:07:30.780 [2024-07-15 21:43:45.713972] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x6f6e8834f00 00:07:30.780 [2024-07-15 21:43:45.713977] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x6f6e8834f00 00:07:30.780 [2024-07-15 21:43:45.714004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:30.780 21:43:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.780 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.038 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.038 "name": "raid_bdev1", 00:07:31.038 "uuid": "4fec0fd7-42f3-11ef-9f7f-e9a656123a8b", 00:07:31.038 "strip_size_kb": 64, 00:07:31.038 "state": "online", 00:07:31.038 "raid_level": "raid0", 00:07:31.038 "superblock": true, 00:07:31.038 "num_base_bdevs": 2, 00:07:31.038 "num_base_bdevs_discovered": 2, 00:07:31.038 "num_base_bdevs_operational": 2, 00:07:31.038 "base_bdevs_list": [ 00:07:31.038 { 00:07:31.038 "name": "BaseBdev1", 00:07:31.038 "uuid": "b817050d-4aac-e457-a1ba-f143242d1b97", 00:07:31.038 "is_configured": true, 00:07:31.038 "data_offset": 2048, 00:07:31.038 "data_size": 63488 00:07:31.038 }, 00:07:31.038 { 00:07:31.038 "name": "BaseBdev2", 00:07:31.038 "uuid": "574f8f94-1ebb-fc5a-81aa-12ece5bcfc40", 00:07:31.038 "is_configured": true, 00:07:31.038 "data_offset": 2048, 00:07:31.038 "data_size": 63488 00:07:31.038 } 00:07:31.038 ] 00:07:31.038 }' 00:07:31.038 21:43:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.038 21:43:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.296 21:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:31.296 21:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:31.296 [2024-07-15 21:43:46.481334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6f6e88a0ec0 00:07:32.271 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:32.530 21:43:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.530 21:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.096 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:33.096 "name": "raid_bdev1", 00:07:33.096 "uuid": "4fec0fd7-42f3-11ef-9f7f-e9a656123a8b", 00:07:33.096 "strip_size_kb": 64, 00:07:33.096 "state": "online", 00:07:33.096 "raid_level": "raid0", 00:07:33.096 "superblock": true, 00:07:33.096 "num_base_bdevs": 2, 00:07:33.096 "num_base_bdevs_discovered": 2, 00:07:33.096 "num_base_bdevs_operational": 2, 00:07:33.096 "base_bdevs_list": [ 00:07:33.096 { 00:07:33.096 "name": "BaseBdev1", 00:07:33.096 "uuid": "b817050d-4aac-e457-a1ba-f143242d1b97", 00:07:33.096 "is_configured": true, 00:07:33.096 "data_offset": 2048, 00:07:33.096 "data_size": 63488 00:07:33.096 }, 00:07:33.096 { 00:07:33.096 "name": "BaseBdev2", 00:07:33.096 "uuid": "574f8f94-1ebb-fc5a-81aa-12ece5bcfc40", 00:07:33.096 "is_configured": true, 00:07:33.096 "data_offset": 2048, 00:07:33.096 "data_size": 63488 00:07:33.096 } 00:07:33.096 ] 00:07:33.096 }' 00:07:33.096 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:33.096 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.354 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:33.612 [2024-07-15 21:43:48.582835] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.612 [2024-07-15 21:43:48.582862] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.612 [2024-07-15 21:43:48.583193] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.612 [2024-07-15 21:43:48.583203] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.612 [2024-07-15 21:43:48.583209] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.612 [2024-07-15 21:43:48.583214] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6f6e8834f00 name raid_bdev1, state offline 00:07:33.613 0 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49474 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 49474 ']' 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 49474 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 49474 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:07:33.613 killing process with pid 49474 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 49474' 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 49474 00:07:33.613 [2024-07-15 21:43:48.617810] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.613 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 49474 00:07:33.613 [2024-07-15 21:43:48.628974] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wNK5L3TQ7f 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:33.873 00:07:33.873 real 0m5.973s 00:07:33.873 user 0m8.920s 00:07:33.873 sys 0m1.334s 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:33.873 21:43:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.873 ************************************ 00:07:33.873 END TEST raid_read_error_test 00:07:33.873 ************************************ 00:07:33.873 21:43:48 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:33.873 21:43:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:33.873 21:43:48 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:07:33.873 21:43:48 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:33.873 21:43:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.873 ************************************ 00:07:33.873 START TEST raid_write_error_test 00:07:33.873 ************************************ 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 2 write 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.bptl08GGOb 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49602 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49602 /var/tmp/spdk-raid.sock 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 49602 ']' 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:33.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:33.873 21:43:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.873 [2024-07-15 21:43:48.880636] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
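(A sketch of how these error tests will score failures, mirroring the read-test trace above; passing 'write' as the io_type for this write variant is an assumption, and /raidtest/tmp.bptl08GGOb is the log file bdevperf was just pointed at.)

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
# The script reads column 6 of the raid_bdev1 stats row as fail_per_s and
# requires it to differ from 0.00, i.e. the injected errors actually surfaced.
fail_per_s=$(grep -v Job /raidtest/tmp.bptl08GGOb | grep raid_bdev1 | awk '{print $6}')
[[ "$fail_per_s" != "0.00" ]]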
00:07:33.873 [2024-07-15 21:43:48.880823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:34.437 EAL: TSC is not safe to use in SMP mode 00:07:34.437 EAL: TSC is not invariant 00:07:34.437 [2024-07-15 21:43:49.584484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.695 [2024-07-15 21:43:49.669884] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:34.695 [2024-07-15 21:43:49.671970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.695 [2024-07-15 21:43:49.672737] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.695 [2024-07-15 21:43:49.672744] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.952 21:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:34.952 21:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:07:34.952 21:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:34.952 21:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.211 BaseBdev1_malloc 00:07:35.211 21:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:35.471 true 00:07:35.471 21:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.728 [2024-07-15 21:43:50.768859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.728 [2024-07-15 21:43:50.768926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.728 [2024-07-15 21:43:50.768952] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7048a34780 00:07:35.728 [2024-07-15 21:43:50.768961] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.728 [2024-07-15 21:43:50.769599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.728 [2024-07-15 21:43:50.769635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.728 BaseBdev1 00:07:35.728 21:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:35.728 21:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.985 BaseBdev2_malloc 00:07:35.985 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:36.242 true 00:07:36.242 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:36.500 [2024-07-15 21:43:51.560851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:36.500 [2024-07-15 21:43:51.560907] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.500 [2024-07-15 21:43:51.560933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7048a34c80 00:07:36.500 [2024-07-15 21:43:51.560941] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.500 [2024-07-15 21:43:51.561581] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.500 [2024-07-15 21:43:51.561607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:36.500 BaseBdev2 00:07:36.500 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:36.757 [2024-07-15 21:43:51.820876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.757 [2024-07-15 21:43:51.821442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.757 [2024-07-15 21:43:51.821511] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f7048a34f00 00:07:36.757 [2024-07-15 21:43:51.821517] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.757 [2024-07-15 21:43:51.821549] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7048aa0e20 00:07:36.757 [2024-07-15 21:43:51.821623] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f7048a34f00 00:07:36.757 [2024-07-15 21:43:51.821628] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f7048a34f00 00:07:36.757 [2024-07-15 21:43:51.821663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.757 21:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.015 21:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.015 "name": "raid_bdev1", 00:07:37.015 "uuid": "539006a1-42f3-11ef-9f7f-e9a656123a8b", 00:07:37.015 "strip_size_kb": 64, 00:07:37.015 "state": "online", 00:07:37.015 
"raid_level": "raid0", 00:07:37.015 "superblock": true, 00:07:37.015 "num_base_bdevs": 2, 00:07:37.015 "num_base_bdevs_discovered": 2, 00:07:37.015 "num_base_bdevs_operational": 2, 00:07:37.015 "base_bdevs_list": [ 00:07:37.015 { 00:07:37.015 "name": "BaseBdev1", 00:07:37.015 "uuid": "671f32e9-0d4a-9053-8b34-667540ce733f", 00:07:37.015 "is_configured": true, 00:07:37.015 "data_offset": 2048, 00:07:37.015 "data_size": 63488 00:07:37.015 }, 00:07:37.015 { 00:07:37.015 "name": "BaseBdev2", 00:07:37.015 "uuid": "b37687af-51ed-ed53-9a82-72e094b5f017", 00:07:37.015 "is_configured": true, 00:07:37.015 "data_offset": 2048, 00:07:37.015 "data_size": 63488 00:07:37.015 } 00:07:37.015 ] 00:07:37.015 }' 00:07:37.015 21:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:37.015 21:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 21:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:37.580 21:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:37.580 [2024-07-15 21:43:52.581043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7048aa0ec0 00:07:38.539 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:38.834 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.835 21:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.093 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:39.093 "name": "raid_bdev1", 00:07:39.093 "uuid": "539006a1-42f3-11ef-9f7f-e9a656123a8b", 00:07:39.093 "strip_size_kb": 64, 00:07:39.093 "state": "online", 00:07:39.093 
"raid_level": "raid0", 00:07:39.093 "superblock": true, 00:07:39.093 "num_base_bdevs": 2, 00:07:39.093 "num_base_bdevs_discovered": 2, 00:07:39.093 "num_base_bdevs_operational": 2, 00:07:39.093 "base_bdevs_list": [ 00:07:39.093 { 00:07:39.093 "name": "BaseBdev1", 00:07:39.093 "uuid": "671f32e9-0d4a-9053-8b34-667540ce733f", 00:07:39.093 "is_configured": true, 00:07:39.093 "data_offset": 2048, 00:07:39.093 "data_size": 63488 00:07:39.093 }, 00:07:39.093 { 00:07:39.093 "name": "BaseBdev2", 00:07:39.093 "uuid": "b37687af-51ed-ed53-9a82-72e094b5f017", 00:07:39.093 "is_configured": true, 00:07:39.093 "data_offset": 2048, 00:07:39.093 "data_size": 63488 00:07:39.093 } 00:07:39.093 ] 00:07:39.093 }' 00:07:39.093 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:39.093 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.352 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:39.610 [2024-07-15 21:43:54.657850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.610 [2024-07-15 21:43:54.657880] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.610 [2024-07-15 21:43:54.658225] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.610 [2024-07-15 21:43:54.658253] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.610 [2024-07-15 21:43:54.658260] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.610 [2024-07-15 21:43:54.658264] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f7048a34f00 name raid_bdev1, state offline 00:07:39.610 0 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49602 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 49602 ']' 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 49602 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 49602 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:07:39.610 killing process with pid 49602 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 49602' 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 49602 00:07:39.610 [2024-07-15 21:43:54.689076] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.610 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 49602 00:07:39.610 [2024-07-15 21:43:54.698691] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 
00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.bptl08GGOb 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:39.869 00:07:39.869 real 0m6.014s 00:07:39.869 user 0m8.994s 00:07:39.869 sys 0m1.328s 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:39.869 ************************************ 00:07:39.869 END TEST raid_write_error_test 00:07:39.869 ************************************ 00:07:39.869 21:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.869 21:43:54 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:39.869 21:43:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:39.869 21:43:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:39.869 21:43:54 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:07:39.869 21:43:54 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:39.869 21:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.869 ************************************ 00:07:39.869 START TEST raid_state_function_test 00:07:39.869 ************************************ 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test concat 2 false 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:39.869 21:43:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49724 00:07:39.869 Process raid pid: 49724 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49724' 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49724 /var/tmp/spdk-raid.sock 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 49724 ']' 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:39.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:39.869 21:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.869 [2024-07-15 21:43:54.937563] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:39.869 [2024-07-15 21:43:54.937812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:40.806 EAL: TSC is not safe to use in SMP mode 00:07:40.806 EAL: TSC is not invariant 00:07:40.806 [2024-07-15 21:43:55.634989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.806 [2024-07-15 21:43:55.718439] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
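Note: raid_state_function_test exercises the raid bdev state machine through bdev_svc rather than bdevperf. The long trace that follows boils down to the sequence below (RPC calls as logged; the state comments summarize what verify_raid_bdev_state asserts at each step):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # neither base bdev exists yet, so Existed_Raid stays in "configuring"
    $RPC bdev_malloc_create 32 512 -b BaseBdev1   # one leg of two: still "configuring"
    $RPC bdev_malloc_create 32 512 -b BaseBdev2   # second leg arrives: state flips to "online"
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    $RPC bdev_malloc_delete BaseBdev1             # concat has no redundancy: raid drops to "offline"

The trace also deletes and re-creates Existed_Raid between steps to confirm that a "configuring" raid can be torn down cleanly; that round trip is elided in the sketch.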
00:07:40.806 [2024-07-15 21:43:55.720496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.806 [2024-07-15 21:43:55.721257] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.806 [2024-07-15 21:43:55.721272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.065 21:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:41.065 21:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:07:41.065 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:41.324 [2024-07-15 21:43:56.293199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.324 [2024-07-15 21:43:56.293252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.324 [2024-07-15 21:43:56.293258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.324 [2024-07-15 21:43:56.293267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.324 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.583 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:41.583 "name": "Existed_Raid", 00:07:41.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.583 "strip_size_kb": 64, 00:07:41.583 "state": "configuring", 00:07:41.583 "raid_level": "concat", 00:07:41.583 "superblock": false, 00:07:41.583 "num_base_bdevs": 2, 00:07:41.583 "num_base_bdevs_discovered": 0, 00:07:41.583 "num_base_bdevs_operational": 2, 00:07:41.583 "base_bdevs_list": [ 00:07:41.583 { 00:07:41.583 "name": "BaseBdev1", 00:07:41.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.583 "is_configured": false, 00:07:41.583 "data_offset": 0, 00:07:41.583 "data_size": 0 00:07:41.583 }, 00:07:41.583 { 00:07:41.583 "name": "BaseBdev2", 
00:07:41.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.583 "is_configured": false, 00:07:41.583 "data_offset": 0, 00:07:41.583 "data_size": 0 00:07:41.583 } 00:07:41.583 ] 00:07:41.583 }' 00:07:41.583 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:41.583 21:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.843 21:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:42.109 [2024-07-15 21:43:57.157190] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.109 [2024-07-15 21:43:57.157218] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1386f2434500 name Existed_Raid, state configuring 00:07:42.109 21:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:42.381 [2024-07-15 21:43:57.393196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.381 [2024-07-15 21:43:57.393244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.381 [2024-07-15 21:43:57.393249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.381 [2024-07-15 21:43:57.393258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.381 21:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.639 [2024-07-15 21:43:57.634230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.639 BaseBdev1 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:42.639 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:42.896 21:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:43.155 [ 00:07:43.155 { 00:07:43.155 "name": "BaseBdev1", 00:07:43.155 "aliases": [ 00:07:43.155 "5706eb3a-42f3-11ef-9f7f-e9a656123a8b" 00:07:43.155 ], 00:07:43.155 "product_name": "Malloc disk", 00:07:43.155 "block_size": 512, 00:07:43.155 "num_blocks": 65536, 00:07:43.155 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:43.155 "assigned_rate_limits": { 00:07:43.155 "rw_ios_per_sec": 0, 00:07:43.155 "rw_mbytes_per_sec": 0, 00:07:43.155 "r_mbytes_per_sec": 0, 00:07:43.155 "w_mbytes_per_sec": 0 00:07:43.155 }, 
00:07:43.155 "claimed": true, 00:07:43.155 "claim_type": "exclusive_write", 00:07:43.155 "zoned": false, 00:07:43.155 "supported_io_types": { 00:07:43.155 "read": true, 00:07:43.155 "write": true, 00:07:43.155 "unmap": true, 00:07:43.155 "flush": true, 00:07:43.155 "reset": true, 00:07:43.155 "nvme_admin": false, 00:07:43.155 "nvme_io": false, 00:07:43.155 "nvme_io_md": false, 00:07:43.155 "write_zeroes": true, 00:07:43.155 "zcopy": true, 00:07:43.155 "get_zone_info": false, 00:07:43.155 "zone_management": false, 00:07:43.155 "zone_append": false, 00:07:43.155 "compare": false, 00:07:43.155 "compare_and_write": false, 00:07:43.155 "abort": true, 00:07:43.155 "seek_hole": false, 00:07:43.155 "seek_data": false, 00:07:43.155 "copy": true, 00:07:43.155 "nvme_iov_md": false 00:07:43.155 }, 00:07:43.155 "memory_domains": [ 00:07:43.155 { 00:07:43.155 "dma_device_id": "system", 00:07:43.155 "dma_device_type": 1 00:07:43.155 }, 00:07:43.155 { 00:07:43.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.155 "dma_device_type": 2 00:07:43.155 } 00:07:43.155 ], 00:07:43.155 "driver_specific": {} 00:07:43.155 } 00:07:43.155 ] 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:43.155 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:43.156 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:43.156 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:43.156 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:43.156 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.415 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:43.415 "name": "Existed_Raid", 00:07:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.415 "strip_size_kb": 64, 00:07:43.415 "state": "configuring", 00:07:43.415 "raid_level": "concat", 00:07:43.415 "superblock": false, 00:07:43.415 "num_base_bdevs": 2, 00:07:43.415 "num_base_bdevs_discovered": 1, 00:07:43.415 "num_base_bdevs_operational": 2, 00:07:43.415 "base_bdevs_list": [ 00:07:43.415 { 00:07:43.415 "name": "BaseBdev1", 00:07:43.415 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:43.415 "is_configured": true, 00:07:43.415 "data_offset": 0, 00:07:43.415 "data_size": 65536 00:07:43.415 }, 00:07:43.415 { 00:07:43.415 "name": "BaseBdev2", 00:07:43.415 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:43.415 "is_configured": false, 00:07:43.415 "data_offset": 0, 00:07:43.415 "data_size": 0 00:07:43.415 } 00:07:43.415 ] 00:07:43.415 }' 00:07:43.415 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:43.415 21:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.674 21:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:43.932 [2024-07-15 21:43:59.025222] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.932 [2024-07-15 21:43:59.025266] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1386f2434500 name Existed_Raid, state configuring 00:07:43.932 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:44.191 [2024-07-15 21:43:59.261228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.191 [2024-07-15 21:43:59.262232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.191 [2024-07-15 21:43:59.262277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.191 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.449 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.449 "name": "Existed_Raid", 00:07:44.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.449 "strip_size_kb": 64, 00:07:44.449 "state": "configuring", 00:07:44.449 "raid_level": "concat", 00:07:44.449 "superblock": false, 00:07:44.449 "num_base_bdevs": 2, 00:07:44.449 "num_base_bdevs_discovered": 1, 00:07:44.449 
"num_base_bdevs_operational": 2, 00:07:44.449 "base_bdevs_list": [ 00:07:44.449 { 00:07:44.449 "name": "BaseBdev1", 00:07:44.449 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:44.449 "is_configured": true, 00:07:44.449 "data_offset": 0, 00:07:44.449 "data_size": 65536 00:07:44.449 }, 00:07:44.449 { 00:07:44.449 "name": "BaseBdev2", 00:07:44.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.449 "is_configured": false, 00:07:44.449 "data_offset": 0, 00:07:44.449 "data_size": 0 00:07:44.449 } 00:07:44.449 ] 00:07:44.449 }' 00:07:44.449 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.449 21:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.708 21:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.967 [2024-07-15 21:44:00.137411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.967 [2024-07-15 21:44:00.137444] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1386f2434a00 00:07:44.967 [2024-07-15 21:44:00.137449] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:44.967 [2024-07-15 21:44:00.137473] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1386f2497e20 00:07:44.967 [2024-07-15 21:44:00.137576] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1386f2434a00 00:07:44.967 [2024-07-15 21:44:00.137581] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1386f2434a00 00:07:44.967 [2024-07-15 21:44:00.137619] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.967 BaseBdev2 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:45.226 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.525 [ 00:07:45.525 { 00:07:45.525 "name": "BaseBdev2", 00:07:45.525 "aliases": [ 00:07:45.525 "58850123-42f3-11ef-9f7f-e9a656123a8b" 00:07:45.525 ], 00:07:45.525 "product_name": "Malloc disk", 00:07:45.525 "block_size": 512, 00:07:45.525 "num_blocks": 65536, 00:07:45.525 "uuid": "58850123-42f3-11ef-9f7f-e9a656123a8b", 00:07:45.525 "assigned_rate_limits": { 00:07:45.525 "rw_ios_per_sec": 0, 00:07:45.525 "rw_mbytes_per_sec": 0, 00:07:45.525 "r_mbytes_per_sec": 0, 00:07:45.525 "w_mbytes_per_sec": 0 00:07:45.525 }, 00:07:45.525 "claimed": true, 00:07:45.525 "claim_type": "exclusive_write", 00:07:45.525 "zoned": 
false, 00:07:45.525 "supported_io_types": { 00:07:45.525 "read": true, 00:07:45.525 "write": true, 00:07:45.525 "unmap": true, 00:07:45.525 "flush": true, 00:07:45.525 "reset": true, 00:07:45.525 "nvme_admin": false, 00:07:45.525 "nvme_io": false, 00:07:45.525 "nvme_io_md": false, 00:07:45.525 "write_zeroes": true, 00:07:45.525 "zcopy": true, 00:07:45.525 "get_zone_info": false, 00:07:45.525 "zone_management": false, 00:07:45.525 "zone_append": false, 00:07:45.525 "compare": false, 00:07:45.525 "compare_and_write": false, 00:07:45.525 "abort": true, 00:07:45.525 "seek_hole": false, 00:07:45.525 "seek_data": false, 00:07:45.525 "copy": true, 00:07:45.525 "nvme_iov_md": false 00:07:45.525 }, 00:07:45.525 "memory_domains": [ 00:07:45.525 { 00:07:45.525 "dma_device_id": "system", 00:07:45.525 "dma_device_type": 1 00:07:45.525 }, 00:07:45.525 { 00:07:45.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.525 "dma_device_type": 2 00:07:45.525 } 00:07:45.525 ], 00:07:45.525 "driver_specific": {} 00:07:45.525 } 00:07:45.525 ] 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.525 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.784 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:45.784 "name": "Existed_Raid", 00:07:45.784 "uuid": "58850954-42f3-11ef-9f7f-e9a656123a8b", 00:07:45.784 "strip_size_kb": 64, 00:07:45.784 "state": "online", 00:07:45.784 "raid_level": "concat", 00:07:45.784 "superblock": false, 00:07:45.784 "num_base_bdevs": 2, 00:07:45.784 "num_base_bdevs_discovered": 2, 00:07:45.784 "num_base_bdevs_operational": 2, 00:07:45.784 "base_bdevs_list": [ 00:07:45.784 { 00:07:45.784 "name": "BaseBdev1", 00:07:45.784 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:45.784 "is_configured": true, 00:07:45.784 "data_offset": 0, 00:07:45.784 "data_size": 65536 00:07:45.784 }, 00:07:45.784 { 
00:07:45.784 "name": "BaseBdev2", 00:07:45.784 "uuid": "58850123-42f3-11ef-9f7f-e9a656123a8b", 00:07:45.784 "is_configured": true, 00:07:45.784 "data_offset": 0, 00:07:45.784 "data_size": 65536 00:07:45.784 } 00:07:45.784 ] 00:07:45.784 }' 00:07:45.784 21:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:45.784 21:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:46.042 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:46.300 [2024-07-15 21:44:01.473335] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.558 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:46.558 "name": "Existed_Raid", 00:07:46.558 "aliases": [ 00:07:46.558 "58850954-42f3-11ef-9f7f-e9a656123a8b" 00:07:46.558 ], 00:07:46.558 "product_name": "Raid Volume", 00:07:46.558 "block_size": 512, 00:07:46.558 "num_blocks": 131072, 00:07:46.558 "uuid": "58850954-42f3-11ef-9f7f-e9a656123a8b", 00:07:46.558 "assigned_rate_limits": { 00:07:46.558 "rw_ios_per_sec": 0, 00:07:46.559 "rw_mbytes_per_sec": 0, 00:07:46.559 "r_mbytes_per_sec": 0, 00:07:46.559 "w_mbytes_per_sec": 0 00:07:46.559 }, 00:07:46.559 "claimed": false, 00:07:46.559 "zoned": false, 00:07:46.559 "supported_io_types": { 00:07:46.559 "read": true, 00:07:46.559 "write": true, 00:07:46.559 "unmap": true, 00:07:46.559 "flush": true, 00:07:46.559 "reset": true, 00:07:46.559 "nvme_admin": false, 00:07:46.559 "nvme_io": false, 00:07:46.559 "nvme_io_md": false, 00:07:46.559 "write_zeroes": true, 00:07:46.559 "zcopy": false, 00:07:46.559 "get_zone_info": false, 00:07:46.559 "zone_management": false, 00:07:46.559 "zone_append": false, 00:07:46.559 "compare": false, 00:07:46.559 "compare_and_write": false, 00:07:46.559 "abort": false, 00:07:46.559 "seek_hole": false, 00:07:46.559 "seek_data": false, 00:07:46.559 "copy": false, 00:07:46.559 "nvme_iov_md": false 00:07:46.559 }, 00:07:46.559 "memory_domains": [ 00:07:46.559 { 00:07:46.559 "dma_device_id": "system", 00:07:46.559 "dma_device_type": 1 00:07:46.559 }, 00:07:46.559 { 00:07:46.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.559 "dma_device_type": 2 00:07:46.559 }, 00:07:46.559 { 00:07:46.559 "dma_device_id": "system", 00:07:46.559 "dma_device_type": 1 00:07:46.559 }, 00:07:46.559 { 00:07:46.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.559 "dma_device_type": 2 00:07:46.559 } 00:07:46.559 ], 00:07:46.559 "driver_specific": { 00:07:46.559 "raid": { 00:07:46.559 "uuid": "58850954-42f3-11ef-9f7f-e9a656123a8b", 00:07:46.559 "strip_size_kb": 64, 00:07:46.559 "state": 
"online", 00:07:46.559 "raid_level": "concat", 00:07:46.559 "superblock": false, 00:07:46.559 "num_base_bdevs": 2, 00:07:46.559 "num_base_bdevs_discovered": 2, 00:07:46.559 "num_base_bdevs_operational": 2, 00:07:46.559 "base_bdevs_list": [ 00:07:46.559 { 00:07:46.559 "name": "BaseBdev1", 00:07:46.559 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:46.559 "is_configured": true, 00:07:46.559 "data_offset": 0, 00:07:46.559 "data_size": 65536 00:07:46.559 }, 00:07:46.559 { 00:07:46.559 "name": "BaseBdev2", 00:07:46.559 "uuid": "58850123-42f3-11ef-9f7f-e9a656123a8b", 00:07:46.559 "is_configured": true, 00:07:46.559 "data_offset": 0, 00:07:46.559 "data_size": 65536 00:07:46.559 } 00:07:46.559 ] 00:07:46.559 } 00:07:46.559 } 00:07:46.559 }' 00:07:46.559 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.559 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:46.559 BaseBdev2' 00:07:46.559 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:46.559 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:46.559 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:46.819 "name": "BaseBdev1", 00:07:46.819 "aliases": [ 00:07:46.819 "5706eb3a-42f3-11ef-9f7f-e9a656123a8b" 00:07:46.819 ], 00:07:46.819 "product_name": "Malloc disk", 00:07:46.819 "block_size": 512, 00:07:46.819 "num_blocks": 65536, 00:07:46.819 "uuid": "5706eb3a-42f3-11ef-9f7f-e9a656123a8b", 00:07:46.819 "assigned_rate_limits": { 00:07:46.819 "rw_ios_per_sec": 0, 00:07:46.819 "rw_mbytes_per_sec": 0, 00:07:46.819 "r_mbytes_per_sec": 0, 00:07:46.819 "w_mbytes_per_sec": 0 00:07:46.819 }, 00:07:46.819 "claimed": true, 00:07:46.819 "claim_type": "exclusive_write", 00:07:46.819 "zoned": false, 00:07:46.819 "supported_io_types": { 00:07:46.819 "read": true, 00:07:46.819 "write": true, 00:07:46.819 "unmap": true, 00:07:46.819 "flush": true, 00:07:46.819 "reset": true, 00:07:46.819 "nvme_admin": false, 00:07:46.819 "nvme_io": false, 00:07:46.819 "nvme_io_md": false, 00:07:46.819 "write_zeroes": true, 00:07:46.819 "zcopy": true, 00:07:46.819 "get_zone_info": false, 00:07:46.819 "zone_management": false, 00:07:46.819 "zone_append": false, 00:07:46.819 "compare": false, 00:07:46.819 "compare_and_write": false, 00:07:46.819 "abort": true, 00:07:46.819 "seek_hole": false, 00:07:46.819 "seek_data": false, 00:07:46.819 "copy": true, 00:07:46.819 "nvme_iov_md": false 00:07:46.819 }, 00:07:46.819 "memory_domains": [ 00:07:46.819 { 00:07:46.819 "dma_device_id": "system", 00:07:46.819 "dma_device_type": 1 00:07:46.819 }, 00:07:46.819 { 00:07:46.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.819 "dma_device_type": 2 00:07:46.819 } 00:07:46.819 ], 00:07:46.819 "driver_specific": {} 00:07:46.819 }' 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:46.819 21:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:47.077 "name": "BaseBdev2", 00:07:47.077 "aliases": [ 00:07:47.077 "58850123-42f3-11ef-9f7f-e9a656123a8b" 00:07:47.077 ], 00:07:47.077 "product_name": "Malloc disk", 00:07:47.077 "block_size": 512, 00:07:47.077 "num_blocks": 65536, 00:07:47.077 "uuid": "58850123-42f3-11ef-9f7f-e9a656123a8b", 00:07:47.077 "assigned_rate_limits": { 00:07:47.077 "rw_ios_per_sec": 0, 00:07:47.077 "rw_mbytes_per_sec": 0, 00:07:47.077 "r_mbytes_per_sec": 0, 00:07:47.077 "w_mbytes_per_sec": 0 00:07:47.077 }, 00:07:47.077 "claimed": true, 00:07:47.077 "claim_type": "exclusive_write", 00:07:47.077 "zoned": false, 00:07:47.077 "supported_io_types": { 00:07:47.077 "read": true, 00:07:47.077 "write": true, 00:07:47.077 "unmap": true, 00:07:47.077 "flush": true, 00:07:47.077 "reset": true, 00:07:47.077 "nvme_admin": false, 00:07:47.077 "nvme_io": false, 00:07:47.077 "nvme_io_md": false, 00:07:47.077 "write_zeroes": true, 00:07:47.077 "zcopy": true, 00:07:47.077 "get_zone_info": false, 00:07:47.077 "zone_management": false, 00:07:47.077 "zone_append": false, 00:07:47.077 "compare": false, 00:07:47.077 "compare_and_write": false, 00:07:47.077 "abort": true, 00:07:47.077 "seek_hole": false, 00:07:47.077 "seek_data": false, 00:07:47.077 "copy": true, 00:07:47.077 "nvme_iov_md": false 00:07:47.077 }, 00:07:47.077 "memory_domains": [ 00:07:47.077 { 00:07:47.077 "dma_device_id": "system", 00:07:47.077 "dma_device_type": 1 00:07:47.077 }, 00:07:47.077 { 00:07:47.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.077 "dma_device_type": 2 00:07:47.077 } 00:07:47.077 ], 00:07:47.077 "driver_specific": {} 00:07:47.077 }' 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:47.077 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:47.336 [2024-07-15 21:44:02.453305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.336 [2024-07-15 21:44:02.453335] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.336 [2024-07-15 21:44:02.453368] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.336 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.595 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.595 "name": "Existed_Raid", 00:07:47.595 "uuid": "58850954-42f3-11ef-9f7f-e9a656123a8b", 00:07:47.595 "strip_size_kb": 64, 00:07:47.595 "state": "offline", 00:07:47.595 "raid_level": "concat", 00:07:47.595 "superblock": false, 00:07:47.595 
"num_base_bdevs": 2, 00:07:47.595 "num_base_bdevs_discovered": 1, 00:07:47.595 "num_base_bdevs_operational": 1, 00:07:47.595 "base_bdevs_list": [ 00:07:47.595 { 00:07:47.595 "name": null, 00:07:47.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.595 "is_configured": false, 00:07:47.595 "data_offset": 0, 00:07:47.595 "data_size": 65536 00:07:47.595 }, 00:07:47.595 { 00:07:47.595 "name": "BaseBdev2", 00:07:47.595 "uuid": "58850123-42f3-11ef-9f7f-e9a656123a8b", 00:07:47.595 "is_configured": true, 00:07:47.595 "data_offset": 0, 00:07:47.595 "data_size": 65536 00:07:47.595 } 00:07:47.595 ] 00:07:47.595 }' 00:07:47.595 21:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.595 21:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:48.163 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:48.163 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.163 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:48.421 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:48.421 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.421 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:48.679 [2024-07-15 21:44:03.642136] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.679 [2024-07-15 21:44:03.642183] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1386f2434a00 name Existed_Raid, state offline 00:07:48.679 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:48.679 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:48.679 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.679 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49724 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 49724 ']' 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 49724 00:07:48.936 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 49724 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # tail -1 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:07:48.937 killing process with pid 49724 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 49724' 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 49724 00:07:48.937 [2024-07-15 21:44:03.936916] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.937 [2024-07-15 21:44:03.936961] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.937 21:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 49724 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:49.194 00:07:49.194 real 0m9.265s 00:07:49.194 user 0m15.757s 00:07:49.194 sys 0m1.939s 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:49.194 ************************************ 00:07:49.194 END TEST raid_state_function_test 00:07:49.194 ************************************ 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.194 21:44:04 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:49.194 21:44:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:49.194 21:44:04 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:07:49.194 21:44:04 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:49.194 21:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.194 ************************************ 00:07:49.194 START TEST raid_state_function_test_sb 00:07:49.194 ************************************ 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test concat 2 true 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:49.194 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49999 00:07:49.195 Process raid pid: 49999 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49999' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49999 /var/tmp/spdk-raid.sock 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 49999 ']' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:49.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:49.195 21:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.195 [2024-07-15 21:44:04.253317] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:49.195 [2024-07-15 21:44:04.253603] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:49.762 EAL: TSC is not safe to use in SMP mode 00:07:49.762 EAL: TSC is not invariant 00:07:49.762 [2024-07-15 21:44:04.781214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.762 [2024-07-15 21:44:04.894394] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
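[Editor's note: the trace above shows the harness starting a dedicated bdev_svc app on a private RPC socket and blocking until it answers before issuing any bdev_raid RPCs. A minimal sketch of that launch-and-wait pattern follows; the real helpers are bdev_svc plus waitforlisten from autotest_common.sh, and the polling loop below is an illustrative stand-in, not the verbatim helper.]

    # Start the minimal SPDK app that hosts the bdev layer, on its own socket
    # so parallel tests do not collide (flags copied from the trace above).
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # waitforlisten-style poll: loop until the UNIX-domain socket accepts RPCs.
    # rpc_get_methods is a cheap RPC that any live SPDK app answers.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done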
00:07:49.762 [2024-07-15 21:44:04.896851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.762 [2024-07-15 21:44:04.897688] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.762 [2024-07-15 21:44:04.897703] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.329 21:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:50.329 21:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:07:50.329 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:50.588 [2024-07-15 21:44:05.521348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.588 [2024-07-15 21:44:05.521424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.588 [2024-07-15 21:44:05.521430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.588 [2024-07-15 21:44:05.521439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:50.588 "name": "Existed_Raid", 00:07:50.588 "uuid": "5bba8d53-42f3-11ef-9f7f-e9a656123a8b", 00:07:50.588 "strip_size_kb": 64, 00:07:50.588 "state": "configuring", 00:07:50.588 "raid_level": "concat", 00:07:50.588 "superblock": true, 00:07:50.588 "num_base_bdevs": 2, 00:07:50.588 "num_base_bdevs_discovered": 0, 00:07:50.588 "num_base_bdevs_operational": 2, 00:07:50.588 "base_bdevs_list": [ 00:07:50.588 { 00:07:50.588 "name": "BaseBdev1", 00:07:50.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.588 "is_configured": false, 00:07:50.588 "data_offset": 0, 00:07:50.588 "data_size": 0 00:07:50.588 }, 
00:07:50.588 { 00:07:50.588 "name": "BaseBdev2", 00:07:50.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.588 "is_configured": false, 00:07:50.588 "data_offset": 0, 00:07:50.588 "data_size": 0 00:07:50.588 } 00:07:50.588 ] 00:07:50.588 }' 00:07:50.588 21:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:50.845 21:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.103 21:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:51.363 [2024-07-15 21:44:06.377351] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.363 [2024-07-15 21:44:06.377418] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f7518034500 name Existed_Raid, state configuring 00:07:51.363 21:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.621 [2024-07-15 21:44:06.661410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.621 [2024-07-15 21:44:06.661472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.621 [2024-07-15 21:44:06.661477] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.621 [2024-07-15 21:44:06.661502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.621 21:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.878 [2024-07-15 21:44:06.898611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.878 BaseBdev1 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:51.878 21:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:52.135 21:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.393 [ 00:07:52.393 { 00:07:52.393 "name": "BaseBdev1", 00:07:52.393 "aliases": [ 00:07:52.393 "5c8c8508-42f3-11ef-9f7f-e9a656123a8b" 00:07:52.393 ], 00:07:52.393 "product_name": "Malloc disk", 00:07:52.393 "block_size": 512, 00:07:52.393 "num_blocks": 65536, 00:07:52.393 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:52.393 "assigned_rate_limits": { 00:07:52.393 "rw_ios_per_sec": 0, 00:07:52.393 "rw_mbytes_per_sec": 
0, 00:07:52.393 "r_mbytes_per_sec": 0, 00:07:52.393 "w_mbytes_per_sec": 0 00:07:52.393 }, 00:07:52.393 "claimed": true, 00:07:52.393 "claim_type": "exclusive_write", 00:07:52.393 "zoned": false, 00:07:52.393 "supported_io_types": { 00:07:52.393 "read": true, 00:07:52.393 "write": true, 00:07:52.393 "unmap": true, 00:07:52.393 "flush": true, 00:07:52.393 "reset": true, 00:07:52.393 "nvme_admin": false, 00:07:52.393 "nvme_io": false, 00:07:52.393 "nvme_io_md": false, 00:07:52.393 "write_zeroes": true, 00:07:52.393 "zcopy": true, 00:07:52.393 "get_zone_info": false, 00:07:52.393 "zone_management": false, 00:07:52.393 "zone_append": false, 00:07:52.393 "compare": false, 00:07:52.393 "compare_and_write": false, 00:07:52.393 "abort": true, 00:07:52.393 "seek_hole": false, 00:07:52.393 "seek_data": false, 00:07:52.393 "copy": true, 00:07:52.393 "nvme_iov_md": false 00:07:52.393 }, 00:07:52.393 "memory_domains": [ 00:07:52.393 { 00:07:52.393 "dma_device_id": "system", 00:07:52.393 "dma_device_type": 1 00:07:52.393 }, 00:07:52.393 { 00:07:52.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.393 "dma_device_type": 2 00:07:52.393 } 00:07:52.393 ], 00:07:52.393 "driver_specific": {} 00:07:52.393 } 00:07:52.393 ] 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.393 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.651 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:52.651 "name": "Existed_Raid", 00:07:52.651 "uuid": "5c68825d-42f3-11ef-9f7f-e9a656123a8b", 00:07:52.651 "strip_size_kb": 64, 00:07:52.651 "state": "configuring", 00:07:52.651 "raid_level": "concat", 00:07:52.651 "superblock": true, 00:07:52.651 "num_base_bdevs": 2, 00:07:52.651 "num_base_bdevs_discovered": 1, 00:07:52.651 "num_base_bdevs_operational": 2, 00:07:52.651 "base_bdevs_list": [ 00:07:52.651 { 00:07:52.651 "name": "BaseBdev1", 00:07:52.651 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:52.651 "is_configured": true, 00:07:52.651 "data_offset": 2048, 00:07:52.651 "data_size": 
63488 00:07:52.651 }, 00:07:52.651 { 00:07:52.651 "name": "BaseBdev2", 00:07:52.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.651 "is_configured": false, 00:07:52.651 "data_offset": 0, 00:07:52.651 "data_size": 0 00:07:52.651 } 00:07:52.651 ] 00:07:52.651 }' 00:07:52.651 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:52.651 21:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.909 21:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:53.168 [2024-07-15 21:44:08.161447] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.168 [2024-07-15 21:44:08.161490] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f7518034500 name Existed_Raid, state configuring 00:07:53.168 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:53.426 [2024-07-15 21:44:08.449472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.426 [2024-07-15 21:44:08.450549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.426 [2024-07-15 21:44:08.450598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.426 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.685 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:53.685 "name": "Existed_Raid", 00:07:53.685 "uuid": "5d7958db-42f3-11ef-9f7f-e9a656123a8b", 00:07:53.685 "strip_size_kb": 64, 00:07:53.685 
"state": "configuring", 00:07:53.685 "raid_level": "concat", 00:07:53.685 "superblock": true, 00:07:53.685 "num_base_bdevs": 2, 00:07:53.685 "num_base_bdevs_discovered": 1, 00:07:53.685 "num_base_bdevs_operational": 2, 00:07:53.685 "base_bdevs_list": [ 00:07:53.685 { 00:07:53.685 "name": "BaseBdev1", 00:07:53.685 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:53.685 "is_configured": true, 00:07:53.685 "data_offset": 2048, 00:07:53.685 "data_size": 63488 00:07:53.685 }, 00:07:53.685 { 00:07:53.685 "name": "BaseBdev2", 00:07:53.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.685 "is_configured": false, 00:07:53.685 "data_offset": 0, 00:07:53.685 "data_size": 0 00:07:53.685 } 00:07:53.685 ] 00:07:53.685 }' 00:07:53.685 21:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:53.685 21:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.943 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.201 [2024-07-15 21:44:09.369671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.201 [2024-07-15 21:44:09.369749] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f7518034a00 00:07:54.201 [2024-07-15 21:44:09.369756] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.201 [2024-07-15 21:44:09.369778] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7518097e20 00:07:54.201 [2024-07-15 21:44:09.369827] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f7518034a00 00:07:54.201 [2024-07-15 21:44:09.369832] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f7518034a00 00:07:54.201 [2024-07-15 21:44:09.369854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.201 BaseBdev2 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:07:54.201 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:54.460 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.719 [ 00:07:54.719 { 00:07:54.719 "name": "BaseBdev2", 00:07:54.719 "aliases": [ 00:07:54.719 "5e05bbda-42f3-11ef-9f7f-e9a656123a8b" 00:07:54.719 ], 00:07:54.719 "product_name": "Malloc disk", 00:07:54.719 "block_size": 512, 00:07:54.719 "num_blocks": 65536, 00:07:54.719 "uuid": "5e05bbda-42f3-11ef-9f7f-e9a656123a8b", 00:07:54.719 "assigned_rate_limits": { 00:07:54.719 "rw_ios_per_sec": 0, 
00:07:54.719 "rw_mbytes_per_sec": 0, 00:07:54.719 "r_mbytes_per_sec": 0, 00:07:54.719 "w_mbytes_per_sec": 0 00:07:54.719 }, 00:07:54.719 "claimed": true, 00:07:54.719 "claim_type": "exclusive_write", 00:07:54.719 "zoned": false, 00:07:54.719 "supported_io_types": { 00:07:54.719 "read": true, 00:07:54.719 "write": true, 00:07:54.719 "unmap": true, 00:07:54.719 "flush": true, 00:07:54.719 "reset": true, 00:07:54.719 "nvme_admin": false, 00:07:54.719 "nvme_io": false, 00:07:54.719 "nvme_io_md": false, 00:07:54.719 "write_zeroes": true, 00:07:54.719 "zcopy": true, 00:07:54.719 "get_zone_info": false, 00:07:54.719 "zone_management": false, 00:07:54.719 "zone_append": false, 00:07:54.719 "compare": false, 00:07:54.719 "compare_and_write": false, 00:07:54.719 "abort": true, 00:07:54.719 "seek_hole": false, 00:07:54.719 "seek_data": false, 00:07:54.719 "copy": true, 00:07:54.719 "nvme_iov_md": false 00:07:54.719 }, 00:07:54.719 "memory_domains": [ 00:07:54.719 { 00:07:54.719 "dma_device_id": "system", 00:07:54.719 "dma_device_type": 1 00:07:54.719 }, 00:07:54.719 { 00:07:54.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.719 "dma_device_type": 2 00:07:54.719 } 00:07:54.719 ], 00:07:54.719 "driver_specific": {} 00:07:54.719 } 00:07:54.719 ] 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.719 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.977 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.977 21:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.977 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.977 "name": "Existed_Raid", 00:07:54.977 "uuid": "5d7958db-42f3-11ef-9f7f-e9a656123a8b", 00:07:54.977 "strip_size_kb": 64, 00:07:54.977 "state": "online", 00:07:54.977 "raid_level": "concat", 00:07:54.977 "superblock": true, 00:07:54.977 "num_base_bdevs": 2, 00:07:54.977 "num_base_bdevs_discovered": 2, 00:07:54.977 "num_base_bdevs_operational": 2, 
00:07:54.977 "base_bdevs_list": [ 00:07:54.977 { 00:07:54.977 "name": "BaseBdev1", 00:07:54.977 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:54.977 "is_configured": true, 00:07:54.977 "data_offset": 2048, 00:07:54.977 "data_size": 63488 00:07:54.977 }, 00:07:54.977 { 00:07:54.977 "name": "BaseBdev2", 00:07:54.977 "uuid": "5e05bbda-42f3-11ef-9f7f-e9a656123a8b", 00:07:54.977 "is_configured": true, 00:07:54.977 "data_offset": 2048, 00:07:54.977 "data_size": 63488 00:07:54.977 } 00:07:54.977 ] 00:07:54.977 }' 00:07:54.977 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.977 21:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:55.543 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:55.801 [2024-07-15 21:44:10.801540] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:55.801 "name": "Existed_Raid", 00:07:55.801 "aliases": [ 00:07:55.801 "5d7958db-42f3-11ef-9f7f-e9a656123a8b" 00:07:55.801 ], 00:07:55.801 "product_name": "Raid Volume", 00:07:55.801 "block_size": 512, 00:07:55.801 "num_blocks": 126976, 00:07:55.801 "uuid": "5d7958db-42f3-11ef-9f7f-e9a656123a8b", 00:07:55.801 "assigned_rate_limits": { 00:07:55.801 "rw_ios_per_sec": 0, 00:07:55.801 "rw_mbytes_per_sec": 0, 00:07:55.801 "r_mbytes_per_sec": 0, 00:07:55.801 "w_mbytes_per_sec": 0 00:07:55.801 }, 00:07:55.801 "claimed": false, 00:07:55.801 "zoned": false, 00:07:55.801 "supported_io_types": { 00:07:55.801 "read": true, 00:07:55.801 "write": true, 00:07:55.801 "unmap": true, 00:07:55.801 "flush": true, 00:07:55.801 "reset": true, 00:07:55.801 "nvme_admin": false, 00:07:55.801 "nvme_io": false, 00:07:55.801 "nvme_io_md": false, 00:07:55.801 "write_zeroes": true, 00:07:55.801 "zcopy": false, 00:07:55.801 "get_zone_info": false, 00:07:55.801 "zone_management": false, 00:07:55.801 "zone_append": false, 00:07:55.801 "compare": false, 00:07:55.801 "compare_and_write": false, 00:07:55.801 "abort": false, 00:07:55.801 "seek_hole": false, 00:07:55.801 "seek_data": false, 00:07:55.801 "copy": false, 00:07:55.801 "nvme_iov_md": false 00:07:55.801 }, 00:07:55.801 "memory_domains": [ 00:07:55.801 { 00:07:55.801 "dma_device_id": "system", 00:07:55.801 "dma_device_type": 1 00:07:55.801 }, 00:07:55.801 { 00:07:55.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.801 "dma_device_type": 2 00:07:55.801 }, 00:07:55.801 { 00:07:55.801 "dma_device_id": "system", 00:07:55.801 "dma_device_type": 1 00:07:55.801 
}, 00:07:55.801 { 00:07:55.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.801 "dma_device_type": 2 00:07:55.801 } 00:07:55.801 ], 00:07:55.801 "driver_specific": { 00:07:55.801 "raid": { 00:07:55.801 "uuid": "5d7958db-42f3-11ef-9f7f-e9a656123a8b", 00:07:55.801 "strip_size_kb": 64, 00:07:55.801 "state": "online", 00:07:55.801 "raid_level": "concat", 00:07:55.801 "superblock": true, 00:07:55.801 "num_base_bdevs": 2, 00:07:55.801 "num_base_bdevs_discovered": 2, 00:07:55.801 "num_base_bdevs_operational": 2, 00:07:55.801 "base_bdevs_list": [ 00:07:55.801 { 00:07:55.801 "name": "BaseBdev1", 00:07:55.801 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:55.801 "is_configured": true, 00:07:55.801 "data_offset": 2048, 00:07:55.801 "data_size": 63488 00:07:55.801 }, 00:07:55.801 { 00:07:55.801 "name": "BaseBdev2", 00:07:55.801 "uuid": "5e05bbda-42f3-11ef-9f7f-e9a656123a8b", 00:07:55.801 "is_configured": true, 00:07:55.801 "data_offset": 2048, 00:07:55.801 "data_size": 63488 00:07:55.801 } 00:07:55.801 ] 00:07:55.801 } 00:07:55.801 } 00:07:55.801 }' 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:55.801 BaseBdev2' 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:55.801 21:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.058 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.058 "name": "BaseBdev1", 00:07:56.058 "aliases": [ 00:07:56.058 "5c8c8508-42f3-11ef-9f7f-e9a656123a8b" 00:07:56.058 ], 00:07:56.058 "product_name": "Malloc disk", 00:07:56.058 "block_size": 512, 00:07:56.058 "num_blocks": 65536, 00:07:56.058 "uuid": "5c8c8508-42f3-11ef-9f7f-e9a656123a8b", 00:07:56.058 "assigned_rate_limits": { 00:07:56.058 "rw_ios_per_sec": 0, 00:07:56.058 "rw_mbytes_per_sec": 0, 00:07:56.058 "r_mbytes_per_sec": 0, 00:07:56.058 "w_mbytes_per_sec": 0 00:07:56.058 }, 00:07:56.058 "claimed": true, 00:07:56.058 "claim_type": "exclusive_write", 00:07:56.058 "zoned": false, 00:07:56.058 "supported_io_types": { 00:07:56.058 "read": true, 00:07:56.058 "write": true, 00:07:56.058 "unmap": true, 00:07:56.058 "flush": true, 00:07:56.058 "reset": true, 00:07:56.058 "nvme_admin": false, 00:07:56.058 "nvme_io": false, 00:07:56.058 "nvme_io_md": false, 00:07:56.058 "write_zeroes": true, 00:07:56.058 "zcopy": true, 00:07:56.058 "get_zone_info": false, 00:07:56.058 "zone_management": false, 00:07:56.058 "zone_append": false, 00:07:56.058 "compare": false, 00:07:56.058 "compare_and_write": false, 00:07:56.058 "abort": true, 00:07:56.058 "seek_hole": false, 00:07:56.058 "seek_data": false, 00:07:56.058 "copy": true, 00:07:56.058 "nvme_iov_md": false 00:07:56.058 }, 00:07:56.058 "memory_domains": [ 00:07:56.058 { 00:07:56.058 "dma_device_id": "system", 00:07:56.058 "dma_device_type": 1 00:07:56.058 }, 00:07:56.058 { 00:07:56.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.058 "dma_device_type": 2 00:07:56.058 } 00:07:56.058 ], 00:07:56.058 "driver_specific": {} 00:07:56.058 }' 00:07:56.059 21:44:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:56.059 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.319 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.319 "name": "BaseBdev2", 00:07:56.319 "aliases": [ 00:07:56.319 "5e05bbda-42f3-11ef-9f7f-e9a656123a8b" 00:07:56.319 ], 00:07:56.319 "product_name": "Malloc disk", 00:07:56.319 "block_size": 512, 00:07:56.319 "num_blocks": 65536, 00:07:56.319 "uuid": "5e05bbda-42f3-11ef-9f7f-e9a656123a8b", 00:07:56.319 "assigned_rate_limits": { 00:07:56.319 "rw_ios_per_sec": 0, 00:07:56.319 "rw_mbytes_per_sec": 0, 00:07:56.319 "r_mbytes_per_sec": 0, 00:07:56.319 "w_mbytes_per_sec": 0 00:07:56.319 }, 00:07:56.319 "claimed": true, 00:07:56.319 "claim_type": "exclusive_write", 00:07:56.319 "zoned": false, 00:07:56.319 "supported_io_types": { 00:07:56.319 "read": true, 00:07:56.319 "write": true, 00:07:56.319 "unmap": true, 00:07:56.319 "flush": true, 00:07:56.319 "reset": true, 00:07:56.319 "nvme_admin": false, 00:07:56.319 "nvme_io": false, 00:07:56.319 "nvme_io_md": false, 00:07:56.319 "write_zeroes": true, 00:07:56.319 "zcopy": true, 00:07:56.319 "get_zone_info": false, 00:07:56.319 "zone_management": false, 00:07:56.319 "zone_append": false, 00:07:56.319 "compare": false, 00:07:56.319 "compare_and_write": false, 00:07:56.319 "abort": true, 00:07:56.319 "seek_hole": false, 00:07:56.319 "seek_data": false, 00:07:56.319 "copy": true, 00:07:56.319 "nvme_iov_md": false 00:07:56.319 }, 00:07:56.319 "memory_domains": [ 00:07:56.319 { 00:07:56.319 "dma_device_id": "system", 00:07:56.319 "dma_device_type": 1 00:07:56.319 }, 00:07:56.319 { 00:07:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.319 "dma_device_type": 2 00:07:56.319 } 00:07:56.319 ], 00:07:56.319 "driver_specific": {} 00:07:56.319 }' 00:07:56.319 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.319 21:44:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:56.320 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:56.576 [2024-07-15 21:44:11.721519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:56.576 [2024-07-15 21:44:11.721552] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.576 [2024-07-15 21:44:11.721568] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:56.576 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
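[Editor's note: the verify_raid_bdev_state call traced here boils down to fetching the raid's JSON from bdev_raid_get_bdevs and asserting individual fields. A condensed, hedged sketch of those assertions follows; field names and expected values are taken from the dumps in this log, and the real helper in the test script also tracks num_base_bdevs_discovered and the per-bdev entries.]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # concat has no redundancy (has_redundancy returns 1 above), so deleting
    # BaseBdev1 must drive the array from online to offline with one
    # operational base bdev left.
    [[ $(jq -r '.state'         <<<"$info") == offline ]]
    [[ $(jq -r '.raid_level'    <<<"$info") == concat ]]
    [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 1 ]]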
00:07:56.577 21:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.142 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.142 "name": "Existed_Raid", 00:07:57.142 "uuid": "5d7958db-42f3-11ef-9f7f-e9a656123a8b", 00:07:57.142 "strip_size_kb": 64, 00:07:57.142 "state": "offline", 00:07:57.142 "raid_level": "concat", 00:07:57.142 "superblock": true, 00:07:57.142 "num_base_bdevs": 2, 00:07:57.142 "num_base_bdevs_discovered": 1, 00:07:57.142 "num_base_bdevs_operational": 1, 00:07:57.142 "base_bdevs_list": [ 00:07:57.142 { 00:07:57.142 "name": null, 00:07:57.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.142 "is_configured": false, 00:07:57.142 "data_offset": 2048, 00:07:57.142 "data_size": 63488 00:07:57.142 }, 00:07:57.142 { 00:07:57.142 "name": "BaseBdev2", 00:07:57.142 "uuid": "5e05bbda-42f3-11ef-9f7f-e9a656123a8b", 00:07:57.142 "is_configured": true, 00:07:57.142 "data_offset": 2048, 00:07:57.142 "data_size": 63488 00:07:57.142 } 00:07:57.142 ] 00:07:57.142 }' 00:07:57.142 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.142 21:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.399 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:57.399 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:57.399 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.399 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:57.656 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:57.656 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.656 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:57.915 [2024-07-15 21:44:12.869680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.915 [2024-07-15 21:44:12.869723] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f7518034a00 name Existed_Raid, state offline 00:07:57.915 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:57.915 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:57.915 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.915 21:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49999 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@942 -- # '[' -z 49999 ']' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 49999 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 49999 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:07:58.240 killing process with pid 49999 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 49999' 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 49999 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 49999 00:07:58.240 [2024-07-15 21:44:13.144520] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.240 [2024-07-15 21:44:13.144571] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:58.240 00:07:58.240 real 0m9.150s 00:07:58.240 user 0m16.025s 00:07:58.240 sys 0m1.490s 00:07:58.240 ************************************ 00:07:58.240 END TEST raid_state_function_test_sb 00:07:58.240 ************************************ 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:58.240 21:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.499 21:44:13 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:07:58.499 21:44:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:58.499 21:44:13 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:58.499 21:44:13 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:58.499 21:44:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.499 ************************************ 00:07:58.499 START TEST raid_superblock_test 00:07:58.499 ************************************ 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test concat 2 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50273 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50273 /var/tmp/spdk-raid.sock 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 50273 ']' 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:58.499 21:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.499 [2024-07-15 21:44:13.444007] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:58.499 [2024-07-15 21:44:13.444232] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:59.065 EAL: TSC is not safe to use in SMP mode 00:07:59.065 EAL: TSC is not invariant 00:07:59.065 [2024-07-15 21:44:13.961344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.065 [2024-07-15 21:44:14.069629] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
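[Editor's note: the raid_superblock_test entries that follow build the array out of passthru bdevs rather than raw mallocs, so each base bdev carries a fixed UUID that the on-disk superblock can record. The RPC sequence they trace, collected into one sketch with arguments copied from the trace; -s asks bdev_raid_create to write a superblock.]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru
    # bdev whose UUID is pinned so the raid superblock holds known values.
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc bdev_malloc_create 32 512 -b malloc2
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # concat raid with a 64 KiB strip size and an on-disk superblock (-s).
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s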
00:07:59.065 [2024-07-15 21:44:14.072096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.065 [2024-07-15 21:44:14.072986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.065 [2024-07-15 21:44:14.072999] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:59.323 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:59.582 malloc1 00:07:59.582 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.840 [2024-07-15 21:44:14.907936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.840 [2024-07-15 21:44:14.908012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.840 [2024-07-15 21:44:14.908033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f000f034780 00:07:59.841 [2024-07-15 21:44:14.908043] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.841 [2024-07-15 21:44:14.909170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.841 [2024-07-15 21:44:14.909196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.841 pt1 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:59.841 21:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:59.841 21:44:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:00.099 malloc2 00:08:00.099 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.358 [2024-07-15 21:44:15.423941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.358 [2024-07-15 21:44:15.424018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.358 [2024-07-15 21:44:15.424037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f000f034c80 00:08:00.358 [2024-07-15 21:44:15.424055] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.358 [2024-07-15 21:44:15.424923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.358 [2024-07-15 21:44:15.424948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.358 pt2 00:08:00.358 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:00.358 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:00.358 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:00.617 [2024-07-15 21:44:15.663952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:00.617 [2024-07-15 21:44:15.664704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.617 [2024-07-15 21:44:15.664784] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f000f034f00 00:08:00.617 [2024-07-15 21:44:15.664792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:00.617 [2024-07-15 21:44:15.664830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f000f097e20 00:08:00.617 [2024-07-15 21:44:15.664924] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f000f034f00 00:08:00.617 [2024-07-15 21:44:15.664952] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f000f034f00 00:08:00.617 [2024-07-15 21:44:15.664992] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.617 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.875 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:00.875 "name": "raid_bdev1", 00:08:00.875 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:00.875 "strip_size_kb": 64, 00:08:00.875 "state": "online", 00:08:00.875 "raid_level": "concat", 00:08:00.875 "superblock": true, 00:08:00.875 "num_base_bdevs": 2, 00:08:00.875 "num_base_bdevs_discovered": 2, 00:08:00.875 "num_base_bdevs_operational": 2, 00:08:00.875 "base_bdevs_list": [ 00:08:00.875 { 00:08:00.875 "name": "pt1", 00:08:00.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.875 "is_configured": true, 00:08:00.875 "data_offset": 2048, 00:08:00.875 "data_size": 63488 00:08:00.875 }, 00:08:00.875 { 00:08:00.875 "name": "pt2", 00:08:00.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.875 "is_configured": true, 00:08:00.875 "data_offset": 2048, 00:08:00.875 "data_size": 63488 00:08:00.875 } 00:08:00.875 ] 00:08:00.875 }' 00:08:00.875 21:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:00.875 21:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:01.134 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:01.392 [2024-07-15 21:44:16.491992] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:01.392 "name": "raid_bdev1", 00:08:01.392 "aliases": [ 00:08:01.392 "61c63096-42f3-11ef-9f7f-e9a656123a8b" 00:08:01.392 ], 00:08:01.392 "product_name": "Raid Volume", 00:08:01.392 "block_size": 512, 00:08:01.392 "num_blocks": 126976, 00:08:01.392 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:01.392 "assigned_rate_limits": { 00:08:01.392 "rw_ios_per_sec": 0, 00:08:01.392 "rw_mbytes_per_sec": 0, 00:08:01.392 "r_mbytes_per_sec": 0, 00:08:01.392 "w_mbytes_per_sec": 0 00:08:01.392 }, 00:08:01.392 "claimed": false, 00:08:01.392 "zoned": false, 00:08:01.392 "supported_io_types": { 00:08:01.392 "read": true, 00:08:01.392 "write": true, 00:08:01.392 "unmap": true, 00:08:01.392 "flush": true, 00:08:01.392 "reset": true, 00:08:01.392 "nvme_admin": false, 00:08:01.392 "nvme_io": 
false, 00:08:01.392 "nvme_io_md": false, 00:08:01.392 "write_zeroes": true, 00:08:01.392 "zcopy": false, 00:08:01.392 "get_zone_info": false, 00:08:01.392 "zone_management": false, 00:08:01.392 "zone_append": false, 00:08:01.392 "compare": false, 00:08:01.392 "compare_and_write": false, 00:08:01.392 "abort": false, 00:08:01.392 "seek_hole": false, 00:08:01.392 "seek_data": false, 00:08:01.392 "copy": false, 00:08:01.392 "nvme_iov_md": false 00:08:01.392 }, 00:08:01.392 "memory_domains": [ 00:08:01.392 { 00:08:01.392 "dma_device_id": "system", 00:08:01.392 "dma_device_type": 1 00:08:01.392 }, 00:08:01.392 { 00:08:01.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.392 "dma_device_type": 2 00:08:01.392 }, 00:08:01.392 { 00:08:01.392 "dma_device_id": "system", 00:08:01.392 "dma_device_type": 1 00:08:01.392 }, 00:08:01.392 { 00:08:01.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.392 "dma_device_type": 2 00:08:01.392 } 00:08:01.392 ], 00:08:01.392 "driver_specific": { 00:08:01.392 "raid": { 00:08:01.392 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:01.392 "strip_size_kb": 64, 00:08:01.392 "state": "online", 00:08:01.392 "raid_level": "concat", 00:08:01.392 "superblock": true, 00:08:01.392 "num_base_bdevs": 2, 00:08:01.392 "num_base_bdevs_discovered": 2, 00:08:01.392 "num_base_bdevs_operational": 2, 00:08:01.392 "base_bdevs_list": [ 00:08:01.392 { 00:08:01.392 "name": "pt1", 00:08:01.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.392 "is_configured": true, 00:08:01.392 "data_offset": 2048, 00:08:01.392 "data_size": 63488 00:08:01.392 }, 00:08:01.392 { 00:08:01.392 "name": "pt2", 00:08:01.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.392 "is_configured": true, 00:08:01.392 "data_offset": 2048, 00:08:01.392 "data_size": 63488 00:08:01.392 } 00:08:01.392 ] 00:08:01.392 } 00:08:01.392 } 00:08:01.392 }' 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:01.392 pt2' 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:01.392 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:01.651 "name": "pt1", 00:08:01.651 "aliases": [ 00:08:01.651 "00000000-0000-0000-0000-000000000001" 00:08:01.651 ], 00:08:01.651 "product_name": "passthru", 00:08:01.651 "block_size": 512, 00:08:01.651 "num_blocks": 65536, 00:08:01.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.651 "assigned_rate_limits": { 00:08:01.651 "rw_ios_per_sec": 0, 00:08:01.651 "rw_mbytes_per_sec": 0, 00:08:01.651 "r_mbytes_per_sec": 0, 00:08:01.651 "w_mbytes_per_sec": 0 00:08:01.651 }, 00:08:01.651 "claimed": true, 00:08:01.651 "claim_type": "exclusive_write", 00:08:01.651 "zoned": false, 00:08:01.651 "supported_io_types": { 00:08:01.651 "read": true, 00:08:01.651 "write": true, 00:08:01.651 "unmap": true, 00:08:01.651 "flush": true, 00:08:01.651 "reset": true, 00:08:01.651 "nvme_admin": false, 00:08:01.651 "nvme_io": false, 00:08:01.651 "nvme_io_md": false, 00:08:01.651 "write_zeroes": true, 
00:08:01.651 "zcopy": true, 00:08:01.651 "get_zone_info": false, 00:08:01.651 "zone_management": false, 00:08:01.651 "zone_append": false, 00:08:01.651 "compare": false, 00:08:01.651 "compare_and_write": false, 00:08:01.651 "abort": true, 00:08:01.651 "seek_hole": false, 00:08:01.651 "seek_data": false, 00:08:01.651 "copy": true, 00:08:01.651 "nvme_iov_md": false 00:08:01.651 }, 00:08:01.651 "memory_domains": [ 00:08:01.651 { 00:08:01.651 "dma_device_id": "system", 00:08:01.651 "dma_device_type": 1 00:08:01.651 }, 00:08:01.651 { 00:08:01.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.651 "dma_device_type": 2 00:08:01.651 } 00:08:01.651 ], 00:08:01.651 "driver_specific": { 00:08:01.651 "passthru": { 00:08:01.651 "name": "pt1", 00:08:01.651 "base_bdev_name": "malloc1" 00:08:01.651 } 00:08:01.651 } 00:08:01.651 }' 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.651 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:01.909 21:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:02.168 "name": "pt2", 00:08:02.168 "aliases": [ 00:08:02.168 "00000000-0000-0000-0000-000000000002" 00:08:02.168 ], 00:08:02.168 "product_name": "passthru", 00:08:02.168 "block_size": 512, 00:08:02.168 "num_blocks": 65536, 00:08:02.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.168 "assigned_rate_limits": { 00:08:02.168 "rw_ios_per_sec": 0, 00:08:02.168 "rw_mbytes_per_sec": 0, 00:08:02.168 "r_mbytes_per_sec": 0, 00:08:02.168 "w_mbytes_per_sec": 0 00:08:02.168 }, 00:08:02.168 "claimed": true, 00:08:02.168 "claim_type": "exclusive_write", 00:08:02.168 "zoned": false, 00:08:02.168 "supported_io_types": { 00:08:02.168 "read": true, 00:08:02.168 "write": true, 00:08:02.168 "unmap": true, 00:08:02.168 "flush": true, 00:08:02.168 "reset": true, 00:08:02.168 "nvme_admin": false, 00:08:02.168 "nvme_io": false, 00:08:02.168 "nvme_io_md": false, 00:08:02.168 "write_zeroes": true, 00:08:02.168 "zcopy": true, 00:08:02.168 "get_zone_info": false, 00:08:02.168 "zone_management": false, 00:08:02.168 "zone_append": false, 00:08:02.168 
"compare": false, 00:08:02.168 "compare_and_write": false, 00:08:02.168 "abort": true, 00:08:02.168 "seek_hole": false, 00:08:02.168 "seek_data": false, 00:08:02.168 "copy": true, 00:08:02.168 "nvme_iov_md": false 00:08:02.168 }, 00:08:02.168 "memory_domains": [ 00:08:02.168 { 00:08:02.168 "dma_device_id": "system", 00:08:02.168 "dma_device_type": 1 00:08:02.168 }, 00:08:02.168 { 00:08:02.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.168 "dma_device_type": 2 00:08:02.168 } 00:08:02.168 ], 00:08:02.168 "driver_specific": { 00:08:02.168 "passthru": { 00:08:02.168 "name": "pt2", 00:08:02.168 "base_bdev_name": "malloc2" 00:08:02.168 } 00:08:02.168 } 00:08:02.168 }' 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:02.168 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:02.427 [2024-07-15 21:44:17.436001] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.427 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=61c63096-42f3-11ef-9f7f-e9a656123a8b 00:08:02.427 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 61c63096-42f3-11ef-9f7f-e9a656123a8b ']' 00:08:02.427 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:02.687 [2024-07-15 21:44:17.663945] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.687 [2024-07-15 21:44:17.663974] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.687 [2024-07-15 21:44:17.664011] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.687 [2024-07-15 21:44:17.664026] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.687 [2024-07-15 21:44:17.664031] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f000f034f00 name raid_bdev1, state offline 00:08:02.687 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:02.687 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:02.946 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:02.946 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:02.946 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.946 21:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:03.205 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.205 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:03.463 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:03.463 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.723 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:03.723 [2024-07-15 21:44:18.907992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:03.723 [2024-07-15 21:44:18.908738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:03.723 [2024-07-15 21:44:18.908768] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:03.723 [2024-07-15 21:44:18.908829] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:03.723 [2024-07-15 21:44:18.908840] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.723 [2024-07-15 21:44:18.908845] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f000f034c80 name raid_bdev1, state configuring 00:08:03.981 request: 00:08:03.981 { 00:08:03.981 "name": "raid_bdev1", 00:08:03.981 "raid_level": "concat", 00:08:03.981 "base_bdevs": [ 00:08:03.981 "malloc1", 00:08:03.981 "malloc2" 00:08:03.981 ], 00:08:03.981 "strip_size_kb": 64, 00:08:03.981 "superblock": false, 00:08:03.981 "method": "bdev_raid_create", 00:08:03.981 "req_id": 1 00:08:03.981 } 00:08:03.981 Got JSON-RPC error response 00:08:03.982 response: 00:08:03.982 { 00:08:03.982 "code": -17, 00:08:03.982 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:03.982 } 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:03.982 21:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.241 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:04.241 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:04.241 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.500 [2024-07-15 21:44:19.504003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.500 [2024-07-15 21:44:19.504070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.500 [2024-07-15 21:44:19.504084] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f000f034780 00:08:04.500 [2024-07-15 21:44:19.504093] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.500 [2024-07-15 21:44:19.504923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.500 [2024-07-15 21:44:19.504947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.500 [2024-07-15 21:44:19.504976] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:04.500 [2024-07-15 21:44:19.504989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:04.500 pt1 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.500 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.758 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:04.758 "name": "raid_bdev1", 00:08:04.758 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:04.758 "strip_size_kb": 64, 00:08:04.758 "state": "configuring", 00:08:04.758 "raid_level": "concat", 00:08:04.758 "superblock": true, 00:08:04.758 "num_base_bdevs": 2, 00:08:04.758 "num_base_bdevs_discovered": 1, 00:08:04.759 "num_base_bdevs_operational": 2, 00:08:04.759 "base_bdevs_list": [ 00:08:04.759 { 00:08:04.759 "name": "pt1", 00:08:04.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.759 "is_configured": true, 00:08:04.759 "data_offset": 2048, 00:08:04.759 "data_size": 63488 00:08:04.759 }, 00:08:04.759 { 00:08:04.759 "name": null, 00:08:04.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.759 "is_configured": false, 00:08:04.759 "data_offset": 2048, 00:08:04.759 "data_size": 63488 00:08:04.759 } 00:08:04.759 ] 00:08:04.759 }' 00:08:04.759 21:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:04.759 21:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.018 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:05.018 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:05.018 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:05.018 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:05.276 [2024-07-15 21:44:20.464025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:05.276 [2024-07-15 21:44:20.464106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.277 [2024-07-15 21:44:20.464120] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f000f034f00 00:08:05.277 [2024-07-15 21:44:20.464129] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.536 [2024-07-15 21:44:20.464294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.536 [2024-07-15 21:44:20.464308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:05.536 [2024-07-15 21:44:20.464336] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
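[annotation] At this point raid_superblock_test has deleted raid_bdev1 and both passthru bdevs, confirmed that bdev_raid_create refuses members that already carry a foreign superblock (the -17 "File exists" response above), and is re-registering pt1/pt2 so the on-disk superblock can reassemble the array; the "bdev pt2 is claimed" message continues directly below. A minimal sketch of this reassembly flow, assuming an SPDK target on /var/tmp/spdk-raid.sock as in the trace (the rpc() wrapper is illustrative, not part of the test script):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Re-create the passthru bdevs on top of the surviving malloc bdevs.
  # bdev_raid examines each new bdev, finds the superblock written by the
  # earlier bdev_raid_create -s, and reassembles raid_bdev1 on its own.
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # The array should come back online without an explicit create call.
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'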
00:08:05.536 [2024-07-15 21:44:20.464346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.536 [2024-07-15 21:44:20.464378] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f000f035180 00:08:05.536 [2024-07-15 21:44:20.464383] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:05.536 [2024-07-15 21:44:20.464405] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f000f097e20 00:08:05.536 [2024-07-15 21:44:20.464474] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f000f035180 00:08:05.536 [2024-07-15 21:44:20.464478] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f000f035180 00:08:05.536 [2024-07-15 21:44:20.464502] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.536 pt2 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.536 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.795 21:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.795 "name": "raid_bdev1", 00:08:05.795 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:05.795 "strip_size_kb": 64, 00:08:05.795 "state": "online", 00:08:05.795 "raid_level": "concat", 00:08:05.795 "superblock": true, 00:08:05.795 "num_base_bdevs": 2, 00:08:05.795 "num_base_bdevs_discovered": 2, 00:08:05.795 "num_base_bdevs_operational": 2, 00:08:05.795 "base_bdevs_list": [ 00:08:05.795 { 00:08:05.795 "name": "pt1", 00:08:05.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.795 "is_configured": true, 00:08:05.795 "data_offset": 2048, 00:08:05.795 "data_size": 63488 00:08:05.795 }, 00:08:05.795 { 00:08:05.795 "name": "pt2", 00:08:05.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.795 "is_configured": true, 00:08:05.795 "data_offset": 2048, 00:08:05.795 "data_size": 63488 00:08:05.795 } 00:08:05.795 ] 00:08:05.795 }' 00:08:05.795 21:44:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.795 21:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:06.055 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:06.315 [2024-07-15 21:44:21.424105] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:06.316 "name": "raid_bdev1", 00:08:06.316 "aliases": [ 00:08:06.316 "61c63096-42f3-11ef-9f7f-e9a656123a8b" 00:08:06.316 ], 00:08:06.316 "product_name": "Raid Volume", 00:08:06.316 "block_size": 512, 00:08:06.316 "num_blocks": 126976, 00:08:06.316 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:06.316 "assigned_rate_limits": { 00:08:06.316 "rw_ios_per_sec": 0, 00:08:06.316 "rw_mbytes_per_sec": 0, 00:08:06.316 "r_mbytes_per_sec": 0, 00:08:06.316 "w_mbytes_per_sec": 0 00:08:06.316 }, 00:08:06.316 "claimed": false, 00:08:06.316 "zoned": false, 00:08:06.316 "supported_io_types": { 00:08:06.316 "read": true, 00:08:06.316 "write": true, 00:08:06.316 "unmap": true, 00:08:06.316 "flush": true, 00:08:06.316 "reset": true, 00:08:06.316 "nvme_admin": false, 00:08:06.316 "nvme_io": false, 00:08:06.316 "nvme_io_md": false, 00:08:06.316 "write_zeroes": true, 00:08:06.316 "zcopy": false, 00:08:06.316 "get_zone_info": false, 00:08:06.316 "zone_management": false, 00:08:06.316 "zone_append": false, 00:08:06.316 "compare": false, 00:08:06.316 "compare_and_write": false, 00:08:06.316 "abort": false, 00:08:06.316 "seek_hole": false, 00:08:06.316 "seek_data": false, 00:08:06.316 "copy": false, 00:08:06.316 "nvme_iov_md": false 00:08:06.316 }, 00:08:06.316 "memory_domains": [ 00:08:06.316 { 00:08:06.316 "dma_device_id": "system", 00:08:06.316 "dma_device_type": 1 00:08:06.316 }, 00:08:06.316 { 00:08:06.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.316 "dma_device_type": 2 00:08:06.316 }, 00:08:06.316 { 00:08:06.316 "dma_device_id": "system", 00:08:06.316 "dma_device_type": 1 00:08:06.316 }, 00:08:06.316 { 00:08:06.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.316 "dma_device_type": 2 00:08:06.316 } 00:08:06.316 ], 00:08:06.316 "driver_specific": { 00:08:06.316 "raid": { 00:08:06.316 "uuid": "61c63096-42f3-11ef-9f7f-e9a656123a8b", 00:08:06.316 "strip_size_kb": 64, 00:08:06.316 "state": "online", 00:08:06.316 "raid_level": "concat", 00:08:06.316 "superblock": true, 00:08:06.316 "num_base_bdevs": 2, 00:08:06.316 "num_base_bdevs_discovered": 2, 00:08:06.316 "num_base_bdevs_operational": 2, 00:08:06.316 "base_bdevs_list": [ 00:08:06.316 { 00:08:06.316 "name": "pt1", 00:08:06.316 "uuid": "00000000-0000-0000-0000-000000000001", 
00:08:06.316 "is_configured": true, 00:08:06.316 "data_offset": 2048, 00:08:06.316 "data_size": 63488 00:08:06.316 }, 00:08:06.316 { 00:08:06.316 "name": "pt2", 00:08:06.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.316 "is_configured": true, 00:08:06.316 "data_offset": 2048, 00:08:06.316 "data_size": 63488 00:08:06.316 } 00:08:06.316 ] 00:08:06.316 } 00:08:06.316 } 00:08:06.316 }' 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:06.316 pt2' 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:06.316 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:06.891 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:06.891 "name": "pt1", 00:08:06.891 "aliases": [ 00:08:06.891 "00000000-0000-0000-0000-000000000001" 00:08:06.891 ], 00:08:06.891 "product_name": "passthru", 00:08:06.891 "block_size": 512, 00:08:06.891 "num_blocks": 65536, 00:08:06.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.891 "assigned_rate_limits": { 00:08:06.891 "rw_ios_per_sec": 0, 00:08:06.891 "rw_mbytes_per_sec": 0, 00:08:06.891 "r_mbytes_per_sec": 0, 00:08:06.891 "w_mbytes_per_sec": 0 00:08:06.891 }, 00:08:06.892 "claimed": true, 00:08:06.892 "claim_type": "exclusive_write", 00:08:06.892 "zoned": false, 00:08:06.892 "supported_io_types": { 00:08:06.892 "read": true, 00:08:06.892 "write": true, 00:08:06.892 "unmap": true, 00:08:06.892 "flush": true, 00:08:06.892 "reset": true, 00:08:06.892 "nvme_admin": false, 00:08:06.892 "nvme_io": false, 00:08:06.892 "nvme_io_md": false, 00:08:06.892 "write_zeroes": true, 00:08:06.892 "zcopy": true, 00:08:06.892 "get_zone_info": false, 00:08:06.892 "zone_management": false, 00:08:06.892 "zone_append": false, 00:08:06.892 "compare": false, 00:08:06.892 "compare_and_write": false, 00:08:06.892 "abort": true, 00:08:06.892 "seek_hole": false, 00:08:06.892 "seek_data": false, 00:08:06.892 "copy": true, 00:08:06.892 "nvme_iov_md": false 00:08:06.892 }, 00:08:06.892 "memory_domains": [ 00:08:06.892 { 00:08:06.892 "dma_device_id": "system", 00:08:06.892 "dma_device_type": 1 00:08:06.892 }, 00:08:06.892 { 00:08:06.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.892 "dma_device_type": 2 00:08:06.892 } 00:08:06.892 ], 00:08:06.892 "driver_specific": { 00:08:06.892 "passthru": { 00:08:06.892 "name": "pt1", 00:08:06.892 "base_bdev_name": "malloc1" 00:08:06.892 } 00:08:06.892 } 00:08:06.892 }' 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:06.892 21:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:07.151 "name": "pt2", 00:08:07.151 "aliases": [ 00:08:07.151 "00000000-0000-0000-0000-000000000002" 00:08:07.151 ], 00:08:07.151 "product_name": "passthru", 00:08:07.151 "block_size": 512, 00:08:07.151 "num_blocks": 65536, 00:08:07.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.151 "assigned_rate_limits": { 00:08:07.151 "rw_ios_per_sec": 0, 00:08:07.151 "rw_mbytes_per_sec": 0, 00:08:07.151 "r_mbytes_per_sec": 0, 00:08:07.151 "w_mbytes_per_sec": 0 00:08:07.151 }, 00:08:07.151 "claimed": true, 00:08:07.151 "claim_type": "exclusive_write", 00:08:07.151 "zoned": false, 00:08:07.151 "supported_io_types": { 00:08:07.151 "read": true, 00:08:07.151 "write": true, 00:08:07.151 "unmap": true, 00:08:07.151 "flush": true, 00:08:07.151 "reset": true, 00:08:07.151 "nvme_admin": false, 00:08:07.151 "nvme_io": false, 00:08:07.151 "nvme_io_md": false, 00:08:07.151 "write_zeroes": true, 00:08:07.151 "zcopy": true, 00:08:07.151 "get_zone_info": false, 00:08:07.151 "zone_management": false, 00:08:07.151 "zone_append": false, 00:08:07.151 "compare": false, 00:08:07.151 "compare_and_write": false, 00:08:07.151 "abort": true, 00:08:07.151 "seek_hole": false, 00:08:07.151 "seek_data": false, 00:08:07.151 "copy": true, 00:08:07.151 "nvme_iov_md": false 00:08:07.151 }, 00:08:07.151 "memory_domains": [ 00:08:07.151 { 00:08:07.151 "dma_device_id": "system", 00:08:07.151 "dma_device_type": 1 00:08:07.151 }, 00:08:07.151 { 00:08:07.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.151 "dma_device_type": 2 00:08:07.151 } 00:08:07.151 ], 00:08:07.151 "driver_specific": { 00:08:07.151 "passthru": { 00:08:07.151 "name": "pt2", 00:08:07.151 "base_bdev_name": "malloc2" 00:08:07.151 } 00:08:07.151 } 00:08:07.151 }' 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
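[annotation] verify_raid_bdev_properties, traced above for pt1 and here for pt2 (the md_interleave comparison resumes directly below), checks every base bdev the same way: fetch its JSON with bdev_get_bdevs -b <name>, extract one field with jq, and compare it in bash. A hedged distillation of that pattern; the check_field helper is illustrative only:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  check_field() {  # usage: check_field <bdev> <jq filter> <expected>
      local actual
      actual=$($RPC bdev_get_bdevs -b "$1" | jq -r ".[] | $2")
      [[ "$actual" == "$3" ]]
  }
  check_field pt2 .block_size 512   # passthru inherits malloc2's 512-byte blocks
  check_field pt2 .md_size null     # no separate metadata region
  check_field pt2 .dif_type null    # DIF/DIX is not enabled on malloc-backed bdevs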
00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:07.151 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:07.410 [2024-07-15 21:44:22.436129] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 61c63096-42f3-11ef-9f7f-e9a656123a8b '!=' 61c63096-42f3-11ef-9f7f-e9a656123a8b ']' 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50273 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 50273 ']' 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 50273 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 50273 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:08:07.410 killing process with pid 50273 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 50273' 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 50273 00:08:07.410 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 50273 00:08:07.410 [2024-07-15 21:44:22.466532] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.410 [2024-07-15 21:44:22.466568] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.410 [2024-07-15 21:44:22.466584] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.410 [2024-07-15 21:44:22.466588] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f000f035180 name raid_bdev1, state offline 00:08:07.410 [2024-07-15 21:44:22.483757] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.670 21:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:07.670 00:08:07.670 real 0m9.298s 00:08:07.670 user 0m16.110s 00:08:07.670 sys 0m1.679s 00:08:07.670 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:07.670 
************************************ 00:08:07.670 END TEST raid_superblock_test 00:08:07.670 ************************************ 00:08:07.670 21:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 21:44:22 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:08:07.670 21:44:22 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:07.670 21:44:22 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:08:07.670 21:44:22 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:07.670 21:44:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 ************************************ 00:08:07.670 START TEST raid_read_error_test 00:08:07.670 ************************************ 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 2 read 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0qss5WRrGR 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50542 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50542 /var/tmp/spdk-raid.sock 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 50542 ']' 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:07.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:07.670 21:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.670 [2024-07-15 21:44:22.799873] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:07.670 [2024-07-15 21:44:22.800063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:08.237 EAL: TSC is not safe to use in SMP mode 00:08:08.237 EAL: TSC is not invariant 00:08:08.237 [2024-07-15 21:44:23.383234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.495 [2024-07-15 21:44:23.484324] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:08.495 [2024-07-15 21:44:23.486740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.495 [2024-07-15 21:44:23.487565] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.495 [2024-07-15 21:44:23.487578] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.753 21:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:08.753 21:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:08:08.753 21:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:08.753 21:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:09.012 BaseBdev1_malloc 00:08:09.012 21:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:09.269 true 00:08:09.269 21:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:09.527 [2024-07-15 21:44:24.650333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:09.527 [2024-07-15 21:44:24.650401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.527 [2024-07-15 21:44:24.650436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2feda34780 00:08:09.527 [2024-07-15 21:44:24.650445] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:08:09.527 [2024-07-15 21:44:24.651112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.527 [2024-07-15 21:44:24.651139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:09.527 BaseBdev1 00:08:09.527 21:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:09.527 21:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:09.789 BaseBdev2_malloc 00:08:09.789 21:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:10.056 true 00:08:10.056 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:10.314 [2024-07-15 21:44:25.342341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:10.314 [2024-07-15 21:44:25.342398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.314 [2024-07-15 21:44:25.342425] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2feda34c80 00:08:10.314 [2024-07-15 21:44:25.342435] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.314 [2024-07-15 21:44:25.343078] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.314 [2024-07-15 21:44:25.343099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:10.314 BaseBdev2 00:08:10.314 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:10.572 [2024-07-15 21:44:25.574340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.572 [2024-07-15 21:44:25.574904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.572 [2024-07-15 21:44:25.574966] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e2feda34f00 00:08:10.572 [2024-07-15 21:44:25.574973] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.572 [2024-07-15 21:44:25.575005] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e2fedaa0e20 00:08:10.572 [2024-07-15 21:44:25.575078] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e2feda34f00 00:08:10.572 [2024-07-15 21:44:25.575083] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e2feda34f00 00:08:10.572 [2024-07-15 21:44:25.575109] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
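[annotation] raid_read_error_test assembles its array inside bdevperf rather than bdev_svc, and builds each leg as malloc -> error bdev -> passthru so that I/O failures can later be injected underneath the RAID (the verify_raid_bdev_state trace resumes directly below). A condensed, hedged sketch of the assembly shown above, with the rpc() wrapper again illustrative:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  for i in 1 2; do
      rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"    # backing store
      rpc bdev_error_create "BaseBdev${i}_malloc"               # exposes EE_BaseBdev<i>_malloc
      rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  # -s writes the superblock; -z 64 is the 64 KiB strip size asserted above.
  rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s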
00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.572 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.830 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.830 "name": "raid_bdev1", 00:08:10.830 "uuid": "67ae655b-42f3-11ef-9f7f-e9a656123a8b", 00:08:10.830 "strip_size_kb": 64, 00:08:10.830 "state": "online", 00:08:10.830 "raid_level": "concat", 00:08:10.830 "superblock": true, 00:08:10.830 "num_base_bdevs": 2, 00:08:10.830 "num_base_bdevs_discovered": 2, 00:08:10.830 "num_base_bdevs_operational": 2, 00:08:10.830 "base_bdevs_list": [ 00:08:10.830 { 00:08:10.830 "name": "BaseBdev1", 00:08:10.830 "uuid": "c084d2c9-7627-4558-aaa5-52432a06e2ee", 00:08:10.830 "is_configured": true, 00:08:10.830 "data_offset": 2048, 00:08:10.830 "data_size": 63488 00:08:10.830 }, 00:08:10.830 { 00:08:10.830 "name": "BaseBdev2", 00:08:10.830 "uuid": "e69fb422-3c99-4153-a893-0fb61bd672f6", 00:08:10.830 "is_configured": true, 00:08:10.830 "data_offset": 2048, 00:08:10.830 "data_size": 63488 00:08:10.830 } 00:08:10.830 ] 00:08:10.830 }' 00:08:10.830 21:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.830 21:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.089 21:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:11.089 21:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:11.348 [2024-07-15 21:44:26.286523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e2fedaa0ec0 00:08:12.279 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:12.537 21:44:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.537 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.796 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:12.796 "name": "raid_bdev1", 00:08:12.796 "uuid": "67ae655b-42f3-11ef-9f7f-e9a656123a8b", 00:08:12.796 "strip_size_kb": 64, 00:08:12.796 "state": "online", 00:08:12.796 "raid_level": "concat", 00:08:12.796 "superblock": true, 00:08:12.796 "num_base_bdevs": 2, 00:08:12.796 "num_base_bdevs_discovered": 2, 00:08:12.796 "num_base_bdevs_operational": 2, 00:08:12.796 "base_bdevs_list": [ 00:08:12.796 { 00:08:12.796 "name": "BaseBdev1", 00:08:12.796 "uuid": "c084d2c9-7627-4558-aaa5-52432a06e2ee", 00:08:12.796 "is_configured": true, 00:08:12.796 "data_offset": 2048, 00:08:12.796 "data_size": 63488 00:08:12.796 }, 00:08:12.796 { 00:08:12.796 "name": "BaseBdev2", 00:08:12.796 "uuid": "e69fb422-3c99-4153-a893-0fb61bd672f6", 00:08:12.796 "is_configured": true, 00:08:12.796 "data_offset": 2048, 00:08:12.796 "data_size": 63488 00:08:12.796 } 00:08:12.796 ] 00:08:12.796 }' 00:08:12.796 21:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:12.796 21:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.054 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:13.312 [2024-07-15 21:44:28.315342] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.312 [2024-07-15 21:44:28.315372] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.312 [2024-07-15 21:44:28.315702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.312 [2024-07-15 21:44:28.315719] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.312 [2024-07-15 21:44:28.315727] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.312 [2024-07-15 21:44:28.315732] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2feda34f00 name raid_bdev1, state offline 00:08:13.312 0 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50542 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 50542 ']' 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 50542 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 50542 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:08:13.312 killing process with pid 50542 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 50542' 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 50542 00:08:13.312 [2024-07-15 21:44:28.341964] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.312 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 50542 00:08:13.312 [2024-07-15 21:44:28.353289] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0qss5WRrGR 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:08:13.570 00:08:13.570 real 0m5.748s 00:08:13.570 user 0m8.717s 00:08:13.570 sys 0m1.067s 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:13.570 21:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.570 ************************************ 00:08:13.570 END TEST raid_read_error_test 00:08:13.570 ************************************ 00:08:13.571 21:44:28 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:08:13.571 21:44:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:13.571 21:44:28 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:08:13.571 21:44:28 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:13.571 21:44:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.571 ************************************ 00:08:13.571 START TEST raid_write_error_test 00:08:13.571 ************************************ 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 2 write 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.H8Pu6zocSE 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50666 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50666 /var/tmp/spdk-raid.sock 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 50666 ']' 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:13.571 21:44:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.571 [2024-07-15 21:44:28.588191] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
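The trace above shows raid_write_error_test bringing up bdevperf in wait-for-RPC mode before any bdevs exist. A minimal sketch of that launch sequence, assuming a standard SPDK checkout at $SPDK_DIR; the binary path, socket, and bdevperf flags mirror the log, while the test's waitforlisten helper is approximated here with the framework_wait_init RPC:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk        # checkout path as seen in the trace
    SOCK=/var/tmp/spdk-raid.sock
    LOG=$(mktemp -p /raidtest)                   # bdevperf log, grepped for fail/s at the end

    # -z starts bdevperf idle until a perform_tests RPC arrives;
    # -T raid_bdev1 limits the 60 s randrw run to the raid bdev under test.
    "$SPDK_DIR"/build/examples/bdevperf -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$LOG" &
    raid_pid=$!

    # Stand-in for waitforlisten: block until the app answers RPCs on the socket.
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" -t 30 framework_wait_init

Once the malloc/error/passthru base bdevs and the concat volume are configured over the same socket, I/O is kicked off with examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests, exactly as in the read-error run above.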
00:08:13.571 [2024-07-15 21:44:28.588385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:14.137 EAL: TSC is not safe to use in SMP mode 00:08:14.137 EAL: TSC is not invariant 00:08:14.137 [2024-07-15 21:44:29.100999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.137 [2024-07-15 21:44:29.183198] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:14.137 [2024-07-15 21:44:29.185287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.137 [2024-07-15 21:44:29.186043] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.137 [2024-07-15 21:44:29.186058] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.703 21:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:14.703 21:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:08:14.703 21:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:14.703 21:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:14.961 BaseBdev1_malloc 00:08:14.961 21:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:15.220 true 00:08:15.220 21:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.479 [2024-07-15 21:44:30.460305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.479 [2024-07-15 21:44:30.460370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.479 [2024-07-15 21:44:30.460395] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x278f97834780 00:08:15.479 [2024-07-15 21:44:30.460404] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.479 [2024-07-15 21:44:30.461034] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.479 [2024-07-15 21:44:30.461059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.479 BaseBdev1 00:08:15.479 21:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:15.479 21:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.737 BaseBdev2_malloc 00:08:15.737 21:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:15.995 true 00:08:15.996 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:16.254 [2024-07-15 21:44:31.316305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:16.254 [2024-07-15 21:44:31.316355] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.254 [2024-07-15 21:44:31.316381] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x278f97834c80 00:08:16.254 [2024-07-15 21:44:31.316390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.254 [2024-07-15 21:44:31.317030] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.254 [2024-07-15 21:44:31.317054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:16.254 BaseBdev2 00:08:16.254 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:16.512 [2024-07-15 21:44:31.600317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.512 [2024-07-15 21:44:31.600884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.512 [2024-07-15 21:44:31.600945] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x278f97834f00 00:08:16.512 [2024-07-15 21:44:31.600951] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.512 [2024-07-15 21:44:31.600982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x278f978a0e20 00:08:16.512 [2024-07-15 21:44:31.601054] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x278f97834f00 00:08:16.512 [2024-07-15 21:44:31.601059] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x278f97834f00 00:08:16.512 [2024-07-15 21:44:31.601084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.512 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.771 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:16.771 "name": "raid_bdev1", 00:08:16.771 "uuid": "6b45e323-42f3-11ef-9f7f-e9a656123a8b", 00:08:16.771 "strip_size_kb": 64, 00:08:16.771 "state": "online", 00:08:16.771 
"raid_level": "concat", 00:08:16.771 "superblock": true, 00:08:16.771 "num_base_bdevs": 2, 00:08:16.771 "num_base_bdevs_discovered": 2, 00:08:16.771 "num_base_bdevs_operational": 2, 00:08:16.771 "base_bdevs_list": [ 00:08:16.771 { 00:08:16.771 "name": "BaseBdev1", 00:08:16.771 "uuid": "13e2e8dd-8515-6a58-9467-6e703e4efd9f", 00:08:16.771 "is_configured": true, 00:08:16.771 "data_offset": 2048, 00:08:16.771 "data_size": 63488 00:08:16.771 }, 00:08:16.771 { 00:08:16.771 "name": "BaseBdev2", 00:08:16.771 "uuid": "acf9271b-d034-e257-a786-be4e3d0b3723", 00:08:16.771 "is_configured": true, 00:08:16.771 "data_offset": 2048, 00:08:16.771 "data_size": 63488 00:08:16.771 } 00:08:16.771 ] 00:08:16.771 }' 00:08:16.771 21:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:16.771 21:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.030 21:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:17.030 21:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:17.288 [2024-07-15 21:44:32.272488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x278f978a0ec0 00:08:18.256 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.515 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.774 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:18.774 "name": "raid_bdev1", 00:08:18.774 "uuid": "6b45e323-42f3-11ef-9f7f-e9a656123a8b", 00:08:18.774 "strip_size_kb": 64, 00:08:18.774 "state": "online", 00:08:18.774 
"raid_level": "concat", 00:08:18.774 "superblock": true, 00:08:18.774 "num_base_bdevs": 2, 00:08:18.774 "num_base_bdevs_discovered": 2, 00:08:18.774 "num_base_bdevs_operational": 2, 00:08:18.774 "base_bdevs_list": [ 00:08:18.774 { 00:08:18.774 "name": "BaseBdev1", 00:08:18.774 "uuid": "13e2e8dd-8515-6a58-9467-6e703e4efd9f", 00:08:18.774 "is_configured": true, 00:08:18.774 "data_offset": 2048, 00:08:18.774 "data_size": 63488 00:08:18.774 }, 00:08:18.774 { 00:08:18.774 "name": "BaseBdev2", 00:08:18.774 "uuid": "acf9271b-d034-e257-a786-be4e3d0b3723", 00:08:18.774 "is_configured": true, 00:08:18.774 "data_offset": 2048, 00:08:18.774 "data_size": 63488 00:08:18.774 } 00:08:18.774 ] 00:08:18.774 }' 00:08:18.774 21:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:18.774 21:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:19.303 [2024-07-15 21:44:34.317308] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.303 [2024-07-15 21:44:34.317332] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.303 [2024-07-15 21:44:34.317691] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.303 [2024-07-15 21:44:34.317700] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.303 [2024-07-15 21:44:34.317706] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.303 [2024-07-15 21:44:34.317710] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x278f97834f00 name raid_bdev1, state offline 00:08:19.303 0 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50666 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 50666 ']' 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 50666 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 50666 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:08:19.303 killing process with pid 50666 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 50666' 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 50666 00:08:19.303 [2024-07-15 21:44:34.344565] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.303 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 50666 00:08:19.303 [2024-07-15 21:44:34.355868] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.H8Pu6zocSE 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:08:19.561 00:08:19.561 real 0m5.962s 00:08:19.561 user 0m9.210s 00:08:19.561 sys 0m0.969s 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:19.561 21:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 ************************************ 00:08:19.561 END TEST raid_write_error_test 00:08:19.561 ************************************ 00:08:19.561 21:44:34 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:08:19.561 21:44:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:19.561 21:44:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:19.561 21:44:34 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:08:19.561 21:44:34 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:19.561 21:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 ************************************ 00:08:19.561 START TEST raid_state_function_test 00:08:19.561 ************************************ 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 2 false 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:19.561 21:44:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:19.561 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50792 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50792' 00:08:19.562 Process raid pid: 50792 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50792 /var/tmp/spdk-raid.sock 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 50792 ']' 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:19.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:19.562 21:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.562 [2024-07-15 21:44:34.591942] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:19.562 [2024-07-15 21:44:34.592135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:20.127 EAL: TSC is not safe to use in SMP mode 00:08:20.127 EAL: TSC is not invariant 00:08:20.127 [2024-07-15 21:44:35.105957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.127 [2024-07-15 21:44:35.189814] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
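Unlike the error tests, raid_state_function_test drives a bare bdev_svc app; its first assertion, traced just below, is that creating a raid1 volume whose base bdevs do not exist yet leaves it in the "configuring" state. A hedged sketch of that check, reusing the socket and names from the trace ($RPC is a local convenience variable, not part of the test script):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # BaseBdev1/BaseBdev2 don't exist yet, so the raid bdev cannot go online.
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # verify_raid_bdev_state boils down to this get-and-filter pattern.
    state=$($RPC bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = configuring ] || echo "unexpected raid state: $state" >&2

The same bdev_raid_get_bdevs-plus-jq pattern backs every verify_raid_bdev_state call for the rest of the run, with the expected state flipping to "online" once both base bdevs are registered.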
00:08:20.127 [2024-07-15 21:44:35.191906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.127 [2024-07-15 21:44:35.192664] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.127 [2024-07-15 21:44:35.192678] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.693 21:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:20.693 21:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:08:20.694 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:20.951 [2024-07-15 21:44:35.921088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.951 [2024-07-15 21:44:35.921155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.951 [2024-07-15 21:44:35.921160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.951 [2024-07-15 21:44:35.921185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.951 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:20.951 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:20.951 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.952 21:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.241 21:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:21.241 "name": "Existed_Raid", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "strip_size_kb": 0, 00:08:21.241 "state": "configuring", 00:08:21.241 "raid_level": "raid1", 00:08:21.241 "superblock": false, 00:08:21.241 "num_base_bdevs": 2, 00:08:21.241 "num_base_bdevs_discovered": 0, 00:08:21.241 "num_base_bdevs_operational": 2, 00:08:21.241 "base_bdevs_list": [ 00:08:21.241 { 00:08:21.241 "name": "BaseBdev1", 00:08:21.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "is_configured": false, 00:08:21.241 "data_offset": 0, 00:08:21.241 "data_size": 0 00:08:21.241 }, 00:08:21.241 { 00:08:21.241 "name": "BaseBdev2", 00:08:21.241 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.241 "is_configured": false, 00:08:21.241 "data_offset": 0, 00:08:21.241 "data_size": 0 00:08:21.241 } 00:08:21.241 ] 00:08:21.241 }' 00:08:21.241 21:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:21.241 21:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.498 21:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:21.755 [2024-07-15 21:44:36.761144] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.755 [2024-07-15 21:44:36.761166] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c7e4f434500 name Existed_Raid, state configuring 00:08:21.755 21:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:22.012 [2024-07-15 21:44:37.041152] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.012 [2024-07-15 21:44:37.041211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.012 [2024-07-15 21:44:37.041216] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.012 [2024-07-15 21:44:37.041224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.012 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.269 [2024-07-15 21:44:37.294270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.269 BaseBdev1 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:08:22.269 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:22.527 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.785 [ 00:08:22.785 { 00:08:22.785 "name": "BaseBdev1", 00:08:22.785 "aliases": [ 00:08:22.785 "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b" 00:08:22.785 ], 00:08:22.785 "product_name": "Malloc disk", 00:08:22.785 "block_size": 512, 00:08:22.785 "num_blocks": 65536, 00:08:22.785 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:22.785 "assigned_rate_limits": { 00:08:22.785 "rw_ios_per_sec": 0, 00:08:22.785 "rw_mbytes_per_sec": 0, 00:08:22.785 "r_mbytes_per_sec": 0, 00:08:22.785 "w_mbytes_per_sec": 0 00:08:22.785 }, 00:08:22.785 
"claimed": true, 00:08:22.785 "claim_type": "exclusive_write", 00:08:22.785 "zoned": false, 00:08:22.785 "supported_io_types": { 00:08:22.785 "read": true, 00:08:22.785 "write": true, 00:08:22.785 "unmap": true, 00:08:22.785 "flush": true, 00:08:22.785 "reset": true, 00:08:22.785 "nvme_admin": false, 00:08:22.785 "nvme_io": false, 00:08:22.785 "nvme_io_md": false, 00:08:22.785 "write_zeroes": true, 00:08:22.785 "zcopy": true, 00:08:22.785 "get_zone_info": false, 00:08:22.785 "zone_management": false, 00:08:22.785 "zone_append": false, 00:08:22.785 "compare": false, 00:08:22.785 "compare_and_write": false, 00:08:22.785 "abort": true, 00:08:22.785 "seek_hole": false, 00:08:22.785 "seek_data": false, 00:08:22.785 "copy": true, 00:08:22.785 "nvme_iov_md": false 00:08:22.785 }, 00:08:22.785 "memory_domains": [ 00:08:22.785 { 00:08:22.785 "dma_device_id": "system", 00:08:22.785 "dma_device_type": 1 00:08:22.785 }, 00:08:22.785 { 00:08:22.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.785 "dma_device_type": 2 00:08:22.785 } 00:08:22.785 ], 00:08:22.785 "driver_specific": {} 00:08:22.785 } 00:08:22.785 ] 00:08:22.785 21:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:08:22.785 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.785 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.786 21:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.044 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:23.044 "name": "Existed_Raid", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 "strip_size_kb": 0, 00:08:23.044 "state": "configuring", 00:08:23.044 "raid_level": "raid1", 00:08:23.044 "superblock": false, 00:08:23.044 "num_base_bdevs": 2, 00:08:23.044 "num_base_bdevs_discovered": 1, 00:08:23.044 "num_base_bdevs_operational": 2, 00:08:23.044 "base_bdevs_list": [ 00:08:23.044 { 00:08:23.044 "name": "BaseBdev1", 00:08:23.044 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:23.044 "is_configured": true, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 65536 00:08:23.044 }, 00:08:23.044 { 00:08:23.044 "name": "BaseBdev2", 00:08:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.044 
"is_configured": false, 00:08:23.044 "data_offset": 0, 00:08:23.044 "data_size": 0 00:08:23.044 } 00:08:23.044 ] 00:08:23.044 }' 00:08:23.044 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:23.044 21:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.303 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:23.562 [2024-07-15 21:44:38.609251] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.562 [2024-07-15 21:44:38.609282] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c7e4f434500 name Existed_Raid, state configuring 00:08:23.562 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:23.821 [2024-07-15 21:44:38.909282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.821 [2024-07-15 21:44:38.910080] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.821 [2024-07-15 21:44:38.910116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.821 21:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.080 21:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:24.080 "name": "Existed_Raid", 00:08:24.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.080 "strip_size_kb": 0, 00:08:24.080 "state": "configuring", 00:08:24.080 "raid_level": "raid1", 00:08:24.080 "superblock": false, 00:08:24.080 "num_base_bdevs": 2, 00:08:24.080 "num_base_bdevs_discovered": 1, 00:08:24.080 "num_base_bdevs_operational": 
2, 00:08:24.080 "base_bdevs_list": [ 00:08:24.080 { 00:08:24.080 "name": "BaseBdev1", 00:08:24.080 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:24.080 "is_configured": true, 00:08:24.080 "data_offset": 0, 00:08:24.080 "data_size": 65536 00:08:24.080 }, 00:08:24.080 { 00:08:24.080 "name": "BaseBdev2", 00:08:24.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.080 "is_configured": false, 00:08:24.080 "data_offset": 0, 00:08:24.080 "data_size": 0 00:08:24.080 } 00:08:24.080 ] 00:08:24.080 }' 00:08:24.080 21:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:24.080 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.648 [2024-07-15 21:44:39.801456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.648 [2024-07-15 21:44:39.801515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c7e4f434a00 00:08:24.648 [2024-07-15 21:44:39.801519] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:24.648 [2024-07-15 21:44:39.801555] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c7e4f497e20 00:08:24.648 [2024-07-15 21:44:39.801642] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c7e4f434a00 00:08:24.648 [2024-07-15 21:44:39.801646] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3c7e4f434a00 00:08:24.648 [2024-07-15 21:44:39.801676] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.648 BaseBdev2 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:08:24.648 21:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:24.907 21:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.166 [ 00:08:25.166 { 00:08:25.166 "name": "BaseBdev2", 00:08:25.166 "aliases": [ 00:08:25.166 "7029436a-42f3-11ef-9f7f-e9a656123a8b" 00:08:25.166 ], 00:08:25.166 "product_name": "Malloc disk", 00:08:25.166 "block_size": 512, 00:08:25.166 "num_blocks": 65536, 00:08:25.166 "uuid": "7029436a-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.166 "assigned_rate_limits": { 00:08:25.166 "rw_ios_per_sec": 0, 00:08:25.166 "rw_mbytes_per_sec": 0, 00:08:25.166 "r_mbytes_per_sec": 0, 00:08:25.166 "w_mbytes_per_sec": 0 00:08:25.166 }, 00:08:25.166 "claimed": true, 00:08:25.166 "claim_type": "exclusive_write", 00:08:25.166 "zoned": false, 00:08:25.166 
"supported_io_types": { 00:08:25.166 "read": true, 00:08:25.166 "write": true, 00:08:25.166 "unmap": true, 00:08:25.166 "flush": true, 00:08:25.166 "reset": true, 00:08:25.166 "nvme_admin": false, 00:08:25.166 "nvme_io": false, 00:08:25.166 "nvme_io_md": false, 00:08:25.166 "write_zeroes": true, 00:08:25.166 "zcopy": true, 00:08:25.166 "get_zone_info": false, 00:08:25.166 "zone_management": false, 00:08:25.166 "zone_append": false, 00:08:25.166 "compare": false, 00:08:25.166 "compare_and_write": false, 00:08:25.166 "abort": true, 00:08:25.166 "seek_hole": false, 00:08:25.166 "seek_data": false, 00:08:25.166 "copy": true, 00:08:25.166 "nvme_iov_md": false 00:08:25.166 }, 00:08:25.166 "memory_domains": [ 00:08:25.166 { 00:08:25.166 "dma_device_id": "system", 00:08:25.166 "dma_device_type": 1 00:08:25.166 }, 00:08:25.166 { 00:08:25.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.166 "dma_device_type": 2 00:08:25.166 } 00:08:25.166 ], 00:08:25.166 "driver_specific": {} 00:08:25.166 } 00:08:25.166 ] 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.166 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.426 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:25.426 "name": "Existed_Raid", 00:08:25.426 "uuid": "70294bae-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.426 "strip_size_kb": 0, 00:08:25.426 "state": "online", 00:08:25.426 "raid_level": "raid1", 00:08:25.426 "superblock": false, 00:08:25.426 "num_base_bdevs": 2, 00:08:25.426 "num_base_bdevs_discovered": 2, 00:08:25.426 "num_base_bdevs_operational": 2, 00:08:25.426 "base_bdevs_list": [ 00:08:25.426 { 00:08:25.426 "name": "BaseBdev1", 00:08:25.426 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.426 "is_configured": true, 00:08:25.426 "data_offset": 0, 00:08:25.426 "data_size": 65536 00:08:25.426 }, 00:08:25.426 { 00:08:25.426 "name": 
"BaseBdev2", 00:08:25.426 "uuid": "7029436a-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.426 "is_configured": true, 00:08:25.426 "data_offset": 0, 00:08:25.426 "data_size": 65536 00:08:25.426 } 00:08:25.426 ] 00:08:25.426 }' 00:08:25.426 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:25.426 21:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:25.685 21:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:25.956 [2024-07-15 21:44:41.041415] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:25.956 "name": "Existed_Raid", 00:08:25.956 "aliases": [ 00:08:25.956 "70294bae-42f3-11ef-9f7f-e9a656123a8b" 00:08:25.956 ], 00:08:25.956 "product_name": "Raid Volume", 00:08:25.956 "block_size": 512, 00:08:25.956 "num_blocks": 65536, 00:08:25.956 "uuid": "70294bae-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.956 "assigned_rate_limits": { 00:08:25.956 "rw_ios_per_sec": 0, 00:08:25.956 "rw_mbytes_per_sec": 0, 00:08:25.956 "r_mbytes_per_sec": 0, 00:08:25.956 "w_mbytes_per_sec": 0 00:08:25.956 }, 00:08:25.956 "claimed": false, 00:08:25.956 "zoned": false, 00:08:25.956 "supported_io_types": { 00:08:25.956 "read": true, 00:08:25.956 "write": true, 00:08:25.956 "unmap": false, 00:08:25.956 "flush": false, 00:08:25.956 "reset": true, 00:08:25.956 "nvme_admin": false, 00:08:25.956 "nvme_io": false, 00:08:25.956 "nvme_io_md": false, 00:08:25.956 "write_zeroes": true, 00:08:25.956 "zcopy": false, 00:08:25.956 "get_zone_info": false, 00:08:25.956 "zone_management": false, 00:08:25.956 "zone_append": false, 00:08:25.956 "compare": false, 00:08:25.956 "compare_and_write": false, 00:08:25.956 "abort": false, 00:08:25.956 "seek_hole": false, 00:08:25.956 "seek_data": false, 00:08:25.956 "copy": false, 00:08:25.956 "nvme_iov_md": false 00:08:25.956 }, 00:08:25.956 "memory_domains": [ 00:08:25.956 { 00:08:25.956 "dma_device_id": "system", 00:08:25.956 "dma_device_type": 1 00:08:25.956 }, 00:08:25.956 { 00:08:25.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.956 "dma_device_type": 2 00:08:25.956 }, 00:08:25.956 { 00:08:25.956 "dma_device_id": "system", 00:08:25.956 "dma_device_type": 1 00:08:25.956 }, 00:08:25.956 { 00:08:25.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.956 "dma_device_type": 2 00:08:25.956 } 00:08:25.956 ], 00:08:25.956 "driver_specific": { 00:08:25.956 "raid": { 00:08:25.956 "uuid": "70294bae-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.956 "strip_size_kb": 0, 00:08:25.956 "state": "online", 00:08:25.956 
"raid_level": "raid1", 00:08:25.956 "superblock": false, 00:08:25.956 "num_base_bdevs": 2, 00:08:25.956 "num_base_bdevs_discovered": 2, 00:08:25.956 "num_base_bdevs_operational": 2, 00:08:25.956 "base_bdevs_list": [ 00:08:25.956 { 00:08:25.956 "name": "BaseBdev1", 00:08:25.956 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.956 "is_configured": true, 00:08:25.956 "data_offset": 0, 00:08:25.956 "data_size": 65536 00:08:25.956 }, 00:08:25.956 { 00:08:25.956 "name": "BaseBdev2", 00:08:25.956 "uuid": "7029436a-42f3-11ef-9f7f-e9a656123a8b", 00:08:25.956 "is_configured": true, 00:08:25.956 "data_offset": 0, 00:08:25.956 "data_size": 65536 00:08:25.956 } 00:08:25.956 ] 00:08:25.956 } 00:08:25.956 } 00:08:25.956 }' 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:25.956 BaseBdev2' 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:25.956 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:26.215 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:26.215 "name": "BaseBdev1", 00:08:26.215 "aliases": [ 00:08:26.215 "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b" 00:08:26.215 ], 00:08:26.215 "product_name": "Malloc disk", 00:08:26.215 "block_size": 512, 00:08:26.215 "num_blocks": 65536, 00:08:26.215 "uuid": "6eaa8e5d-42f3-11ef-9f7f-e9a656123a8b", 00:08:26.215 "assigned_rate_limits": { 00:08:26.215 "rw_ios_per_sec": 0, 00:08:26.215 "rw_mbytes_per_sec": 0, 00:08:26.215 "r_mbytes_per_sec": 0, 00:08:26.215 "w_mbytes_per_sec": 0 00:08:26.215 }, 00:08:26.215 "claimed": true, 00:08:26.215 "claim_type": "exclusive_write", 00:08:26.215 "zoned": false, 00:08:26.215 "supported_io_types": { 00:08:26.215 "read": true, 00:08:26.215 "write": true, 00:08:26.215 "unmap": true, 00:08:26.215 "flush": true, 00:08:26.215 "reset": true, 00:08:26.215 "nvme_admin": false, 00:08:26.215 "nvme_io": false, 00:08:26.215 "nvme_io_md": false, 00:08:26.215 "write_zeroes": true, 00:08:26.215 "zcopy": true, 00:08:26.215 "get_zone_info": false, 00:08:26.215 "zone_management": false, 00:08:26.215 "zone_append": false, 00:08:26.215 "compare": false, 00:08:26.215 "compare_and_write": false, 00:08:26.215 "abort": true, 00:08:26.215 "seek_hole": false, 00:08:26.215 "seek_data": false, 00:08:26.215 "copy": true, 00:08:26.215 "nvme_iov_md": false 00:08:26.215 }, 00:08:26.215 "memory_domains": [ 00:08:26.215 { 00:08:26.215 "dma_device_id": "system", 00:08:26.215 "dma_device_type": 1 00:08:26.215 }, 00:08:26.215 { 00:08:26.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.215 "dma_device_type": 2 00:08:26.215 } 00:08:26.215 ], 00:08:26.215 "driver_specific": {} 00:08:26.215 }' 00:08:26.215 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:26.216 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:26.475 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:26.475 "name": "BaseBdev2", 00:08:26.475 "aliases": [ 00:08:26.475 "7029436a-42f3-11ef-9f7f-e9a656123a8b" 00:08:26.475 ], 00:08:26.475 "product_name": "Malloc disk", 00:08:26.475 "block_size": 512, 00:08:26.475 "num_blocks": 65536, 00:08:26.475 "uuid": "7029436a-42f3-11ef-9f7f-e9a656123a8b", 00:08:26.475 "assigned_rate_limits": { 00:08:26.475 "rw_ios_per_sec": 0, 00:08:26.475 "rw_mbytes_per_sec": 0, 00:08:26.475 "r_mbytes_per_sec": 0, 00:08:26.475 "w_mbytes_per_sec": 0 00:08:26.475 }, 00:08:26.475 "claimed": true, 00:08:26.475 "claim_type": "exclusive_write", 00:08:26.475 "zoned": false, 00:08:26.475 "supported_io_types": { 00:08:26.475 "read": true, 00:08:26.475 "write": true, 00:08:26.475 "unmap": true, 00:08:26.475 "flush": true, 00:08:26.476 "reset": true, 00:08:26.476 "nvme_admin": false, 00:08:26.476 "nvme_io": false, 00:08:26.476 "nvme_io_md": false, 00:08:26.476 "write_zeroes": true, 00:08:26.476 "zcopy": true, 00:08:26.476 "get_zone_info": false, 00:08:26.476 "zone_management": false, 00:08:26.476 "zone_append": false, 00:08:26.476 "compare": false, 00:08:26.476 "compare_and_write": false, 00:08:26.476 "abort": true, 00:08:26.476 "seek_hole": false, 00:08:26.476 "seek_data": false, 00:08:26.476 "copy": true, 00:08:26.476 "nvme_iov_md": false 00:08:26.476 }, 00:08:26.476 "memory_domains": [ 00:08:26.476 { 00:08:26.476 "dma_device_id": "system", 00:08:26.476 "dma_device_type": 1 00:08:26.476 }, 00:08:26.476 { 00:08:26.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.476 "dma_device_type": 2 00:08:26.476 } 00:08:26.476 ], 00:08:26.476 "driver_specific": {} 00:08:26.476 }' 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.476 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:26.734 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:26.734 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.734 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:26.734 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:26.734 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:26.993 [2024-07-15 21:44:41.941423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.993 21:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.251 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.251 "name": "Existed_Raid", 00:08:27.251 "uuid": "70294bae-42f3-11ef-9f7f-e9a656123a8b", 00:08:27.251 "strip_size_kb": 0, 00:08:27.251 "state": "online", 00:08:27.251 "raid_level": "raid1", 00:08:27.251 "superblock": false, 00:08:27.251 "num_base_bdevs": 2, 00:08:27.251 "num_base_bdevs_discovered": 1, 00:08:27.251 "num_base_bdevs_operational": 1, 00:08:27.251 "base_bdevs_list": [ 00:08:27.251 { 00:08:27.251 "name": null, 00:08:27.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.251 "is_configured": false, 
00:08:27.251 "data_offset": 0, 00:08:27.251 "data_size": 65536 00:08:27.251 }, 00:08:27.251 { 00:08:27.251 "name": "BaseBdev2", 00:08:27.251 "uuid": "7029436a-42f3-11ef-9f7f-e9a656123a8b", 00:08:27.251 "is_configured": true, 00:08:27.251 "data_offset": 0, 00:08:27.251 "data_size": 65536 00:08:27.251 } 00:08:27.251 ] 00:08:27.251 }' 00:08:27.251 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.251 21:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:27.509 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:27.509 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:27.509 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.766 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:27.766 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:27.766 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:28.023 [2024-07-15 21:44:42.959664] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.023 [2024-07-15 21:44:42.959738] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.023 [2024-07-15 21:44:42.965553] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.023 [2024-07-15 21:44:42.965570] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.023 [2024-07-15 21:44:42.965574] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c7e4f434a00 name Existed_Raid, state offline 00:08:28.023 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:28.023 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:28.023 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.023 21:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50792 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 50792 ']' 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 50792 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # ps -c -o command 50792 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:08:28.280 killing process with pid 50792 00:08:28.280 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 50792' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 50792 00:08:28.281 [2024-07-15 21:44:43.225498] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.281 [2024-07-15 21:44:43.225530] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 50792 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:28.281 00:08:28.281 real 0m8.823s 00:08:28.281 user 0m15.444s 00:08:28.281 sys 0m1.419s 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.281 ************************************ 00:08:28.281 END TEST raid_state_function_test 00:08:28.281 ************************************ 00:08:28.281 21:44:43 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:08:28.281 21:44:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:28.281 21:44:43 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:08:28.281 21:44:43 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:28.281 21:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.281 ************************************ 00:08:28.281 START TEST raid_state_function_test_sb 00:08:28.281 ************************************ 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 2 true 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51063 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:28.281 Process raid pid: 51063 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51063' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51063 /var/tmp/spdk-raid.sock 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 51063 ']' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:28.281 21:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.281 [2024-07-15 21:44:43.462884] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:28.281 [2024-07-15 21:44:43.463047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:28.847 EAL: TSC is not safe to use in SMP mode 00:08:28.847 EAL: TSC is not invariant 00:08:28.847 [2024-07-15 21:44:44.006755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.106 [2024-07-15 21:44:44.092162] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
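
[Editor's note — an annotation, not part of the captured trace.] The raid_state_function_test_sb run starting above follows the suite's standard bring-up pattern: launch the bare bdev_svc app on a private RPC socket, wait for that socket to accept RPCs, then drive everything through scripts/rpc.py. Below is a minimal standalone sketch of that pattern; the binary path, socket, and flags are taken verbatim from the trace, while the polling loop is an assumed stand-in for the suite's own waitforlisten() helper, not how autotest_common.sh actually implements it.

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # Start the bare bdev service with raid debug logging, as the test does.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    svc_pid=$!

    # Stand-in for waitforlisten: poll until the target answers a trivial RPC.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # ... drive the test through "$SPDK/scripts/rpc.py" -s "$SOCK" ..., then tear down:
    kill "$svc_pid"

Once the socket is live, every subsequent step in the trace is an rpc.py call against it, paired with jq filters that assert on the JSON it returns.
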
00:08:29.106 [2024-07-15 21:44:44.094322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.106 [2024-07-15 21:44:44.095099] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.106 [2024-07-15 21:44:44.095114] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.364 21:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:29.364 21:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:08:29.364 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:29.622 [2024-07-15 21:44:44.754996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.622 [2024-07-15 21:44:44.755062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.622 [2024-07-15 21:44:44.755067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.622 [2024-07-15 21:44:44.755092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.622 21:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.880 21:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.880 "name": "Existed_Raid", 00:08:29.880 "uuid": "731d21cc-42f3-11ef-9f7f-e9a656123a8b", 00:08:29.880 "strip_size_kb": 0, 00:08:29.880 "state": "configuring", 00:08:29.880 "raid_level": "raid1", 00:08:29.880 "superblock": true, 00:08:29.880 "num_base_bdevs": 2, 00:08:29.880 "num_base_bdevs_discovered": 0, 00:08:29.880 "num_base_bdevs_operational": 2, 00:08:29.880 "base_bdevs_list": [ 00:08:29.880 { 00:08:29.880 "name": "BaseBdev1", 00:08:29.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.880 "is_configured": false, 00:08:29.880 "data_offset": 0, 00:08:29.880 "data_size": 0 00:08:29.880 }, 00:08:29.880 
{ 00:08:29.880 "name": "BaseBdev2", 00:08:29.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.880 "is_configured": false, 00:08:29.880 "data_offset": 0, 00:08:29.880 "data_size": 0 00:08:29.880 } 00:08:29.880 ] 00:08:29.880 }' 00:08:29.880 21:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.880 21:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 21:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:30.455 [2024-07-15 21:44:45.587027] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.455 [2024-07-15 21:44:45.587054] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212fae834500 name Existed_Raid, state configuring 00:08:30.455 21:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:30.713 [2024-07-15 21:44:45.827020] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.713 [2024-07-15 21:44:45.827087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.713 [2024-07-15 21:44:45.827093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.713 [2024-07-15 21:44:45.827117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.713 21:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.972 [2024-07-15 21:44:46.096065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.972 BaseBdev1 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:08:30.972 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:31.231 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.489 [ 00:08:31.489 { 00:08:31.489 "name": "BaseBdev1", 00:08:31.489 "aliases": [ 00:08:31.489 "73e99c0d-42f3-11ef-9f7f-e9a656123a8b" 00:08:31.489 ], 00:08:31.489 "product_name": "Malloc disk", 00:08:31.489 "block_size": 512, 00:08:31.489 "num_blocks": 65536, 00:08:31.490 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:31.490 "assigned_rate_limits": { 00:08:31.490 "rw_ios_per_sec": 0, 00:08:31.490 "rw_mbytes_per_sec": 0, 00:08:31.490 
"r_mbytes_per_sec": 0, 00:08:31.490 "w_mbytes_per_sec": 0 00:08:31.490 }, 00:08:31.490 "claimed": true, 00:08:31.490 "claim_type": "exclusive_write", 00:08:31.490 "zoned": false, 00:08:31.490 "supported_io_types": { 00:08:31.490 "read": true, 00:08:31.490 "write": true, 00:08:31.490 "unmap": true, 00:08:31.490 "flush": true, 00:08:31.490 "reset": true, 00:08:31.490 "nvme_admin": false, 00:08:31.490 "nvme_io": false, 00:08:31.490 "nvme_io_md": false, 00:08:31.490 "write_zeroes": true, 00:08:31.490 "zcopy": true, 00:08:31.490 "get_zone_info": false, 00:08:31.490 "zone_management": false, 00:08:31.490 "zone_append": false, 00:08:31.490 "compare": false, 00:08:31.490 "compare_and_write": false, 00:08:31.490 "abort": true, 00:08:31.490 "seek_hole": false, 00:08:31.490 "seek_data": false, 00:08:31.490 "copy": true, 00:08:31.490 "nvme_iov_md": false 00:08:31.490 }, 00:08:31.490 "memory_domains": [ 00:08:31.490 { 00:08:31.490 "dma_device_id": "system", 00:08:31.490 "dma_device_type": 1 00:08:31.490 }, 00:08:31.490 { 00:08:31.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.490 "dma_device_type": 2 00:08:31.490 } 00:08:31.490 ], 00:08:31.490 "driver_specific": {} 00:08:31.490 } 00:08:31.490 ] 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.490 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.749 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:31.749 "name": "Existed_Raid", 00:08:31.749 "uuid": "73c0b5c9-42f3-11ef-9f7f-e9a656123a8b", 00:08:31.749 "strip_size_kb": 0, 00:08:31.749 "state": "configuring", 00:08:31.749 "raid_level": "raid1", 00:08:31.749 "superblock": true, 00:08:31.749 "num_base_bdevs": 2, 00:08:31.749 "num_base_bdevs_discovered": 1, 00:08:31.749 "num_base_bdevs_operational": 2, 00:08:31.749 "base_bdevs_list": [ 00:08:31.749 { 00:08:31.749 "name": "BaseBdev1", 00:08:31.749 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:31.749 "is_configured": true, 00:08:31.749 "data_offset": 2048, 00:08:31.749 "data_size": 63488 00:08:31.749 }, 
00:08:31.749 { 00:08:31.749 "name": "BaseBdev2", 00:08:31.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.749 "is_configured": false, 00:08:31.749 "data_offset": 0, 00:08:31.749 "data_size": 0 00:08:31.749 } 00:08:31.749 ] 00:08:31.749 }' 00:08:31.749 21:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:31.749 21:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:32.265 [2024-07-15 21:44:47.335114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.265 [2024-07-15 21:44:47.335146] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212fae834500 name Existed_Raid, state configuring 00:08:32.265 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:32.523 [2024-07-15 21:44:47.575149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.523 [2024-07-15 21:44:47.575990] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.523 [2024-07-15 21:44:47.576028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.523 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.781 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:32.781 "name": "Existed_Raid", 00:08:32.781 "uuid": "74cb73e2-42f3-11ef-9f7f-e9a656123a8b", 00:08:32.781 "strip_size_kb": 0, 00:08:32.781 "state": "configuring", 
00:08:32.781 "raid_level": "raid1", 00:08:32.781 "superblock": true, 00:08:32.781 "num_base_bdevs": 2, 00:08:32.781 "num_base_bdevs_discovered": 1, 00:08:32.781 "num_base_bdevs_operational": 2, 00:08:32.781 "base_bdevs_list": [ 00:08:32.781 { 00:08:32.781 "name": "BaseBdev1", 00:08:32.781 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:32.781 "is_configured": true, 00:08:32.781 "data_offset": 2048, 00:08:32.781 "data_size": 63488 00:08:32.781 }, 00:08:32.781 { 00:08:32.781 "name": "BaseBdev2", 00:08:32.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.781 "is_configured": false, 00:08:32.781 "data_offset": 0, 00:08:32.781 "data_size": 0 00:08:32.781 } 00:08:32.781 ] 00:08:32.781 }' 00:08:32.781 21:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:32.781 21:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.040 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.299 [2024-07-15 21:44:48.395295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.299 [2024-07-15 21:44:48.395372] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x212fae834a00 00:08:33.299 [2024-07-15 21:44:48.395379] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:33.299 [2024-07-15 21:44:48.395399] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x212fae897e20 00:08:33.299 [2024-07-15 21:44:48.395445] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x212fae834a00 00:08:33.299 [2024-07-15 21:44:48.395449] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x212fae834a00 00:08:33.299 [2024-07-15 21:44:48.395469] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.299 BaseBdev2 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:08:33.299 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:33.558 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.817 [ 00:08:33.817 { 00:08:33.817 "name": "BaseBdev2", 00:08:33.817 "aliases": [ 00:08:33.817 "75489448-42f3-11ef-9f7f-e9a656123a8b" 00:08:33.817 ], 00:08:33.817 "product_name": "Malloc disk", 00:08:33.817 "block_size": 512, 00:08:33.817 "num_blocks": 65536, 00:08:33.817 "uuid": "75489448-42f3-11ef-9f7f-e9a656123a8b", 00:08:33.817 "assigned_rate_limits": { 00:08:33.817 "rw_ios_per_sec": 0, 00:08:33.817 
"rw_mbytes_per_sec": 0, 00:08:33.817 "r_mbytes_per_sec": 0, 00:08:33.817 "w_mbytes_per_sec": 0 00:08:33.817 }, 00:08:33.817 "claimed": true, 00:08:33.817 "claim_type": "exclusive_write", 00:08:33.817 "zoned": false, 00:08:33.817 "supported_io_types": { 00:08:33.817 "read": true, 00:08:33.817 "write": true, 00:08:33.817 "unmap": true, 00:08:33.817 "flush": true, 00:08:33.817 "reset": true, 00:08:33.817 "nvme_admin": false, 00:08:33.817 "nvme_io": false, 00:08:33.817 "nvme_io_md": false, 00:08:33.817 "write_zeroes": true, 00:08:33.817 "zcopy": true, 00:08:33.817 "get_zone_info": false, 00:08:33.817 "zone_management": false, 00:08:33.817 "zone_append": false, 00:08:33.817 "compare": false, 00:08:33.817 "compare_and_write": false, 00:08:33.817 "abort": true, 00:08:33.817 "seek_hole": false, 00:08:33.817 "seek_data": false, 00:08:33.817 "copy": true, 00:08:33.817 "nvme_iov_md": false 00:08:33.817 }, 00:08:33.817 "memory_domains": [ 00:08:33.817 { 00:08:33.817 "dma_device_id": "system", 00:08:33.817 "dma_device_type": 1 00:08:33.817 }, 00:08:33.817 { 00:08:33.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.817 "dma_device_type": 2 00:08:33.817 } 00:08:33.817 ], 00:08:33.817 "driver_specific": {} 00:08:33.817 } 00:08:33.817 ] 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.817 21:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.077 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.077 "name": "Existed_Raid", 00:08:34.077 "uuid": "74cb73e2-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.077 "strip_size_kb": 0, 00:08:34.077 "state": "online", 00:08:34.077 "raid_level": "raid1", 00:08:34.077 "superblock": true, 00:08:34.077 "num_base_bdevs": 2, 00:08:34.077 "num_base_bdevs_discovered": 2, 00:08:34.077 "num_base_bdevs_operational": 2, 00:08:34.077 
"base_bdevs_list": [ 00:08:34.077 { 00:08:34.077 "name": "BaseBdev1", 00:08:34.077 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.077 "is_configured": true, 00:08:34.077 "data_offset": 2048, 00:08:34.077 "data_size": 63488 00:08:34.077 }, 00:08:34.077 { 00:08:34.077 "name": "BaseBdev2", 00:08:34.077 "uuid": "75489448-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.077 "is_configured": true, 00:08:34.077 "data_offset": 2048, 00:08:34.077 "data_size": 63488 00:08:34.077 } 00:08:34.077 ] 00:08:34.077 }' 00:08:34.077 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.077 21:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:34.335 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:34.593 [2024-07-15 21:44:49.667252] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:34.593 "name": "Existed_Raid", 00:08:34.593 "aliases": [ 00:08:34.593 "74cb73e2-42f3-11ef-9f7f-e9a656123a8b" 00:08:34.593 ], 00:08:34.593 "product_name": "Raid Volume", 00:08:34.593 "block_size": 512, 00:08:34.593 "num_blocks": 63488, 00:08:34.593 "uuid": "74cb73e2-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.593 "assigned_rate_limits": { 00:08:34.593 "rw_ios_per_sec": 0, 00:08:34.593 "rw_mbytes_per_sec": 0, 00:08:34.593 "r_mbytes_per_sec": 0, 00:08:34.593 "w_mbytes_per_sec": 0 00:08:34.593 }, 00:08:34.593 "claimed": false, 00:08:34.593 "zoned": false, 00:08:34.593 "supported_io_types": { 00:08:34.593 "read": true, 00:08:34.593 "write": true, 00:08:34.593 "unmap": false, 00:08:34.593 "flush": false, 00:08:34.593 "reset": true, 00:08:34.593 "nvme_admin": false, 00:08:34.593 "nvme_io": false, 00:08:34.593 "nvme_io_md": false, 00:08:34.593 "write_zeroes": true, 00:08:34.593 "zcopy": false, 00:08:34.593 "get_zone_info": false, 00:08:34.593 "zone_management": false, 00:08:34.593 "zone_append": false, 00:08:34.593 "compare": false, 00:08:34.593 "compare_and_write": false, 00:08:34.593 "abort": false, 00:08:34.593 "seek_hole": false, 00:08:34.593 "seek_data": false, 00:08:34.593 "copy": false, 00:08:34.593 "nvme_iov_md": false 00:08:34.593 }, 00:08:34.593 "memory_domains": [ 00:08:34.593 { 00:08:34.593 "dma_device_id": "system", 00:08:34.593 "dma_device_type": 1 00:08:34.593 }, 00:08:34.593 { 00:08:34.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.593 "dma_device_type": 2 00:08:34.593 }, 00:08:34.593 { 00:08:34.593 "dma_device_id": "system", 00:08:34.593 "dma_device_type": 1 00:08:34.593 }, 
00:08:34.593 { 00:08:34.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.593 "dma_device_type": 2 00:08:34.593 } 00:08:34.593 ], 00:08:34.593 "driver_specific": { 00:08:34.593 "raid": { 00:08:34.593 "uuid": "74cb73e2-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.593 "strip_size_kb": 0, 00:08:34.593 "state": "online", 00:08:34.593 "raid_level": "raid1", 00:08:34.593 "superblock": true, 00:08:34.593 "num_base_bdevs": 2, 00:08:34.593 "num_base_bdevs_discovered": 2, 00:08:34.593 "num_base_bdevs_operational": 2, 00:08:34.593 "base_bdevs_list": [ 00:08:34.593 { 00:08:34.593 "name": "BaseBdev1", 00:08:34.593 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.593 "is_configured": true, 00:08:34.593 "data_offset": 2048, 00:08:34.593 "data_size": 63488 00:08:34.593 }, 00:08:34.593 { 00:08:34.593 "name": "BaseBdev2", 00:08:34.593 "uuid": "75489448-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.593 "is_configured": true, 00:08:34.593 "data_offset": 2048, 00:08:34.593 "data_size": 63488 00:08:34.593 } 00:08:34.593 ] 00:08:34.593 } 00:08:34.593 } 00:08:34.593 }' 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:34.593 BaseBdev2' 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:34.593 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:34.852 "name": "BaseBdev1", 00:08:34.852 "aliases": [ 00:08:34.852 "73e99c0d-42f3-11ef-9f7f-e9a656123a8b" 00:08:34.852 ], 00:08:34.852 "product_name": "Malloc disk", 00:08:34.852 "block_size": 512, 00:08:34.852 "num_blocks": 65536, 00:08:34.852 "uuid": "73e99c0d-42f3-11ef-9f7f-e9a656123a8b", 00:08:34.852 "assigned_rate_limits": { 00:08:34.852 "rw_ios_per_sec": 0, 00:08:34.852 "rw_mbytes_per_sec": 0, 00:08:34.852 "r_mbytes_per_sec": 0, 00:08:34.852 "w_mbytes_per_sec": 0 00:08:34.852 }, 00:08:34.852 "claimed": true, 00:08:34.852 "claim_type": "exclusive_write", 00:08:34.852 "zoned": false, 00:08:34.852 "supported_io_types": { 00:08:34.852 "read": true, 00:08:34.852 "write": true, 00:08:34.852 "unmap": true, 00:08:34.852 "flush": true, 00:08:34.852 "reset": true, 00:08:34.852 "nvme_admin": false, 00:08:34.852 "nvme_io": false, 00:08:34.852 "nvme_io_md": false, 00:08:34.852 "write_zeroes": true, 00:08:34.852 "zcopy": true, 00:08:34.852 "get_zone_info": false, 00:08:34.852 "zone_management": false, 00:08:34.852 "zone_append": false, 00:08:34.852 "compare": false, 00:08:34.852 "compare_and_write": false, 00:08:34.852 "abort": true, 00:08:34.852 "seek_hole": false, 00:08:34.852 "seek_data": false, 00:08:34.852 "copy": true, 00:08:34.852 "nvme_iov_md": false 00:08:34.852 }, 00:08:34.852 "memory_domains": [ 00:08:34.852 { 00:08:34.852 "dma_device_id": "system", 00:08:34.852 "dma_device_type": 1 00:08:34.852 }, 00:08:34.852 { 00:08:34.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.852 "dma_device_type": 2 00:08:34.852 } 00:08:34.852 ], 00:08:34.852 "driver_specific": {} 00:08:34.852 }' 00:08:34.852 21:44:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:34.852 21:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:35.111 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:35.111 "name": "BaseBdev2", 00:08:35.111 "aliases": [ 00:08:35.111 "75489448-42f3-11ef-9f7f-e9a656123a8b" 00:08:35.111 ], 00:08:35.111 "product_name": "Malloc disk", 00:08:35.111 "block_size": 512, 00:08:35.111 "num_blocks": 65536, 00:08:35.111 "uuid": "75489448-42f3-11ef-9f7f-e9a656123a8b", 00:08:35.111 "assigned_rate_limits": { 00:08:35.111 "rw_ios_per_sec": 0, 00:08:35.111 "rw_mbytes_per_sec": 0, 00:08:35.111 "r_mbytes_per_sec": 0, 00:08:35.111 "w_mbytes_per_sec": 0 00:08:35.111 }, 00:08:35.111 "claimed": true, 00:08:35.111 "claim_type": "exclusive_write", 00:08:35.111 "zoned": false, 00:08:35.111 "supported_io_types": { 00:08:35.111 "read": true, 00:08:35.111 "write": true, 00:08:35.111 "unmap": true, 00:08:35.111 "flush": true, 00:08:35.111 "reset": true, 00:08:35.111 "nvme_admin": false, 00:08:35.111 "nvme_io": false, 00:08:35.111 "nvme_io_md": false, 00:08:35.111 "write_zeroes": true, 00:08:35.111 "zcopy": true, 00:08:35.111 "get_zone_info": false, 00:08:35.111 "zone_management": false, 00:08:35.111 "zone_append": false, 00:08:35.111 "compare": false, 00:08:35.111 "compare_and_write": false, 00:08:35.111 "abort": true, 00:08:35.111 "seek_hole": false, 00:08:35.111 "seek_data": false, 00:08:35.111 "copy": true, 00:08:35.111 "nvme_iov_md": false 00:08:35.111 }, 00:08:35.111 "memory_domains": [ 00:08:35.111 { 00:08:35.111 "dma_device_id": "system", 00:08:35.111 "dma_device_type": 1 00:08:35.111 }, 00:08:35.111 { 00:08:35.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.111 "dma_device_type": 2 00:08:35.111 } 00:08:35.111 ], 00:08:35.111 "driver_specific": {} 00:08:35.111 }' 00:08:35.111 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.111 21:44:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.111 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:35.111 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:35.370 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:35.370 [2024-07-15 21:44:50.547248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:35.629 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:35.630 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.630 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.888 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:35.888 
"name": "Existed_Raid", 00:08:35.888 "uuid": "74cb73e2-42f3-11ef-9f7f-e9a656123a8b", 00:08:35.888 "strip_size_kb": 0, 00:08:35.888 "state": "online", 00:08:35.888 "raid_level": "raid1", 00:08:35.888 "superblock": true, 00:08:35.888 "num_base_bdevs": 2, 00:08:35.888 "num_base_bdevs_discovered": 1, 00:08:35.888 "num_base_bdevs_operational": 1, 00:08:35.888 "base_bdevs_list": [ 00:08:35.888 { 00:08:35.888 "name": null, 00:08:35.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.888 "is_configured": false, 00:08:35.888 "data_offset": 2048, 00:08:35.888 "data_size": 63488 00:08:35.888 }, 00:08:35.888 { 00:08:35.888 "name": "BaseBdev2", 00:08:35.888 "uuid": "75489448-42f3-11ef-9f7f-e9a656123a8b", 00:08:35.888 "is_configured": true, 00:08:35.888 "data_offset": 2048, 00:08:35.888 "data_size": 63488 00:08:35.888 } 00:08:35.888 ] 00:08:35.888 }' 00:08:35.888 21:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:35.888 21:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.146 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:36.147 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:36.147 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.147 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:36.405 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:36.405 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.405 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:36.664 [2024-07-15 21:44:51.701092] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.664 [2024-07-15 21:44:51.701141] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.664 [2024-07-15 21:44:51.707002] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.664 [2024-07-15 21:44:51.707021] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.664 [2024-07-15 21:44:51.707026] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212fae834a00 name Existed_Raid, state offline 00:08:36.665 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:36.665 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:36.665 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.665 21:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:36.923 21:44:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51063 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 51063 ']' 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 51063 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 51063 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:08:36.923 killing process with pid 51063 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 51063' 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 51063 00:08:36.923 [2024-07-15 21:44:52.025497] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.923 [2024-07-15 21:44:52.025531] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.923 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 51063 00:08:37.196 21:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:37.196 00:08:37.196 real 0m8.745s 00:08:37.196 user 0m15.079s 00:08:37.196 sys 0m1.652s 00:08:37.196 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:37.196 ************************************ 00:08:37.196 END TEST raid_state_function_test_sb 00:08:37.196 ************************************ 00:08:37.196 21:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.196 21:44:52 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:08:37.196 21:44:52 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:37.196 21:44:52 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:37.196 21:44:52 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:37.196 21:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.196 ************************************ 00:08:37.196 START TEST raid_superblock_test 00:08:37.196 ************************************ 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 2 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:37.196 21:44:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51333 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51333 /var/tmp/spdk-raid.sock 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 51333 ']' 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:37.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:37.196 21:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.196 [2024-07-15 21:44:52.251773] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:37.196 [2024-07-15 21:44:52.252067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:37.760 EAL: TSC is not safe to use in SMP mode 00:08:37.760 EAL: TSC is not invariant 00:08:37.760 [2024-07-15 21:44:52.758346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.760 [2024-07-15 21:44:52.842423] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
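
[Editor's note — an annotation, not part of the captured trace.] What follows is raid_superblock_test building its fixture: two malloc bdevs, each wrapped in a passthru bdev (pt1/pt2), assembled into a raid1 volume with an on-disk superblock (the -s flag). A condensed sketch of that RPC sequence; every command and argument appears verbatim in the trace below, and only the $RPC shorthand is added here for readability.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC bdev_malloc_create 32 512 -b malloc1      # 32 MB backing store, 512 B blocks
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s = write superblock

Note the difference from the state-function tests above, which build the raid directly on the malloc bdevs: here the raid claims the passthru bdevs, which is what the pt1/pt2 names in the trace refer to.
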
00:08:37.760 [2024-07-15 21:44:52.844724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.760 [2024-07-15 21:44:52.845666] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.760 [2024-07-15 21:44:52.845680] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.325 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:38.582 malloc1 00:08:38.582 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:38.840 [2024-07-15 21:44:53.829658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:38.840 [2024-07-15 21:44:53.829730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.840 [2024-07-15 21:44:53.829757] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834780 00:08:38.840 [2024-07-15 21:44:53.829766] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.840 [2024-07-15 21:44:53.830705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.840 [2024-07-15 21:44:53.830732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:38.840 pt1 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.840 21:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.840 21:44:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:39.097 malloc2 00:08:39.097 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.353 [2024-07-15 21:44:54.317676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.353 [2024-07-15 21:44:54.317739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.353 [2024-07-15 21:44:54.317781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834c80 00:08:39.353 [2024-07-15 21:44:54.317807] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.353 [2024-07-15 21:44:54.318479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.353 [2024-07-15 21:44:54.318506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.353 pt2 00:08:39.353 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:39.353 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:39.353 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:39.610 [2024-07-15 21:44:54.541689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.610 [2024-07-15 21:44:54.542287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.610 [2024-07-15 21:44:54.542348] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x144dd4834f00 00:08:39.610 [2024-07-15 21:44:54.542355] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.610 [2024-07-15 21:44:54.542391] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144dd4897e20 00:08:39.610 [2024-07-15 21:44:54.542464] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x144dd4834f00 00:08:39.610 [2024-07-15 21:44:54.542469] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x144dd4834f00 00:08:39.610 [2024-07-15 21:44:54.542499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:39.610 "name": "raid_bdev1", 00:08:39.610 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:39.610 "strip_size_kb": 0, 00:08:39.610 "state": "online", 00:08:39.610 "raid_level": "raid1", 00:08:39.610 "superblock": true, 00:08:39.610 "num_base_bdevs": 2, 00:08:39.610 "num_base_bdevs_discovered": 2, 00:08:39.610 "num_base_bdevs_operational": 2, 00:08:39.610 "base_bdevs_list": [ 00:08:39.610 { 00:08:39.610 "name": "pt1", 00:08:39.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.610 "is_configured": true, 00:08:39.610 "data_offset": 2048, 00:08:39.610 "data_size": 63488 00:08:39.610 }, 00:08:39.610 { 00:08:39.610 "name": "pt2", 00:08:39.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.610 "is_configured": true, 00:08:39.610 "data_offset": 2048, 00:08:39.610 "data_size": 63488 00:08:39.610 } 00:08:39.610 ] 00:08:39.610 }' 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:39.610 21:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:40.173 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:40.173 [2024-07-15 21:44:55.353761] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.431 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:40.431 "name": "raid_bdev1", 00:08:40.431 "aliases": [ 00:08:40.431 "78f2765c-42f3-11ef-9f7f-e9a656123a8b" 00:08:40.431 ], 00:08:40.431 "product_name": "Raid Volume", 00:08:40.431 "block_size": 512, 00:08:40.431 "num_blocks": 63488, 00:08:40.431 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:40.431 "assigned_rate_limits": { 00:08:40.431 "rw_ios_per_sec": 0, 00:08:40.431 "rw_mbytes_per_sec": 0, 00:08:40.431 "r_mbytes_per_sec": 0, 00:08:40.431 "w_mbytes_per_sec": 0 00:08:40.431 }, 00:08:40.431 "claimed": false, 00:08:40.431 "zoned": false, 00:08:40.431 "supported_io_types": { 00:08:40.431 "read": true, 00:08:40.431 "write": true, 00:08:40.431 "unmap": false, 00:08:40.431 "flush": false, 00:08:40.431 "reset": true, 00:08:40.431 "nvme_admin": false, 00:08:40.431 "nvme_io": 
false, 00:08:40.431 "nvme_io_md": false, 00:08:40.431 "write_zeroes": true, 00:08:40.431 "zcopy": false, 00:08:40.431 "get_zone_info": false, 00:08:40.431 "zone_management": false, 00:08:40.431 "zone_append": false, 00:08:40.431 "compare": false, 00:08:40.431 "compare_and_write": false, 00:08:40.431 "abort": false, 00:08:40.431 "seek_hole": false, 00:08:40.431 "seek_data": false, 00:08:40.431 "copy": false, 00:08:40.431 "nvme_iov_md": false 00:08:40.431 }, 00:08:40.431 "memory_domains": [ 00:08:40.431 { 00:08:40.431 "dma_device_id": "system", 00:08:40.431 "dma_device_type": 1 00:08:40.431 }, 00:08:40.431 { 00:08:40.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.431 "dma_device_type": 2 00:08:40.431 }, 00:08:40.431 { 00:08:40.431 "dma_device_id": "system", 00:08:40.431 "dma_device_type": 1 00:08:40.431 }, 00:08:40.431 { 00:08:40.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.431 "dma_device_type": 2 00:08:40.431 } 00:08:40.431 ], 00:08:40.431 "driver_specific": { 00:08:40.431 "raid": { 00:08:40.431 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:40.431 "strip_size_kb": 0, 00:08:40.431 "state": "online", 00:08:40.431 "raid_level": "raid1", 00:08:40.431 "superblock": true, 00:08:40.431 "num_base_bdevs": 2, 00:08:40.431 "num_base_bdevs_discovered": 2, 00:08:40.431 "num_base_bdevs_operational": 2, 00:08:40.431 "base_bdevs_list": [ 00:08:40.431 { 00:08:40.431 "name": "pt1", 00:08:40.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.431 "is_configured": true, 00:08:40.431 "data_offset": 2048, 00:08:40.431 "data_size": 63488 00:08:40.431 }, 00:08:40.431 { 00:08:40.431 "name": "pt2", 00:08:40.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.431 "is_configured": true, 00:08:40.431 "data_offset": 2048, 00:08:40.431 "data_size": 63488 00:08:40.431 } 00:08:40.431 ] 00:08:40.431 } 00:08:40.431 } 00:08:40.431 }' 00:08:40.432 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.432 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:40.432 pt2' 00:08:40.432 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:40.432 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:40.432 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:40.690 "name": "pt1", 00:08:40.690 "aliases": [ 00:08:40.690 "00000000-0000-0000-0000-000000000001" 00:08:40.690 ], 00:08:40.690 "product_name": "passthru", 00:08:40.690 "block_size": 512, 00:08:40.690 "num_blocks": 65536, 00:08:40.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.690 "assigned_rate_limits": { 00:08:40.690 "rw_ios_per_sec": 0, 00:08:40.690 "rw_mbytes_per_sec": 0, 00:08:40.690 "r_mbytes_per_sec": 0, 00:08:40.690 "w_mbytes_per_sec": 0 00:08:40.690 }, 00:08:40.690 "claimed": true, 00:08:40.690 "claim_type": "exclusive_write", 00:08:40.690 "zoned": false, 00:08:40.690 "supported_io_types": { 00:08:40.690 "read": true, 00:08:40.690 "write": true, 00:08:40.690 "unmap": true, 00:08:40.690 "flush": true, 00:08:40.690 "reset": true, 00:08:40.690 "nvme_admin": false, 00:08:40.690 "nvme_io": false, 00:08:40.690 "nvme_io_md": false, 00:08:40.690 "write_zeroes": true, 
00:08:40.690 "zcopy": true, 00:08:40.690 "get_zone_info": false, 00:08:40.690 "zone_management": false, 00:08:40.690 "zone_append": false, 00:08:40.690 "compare": false, 00:08:40.690 "compare_and_write": false, 00:08:40.690 "abort": true, 00:08:40.690 "seek_hole": false, 00:08:40.690 "seek_data": false, 00:08:40.690 "copy": true, 00:08:40.690 "nvme_iov_md": false 00:08:40.690 }, 00:08:40.690 "memory_domains": [ 00:08:40.690 { 00:08:40.690 "dma_device_id": "system", 00:08:40.690 "dma_device_type": 1 00:08:40.690 }, 00:08:40.690 { 00:08:40.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.690 "dma_device_type": 2 00:08:40.690 } 00:08:40.690 ], 00:08:40.690 "driver_specific": { 00:08:40.690 "passthru": { 00:08:40.690 "name": "pt1", 00:08:40.690 "base_bdev_name": "malloc1" 00:08:40.690 } 00:08:40.690 } 00:08:40.690 }' 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:40.690 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:40.949 "name": "pt2", 00:08:40.949 "aliases": [ 00:08:40.949 "00000000-0000-0000-0000-000000000002" 00:08:40.949 ], 00:08:40.949 "product_name": "passthru", 00:08:40.949 "block_size": 512, 00:08:40.949 "num_blocks": 65536, 00:08:40.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.949 "assigned_rate_limits": { 00:08:40.949 "rw_ios_per_sec": 0, 00:08:40.949 "rw_mbytes_per_sec": 0, 00:08:40.949 "r_mbytes_per_sec": 0, 00:08:40.949 "w_mbytes_per_sec": 0 00:08:40.949 }, 00:08:40.949 "claimed": true, 00:08:40.949 "claim_type": "exclusive_write", 00:08:40.949 "zoned": false, 00:08:40.949 "supported_io_types": { 00:08:40.949 "read": true, 00:08:40.949 "write": true, 00:08:40.949 "unmap": true, 00:08:40.949 "flush": true, 00:08:40.949 "reset": true, 00:08:40.949 "nvme_admin": false, 00:08:40.949 "nvme_io": false, 00:08:40.949 "nvme_io_md": false, 00:08:40.949 "write_zeroes": true, 00:08:40.949 "zcopy": true, 00:08:40.949 "get_zone_info": false, 00:08:40.949 "zone_management": false, 00:08:40.949 "zone_append": false, 00:08:40.949 
"compare": false, 00:08:40.949 "compare_and_write": false, 00:08:40.949 "abort": true, 00:08:40.949 "seek_hole": false, 00:08:40.949 "seek_data": false, 00:08:40.949 "copy": true, 00:08:40.949 "nvme_iov_md": false 00:08:40.949 }, 00:08:40.949 "memory_domains": [ 00:08:40.949 { 00:08:40.949 "dma_device_id": "system", 00:08:40.949 "dma_device_type": 1 00:08:40.949 }, 00:08:40.949 { 00:08:40.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.949 "dma_device_type": 2 00:08:40.949 } 00:08:40.949 ], 00:08:40.949 "driver_specific": { 00:08:40.949 "passthru": { 00:08:40.949 "name": "pt2", 00:08:40.949 "base_bdev_name": "malloc2" 00:08:40.949 } 00:08:40.949 } 00:08:40.949 }' 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:40.949 21:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:40.949 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:41.207 [2024-07-15 21:44:56.273775] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.207 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=78f2765c-42f3-11ef-9f7f-e9a656123a8b 00:08:41.207 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 78f2765c-42f3-11ef-9f7f-e9a656123a8b ']' 00:08:41.207 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:41.465 [2024-07-15 21:44:56.573766] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.465 [2024-07-15 21:44:56.573788] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.465 [2024-07-15 21:44:56.573815] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.465 [2024-07-15 21:44:56.573830] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.465 [2024-07-15 21:44:56.573834] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4834f00 name raid_bdev1, state offline 00:08:41.465 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:41.465 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:41.723 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:41.723 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:41.723 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.723 21:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:41.981 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.981 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:42.239 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:42.239 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:42.807 21:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:42.807 [2024-07-15 21:44:57.989853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.807 [2024-07-15 21:44:57.990493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.807 [2024-07-15 21:44:57.990517] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:08:42.807 [2024-07-15 21:44:57.990555] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.807 [2024-07-15 21:44:57.990566] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.807 [2024-07-15 21:44:57.990570] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4834c80 name raid_bdev1, state configuring 00:08:43.066 request: 00:08:43.066 { 00:08:43.066 "name": "raid_bdev1", 00:08:43.066 "raid_level": "raid1", 00:08:43.066 "base_bdevs": [ 00:08:43.066 "malloc1", 00:08:43.066 "malloc2" 00:08:43.066 ], 00:08:43.066 "superblock": false, 00:08:43.066 "method": "bdev_raid_create", 00:08:43.066 "req_id": 1 00:08:43.066 } 00:08:43.066 Got JSON-RPC error response 00:08:43.066 response: 00:08:43.066 { 00:08:43.066 "code": -17, 00:08:43.066 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.066 } 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.066 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.324 [2024-07-15 21:44:58.493861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.324 [2024-07-15 21:44:58.493931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.324 [2024-07-15 21:44:58.493943] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834780 00:08:43.324 [2024-07-15 21:44:58.493951] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.324 [2024-07-15 21:44:58.494625] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.324 [2024-07-15 21:44:58.494651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.324 [2024-07-15 21:44:58.494675] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.324 [2024-07-15 21:44:58.494687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.324 pt1 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
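The rejected create above is the intended negative case: malloc1 and malloc2 still carry the superblock written for the original raid_bdev1, so a second bdev_raid_create is refused with JSON-RPC error -17 (File exists). A sketch of the same negative-test idiom (the NOT wrapper in autotest_common.sh effectively inverts the exit status, roughly like this):

    # Expect failure: the base bdevs already hold a superblock naming
    # a different raid bdev, so the create RPC must be rejected.
    if ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
           -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "duplicate bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi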
00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:43.324 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:43.582 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:43.582 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:43.582 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:43.582 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.582 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.863 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:43.863 "name": "raid_bdev1", 00:08:43.863 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:43.863 "strip_size_kb": 0, 00:08:43.863 "state": "configuring", 00:08:43.863 "raid_level": "raid1", 00:08:43.863 "superblock": true, 00:08:43.863 "num_base_bdevs": 2, 00:08:43.863 "num_base_bdevs_discovered": 1, 00:08:43.863 "num_base_bdevs_operational": 2, 00:08:43.863 "base_bdevs_list": [ 00:08:43.863 { 00:08:43.863 "name": "pt1", 00:08:43.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.863 "is_configured": true, 00:08:43.863 "data_offset": 2048, 00:08:43.863 "data_size": 63488 00:08:43.863 }, 00:08:43.863 { 00:08:43.863 "name": null, 00:08:43.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.863 "is_configured": false, 00:08:43.863 "data_offset": 2048, 00:08:43.863 "data_size": 63488 00:08:43.863 } 00:08:43.863 ] 00:08:43.863 }' 00:08:43.863 21:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:43.863 21:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.121 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:44.121 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:44.121 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:44.121 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.380 [2024-07-15 21:44:59.377937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.380 [2024-07-15 21:44:59.378008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.380 [2024-07-15 21:44:59.378036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834f00 00:08:44.380 [2024-07-15 21:44:59.378044] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.380 [2024-07-15 21:44:59.378179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.380 [2024-07-15 21:44:59.378191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.380 [2024-07-15 21:44:59.378214] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.380 [2024-07-15 21:44:59.378223] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.380 [2024-07-15 21:44:59.378249] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x144dd4835180 00:08:44.380 [2024-07-15 21:44:59.378254] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.380 [2024-07-15 21:44:59.378273] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144dd4897e20 00:08:44.380 [2024-07-15 21:44:59.378338] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x144dd4835180 00:08:44.380 [2024-07-15 21:44:59.378343] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x144dd4835180 00:08:44.380 [2024-07-15 21:44:59.378365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.380 pt2 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.380 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.639 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.639 "name": "raid_bdev1", 00:08:44.639 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:44.639 "strip_size_kb": 0, 00:08:44.639 "state": "online", 00:08:44.639 "raid_level": "raid1", 00:08:44.639 "superblock": true, 00:08:44.639 "num_base_bdevs": 2, 00:08:44.639 "num_base_bdevs_discovered": 2, 00:08:44.639 "num_base_bdevs_operational": 2, 00:08:44.639 "base_bdevs_list": [ 00:08:44.639 { 00:08:44.639 "name": "pt1", 00:08:44.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.639 "is_configured": true, 00:08:44.639 "data_offset": 2048, 00:08:44.639 "data_size": 63488 00:08:44.639 }, 00:08:44.639 { 00:08:44.639 "name": "pt2", 00:08:44.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.639 "is_configured": true, 00:08:44.639 "data_offset": 2048, 00:08:44.639 "data_size": 63488 00:08:44.639 } 00:08:44.639 ] 00:08:44.639 }' 00:08:44.639 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.639 
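Every verify_raid_bdev_state call in this trace reduces to the same probe: fetch the raid bdev's JSON via bdev_raid_get_bdevs and compare a handful of fields with jq. A sketch of that core comparison, using the field names visible in the dumps above:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # State, level and discovered base bdev count must all match.
    [ "$(jq -r .state <<<"$info")" = online ] || exit 1
    [ "$(jq -r .raid_level <<<"$info")" = raid1 ] || exit 1
    [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 2 ] || exit 1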
21:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:44.897 21:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:45.155 [2024-07-15 21:45:00.230028] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:45.155 "name": "raid_bdev1", 00:08:45.155 "aliases": [ 00:08:45.155 "78f2765c-42f3-11ef-9f7f-e9a656123a8b" 00:08:45.155 ], 00:08:45.155 "product_name": "Raid Volume", 00:08:45.155 "block_size": 512, 00:08:45.155 "num_blocks": 63488, 00:08:45.155 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:45.155 "assigned_rate_limits": { 00:08:45.155 "rw_ios_per_sec": 0, 00:08:45.155 "rw_mbytes_per_sec": 0, 00:08:45.155 "r_mbytes_per_sec": 0, 00:08:45.155 "w_mbytes_per_sec": 0 00:08:45.155 }, 00:08:45.155 "claimed": false, 00:08:45.155 "zoned": false, 00:08:45.155 "supported_io_types": { 00:08:45.155 "read": true, 00:08:45.155 "write": true, 00:08:45.155 "unmap": false, 00:08:45.155 "flush": false, 00:08:45.155 "reset": true, 00:08:45.155 "nvme_admin": false, 00:08:45.155 "nvme_io": false, 00:08:45.155 "nvme_io_md": false, 00:08:45.155 "write_zeroes": true, 00:08:45.155 "zcopy": false, 00:08:45.155 "get_zone_info": false, 00:08:45.155 "zone_management": false, 00:08:45.155 "zone_append": false, 00:08:45.155 "compare": false, 00:08:45.155 "compare_and_write": false, 00:08:45.155 "abort": false, 00:08:45.155 "seek_hole": false, 00:08:45.155 "seek_data": false, 00:08:45.155 "copy": false, 00:08:45.155 "nvme_iov_md": false 00:08:45.155 }, 00:08:45.155 "memory_domains": [ 00:08:45.155 { 00:08:45.155 "dma_device_id": "system", 00:08:45.155 "dma_device_type": 1 00:08:45.155 }, 00:08:45.155 { 00:08:45.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.155 "dma_device_type": 2 00:08:45.155 }, 00:08:45.155 { 00:08:45.155 "dma_device_id": "system", 00:08:45.155 "dma_device_type": 1 00:08:45.155 }, 00:08:45.155 { 00:08:45.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.155 "dma_device_type": 2 00:08:45.155 } 00:08:45.155 ], 00:08:45.155 "driver_specific": { 00:08:45.155 "raid": { 00:08:45.155 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:45.155 "strip_size_kb": 0, 00:08:45.155 "state": "online", 00:08:45.155 "raid_level": "raid1", 00:08:45.155 "superblock": true, 00:08:45.155 "num_base_bdevs": 2, 00:08:45.155 "num_base_bdevs_discovered": 2, 00:08:45.155 "num_base_bdevs_operational": 2, 00:08:45.155 "base_bdevs_list": [ 00:08:45.155 { 00:08:45.155 "name": "pt1", 00:08:45.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.155 "is_configured": true, 00:08:45.155 
"data_offset": 2048, 00:08:45.155 "data_size": 63488 00:08:45.155 }, 00:08:45.155 { 00:08:45.155 "name": "pt2", 00:08:45.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.155 "is_configured": true, 00:08:45.155 "data_offset": 2048, 00:08:45.155 "data_size": 63488 00:08:45.155 } 00:08:45.155 ] 00:08:45.155 } 00:08:45.155 } 00:08:45.155 }' 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:45.155 pt2' 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:45.155 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:45.414 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:45.414 "name": "pt1", 00:08:45.414 "aliases": [ 00:08:45.415 "00000000-0000-0000-0000-000000000001" 00:08:45.415 ], 00:08:45.415 "product_name": "passthru", 00:08:45.415 "block_size": 512, 00:08:45.415 "num_blocks": 65536, 00:08:45.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.415 "assigned_rate_limits": { 00:08:45.415 "rw_ios_per_sec": 0, 00:08:45.415 "rw_mbytes_per_sec": 0, 00:08:45.415 "r_mbytes_per_sec": 0, 00:08:45.415 "w_mbytes_per_sec": 0 00:08:45.415 }, 00:08:45.415 "claimed": true, 00:08:45.415 "claim_type": "exclusive_write", 00:08:45.415 "zoned": false, 00:08:45.415 "supported_io_types": { 00:08:45.415 "read": true, 00:08:45.415 "write": true, 00:08:45.415 "unmap": true, 00:08:45.415 "flush": true, 00:08:45.415 "reset": true, 00:08:45.415 "nvme_admin": false, 00:08:45.415 "nvme_io": false, 00:08:45.415 "nvme_io_md": false, 00:08:45.415 "write_zeroes": true, 00:08:45.415 "zcopy": true, 00:08:45.415 "get_zone_info": false, 00:08:45.415 "zone_management": false, 00:08:45.415 "zone_append": false, 00:08:45.415 "compare": false, 00:08:45.415 "compare_and_write": false, 00:08:45.415 "abort": true, 00:08:45.415 "seek_hole": false, 00:08:45.415 "seek_data": false, 00:08:45.415 "copy": true, 00:08:45.415 "nvme_iov_md": false 00:08:45.415 }, 00:08:45.415 "memory_domains": [ 00:08:45.415 { 00:08:45.415 "dma_device_id": "system", 00:08:45.415 "dma_device_type": 1 00:08:45.415 }, 00:08:45.415 { 00:08:45.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.415 "dma_device_type": 2 00:08:45.415 } 00:08:45.415 ], 00:08:45.415 "driver_specific": { 00:08:45.415 "passthru": { 00:08:45.415 "name": "pt1", 00:08:45.415 "base_bdev_name": "malloc1" 00:08:45.415 } 00:08:45.415 } 00:08:45.415 }' 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:45.415 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:45.673 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:45.673 "name": "pt2", 00:08:45.673 "aliases": [ 00:08:45.673 "00000000-0000-0000-0000-000000000002" 00:08:45.673 ], 00:08:45.673 "product_name": "passthru", 00:08:45.673 "block_size": 512, 00:08:45.673 "num_blocks": 65536, 00:08:45.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.673 "assigned_rate_limits": { 00:08:45.673 "rw_ios_per_sec": 0, 00:08:45.673 "rw_mbytes_per_sec": 0, 00:08:45.673 "r_mbytes_per_sec": 0, 00:08:45.673 "w_mbytes_per_sec": 0 00:08:45.673 }, 00:08:45.673 "claimed": true, 00:08:45.673 "claim_type": "exclusive_write", 00:08:45.673 "zoned": false, 00:08:45.673 "supported_io_types": { 00:08:45.673 "read": true, 00:08:45.673 "write": true, 00:08:45.673 "unmap": true, 00:08:45.673 "flush": true, 00:08:45.673 "reset": true, 00:08:45.673 "nvme_admin": false, 00:08:45.673 "nvme_io": false, 00:08:45.673 "nvme_io_md": false, 00:08:45.673 "write_zeroes": true, 00:08:45.673 "zcopy": true, 00:08:45.673 "get_zone_info": false, 00:08:45.673 "zone_management": false, 00:08:45.673 "zone_append": false, 00:08:45.673 "compare": false, 00:08:45.673 "compare_and_write": false, 00:08:45.673 "abort": true, 00:08:45.673 "seek_hole": false, 00:08:45.673 "seek_data": false, 00:08:45.673 "copy": true, 00:08:45.673 "nvme_iov_md": false 00:08:45.673 }, 00:08:45.673 "memory_domains": [ 00:08:45.673 { 00:08:45.673 "dma_device_id": "system", 00:08:45.673 "dma_device_type": 1 00:08:45.673 }, 00:08:45.673 { 00:08:45.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.673 "dma_device_type": 2 00:08:45.673 } 00:08:45.673 ], 00:08:45.673 "driver_specific": { 00:08:45.673 "passthru": { 00:08:45.673 "name": "pt2", 00:08:45.673 "base_bdev_name": "malloc2" 00:08:45.673 } 00:08:45.673 } 00:08:45.673 }' 00:08:45.673 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:45.931 21:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:46.189 [2024-07-15 21:45:01.186089] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.189 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 78f2765c-42f3-11ef-9f7f-e9a656123a8b '!=' 78f2765c-42f3-11ef-9f7f-e9a656123a8b ']' 00:08:46.189 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:46.189 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:46.189 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:46.189 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:46.447 [2024-07-15 21:45:01.466094] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.447 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.705 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.705 "name": "raid_bdev1", 00:08:46.705 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b", 00:08:46.705 "strip_size_kb": 0, 00:08:46.705 "state": "online", 00:08:46.705 "raid_level": "raid1", 00:08:46.705 "superblock": true, 00:08:46.705 "num_base_bdevs": 2, 00:08:46.705 "num_base_bdevs_discovered": 1, 00:08:46.705 "num_base_bdevs_operational": 1, 00:08:46.705 "base_bdevs_list": [ 00:08:46.705 { 00:08:46.705 "name": null, 00:08:46.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.705 "is_configured": false, 00:08:46.705 "data_offset": 
2048, 00:08:46.705 "data_size": 63488 00:08:46.705 }, 00:08:46.705 { 00:08:46.705 "name": "pt2", 00:08:46.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.705 "is_configured": true, 00:08:46.705 "data_offset": 2048, 00:08:46.705 "data_size": 63488 00:08:46.705 } 00:08:46.705 ] 00:08:46.705 }' 00:08:46.705 21:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.705 21:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.962 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:47.220 [2024-07-15 21:45:02.342176] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.220 [2024-07-15 21:45:02.342204] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.220 [2024-07-15 21:45:02.342228] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.220 [2024-07-15 21:45:02.342240] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.220 [2024-07-15 21:45:02.342245] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4835180 name raid_bdev1, state offline 00:08:47.220 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.220 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:47.479 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:47.479 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:47.479 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:47.479 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:47.479 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:47.736 21:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.996 [2024-07-15 21:45:03.122186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.996 [2024-07-15 21:45:03.122239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.996 [2024-07-15 21:45:03.122252] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834f00 00:08:47.996 [2024-07-15 21:45:03.122260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.996 [2024-07-15 21:45:03.122910] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.996 
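This is the heart of raid_superblock_test: with raid_bdev1 deleted and pt2 removed, re-registering pt2 alone is enough for the bdev examine path to find the on-disk superblock and reassemble raid_bdev1 in degraded mode (num_base_bdevs_discovered 1 of 2). A sketch of that sequence, mirroring the RPCs in the trace:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_delete raid_bdev1          # tear down the assembled array
    $rpc bdev_passthru_delete pt2             # drop the remaining base bdev

    # Recreating pt2 triggers examine; the superblock persisted on
    # malloc2 is found and raid_bdev1 comes back online degraded.
    $rpc bdev_passthru_create -b malloc2 -p pt2 \
        -u 00000000-0000-0000-0000-000000000002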
[2024-07-15 21:45:03.122936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:47.996 [2024-07-15 21:45:03.122961] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:47.996 [2024-07-15 21:45:03.122973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:47.996 [2024-07-15 21:45:03.122998] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x144dd4835180
00:08:47.996 [2024-07-15 21:45:03.123003] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:47.996 [2024-07-15 21:45:03.123023] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144dd4897e20
00:08:47.996 [2024-07-15 21:45:03.123071] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x144dd4835180
00:08:47.996 [2024-07-15 21:45:03.123082] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x144dd4835180
00:08:47.996 [2024-07-15 21:45:03.123104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:47.996 pt2
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:47.996 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:48.254 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:08:48.254 "name": "raid_bdev1",
00:08:48.254 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b",
00:08:48.254 "strip_size_kb": 0,
00:08:48.254 "state": "online",
00:08:48.255 "raid_level": "raid1",
00:08:48.255 "superblock": true,
00:08:48.255 "num_base_bdevs": 2,
00:08:48.255 "num_base_bdevs_discovered": 1,
00:08:48.255 "num_base_bdevs_operational": 1,
00:08:48.255 "base_bdevs_list": [
00:08:48.255 {
00:08:48.255 "name": null,
00:08:48.255 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:48.255 "is_configured": false,
00:08:48.255 "data_offset": 2048,
00:08:48.255 "data_size": 63488
00:08:48.255 },
00:08:48.255 {
00:08:48.255 "name": "pt2",
00:08:48.255 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:48.255 "is_configured": true,
00:08:48.255 "data_offset": 2048,
00:08:48.255 "data_size": 63488
00:08:48.255 }
00:08:48.255 ]
00:08:48.255 }'
00:08:48.255 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:08:48.255 21:45:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.512 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:08:48.770 [2024-07-15 21:45:03.890200] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:48.770 [2024-07-15 21:45:03.890226] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:48.770 [2024-07-15 21:45:03.890247] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:48.770 [2024-07-15 21:45:03.890259] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:48.770 [2024-07-15 21:45:03.890263] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4835180 name raid_bdev1, state offline
00:08:48.770 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:48.770 21:45:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]'
00:08:49.028 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev=
00:08:49.028 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']'
00:08:49.028 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']'
00:08:49.028 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:49.304 [2024-07-15 21:45:04.422221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:49.304 [2024-07-15 21:45:04.422292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:49.304 [2024-07-15 21:45:04.422304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x144dd4834c80
00:08:49.304 [2024-07-15 21:45:04.422312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:49.304 [2024-07-15 21:45:04.422992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:49.304 [2024-07-15 21:45:04.423016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:49.304 [2024-07-15 21:45:04.423040] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:49.304 [2024-07-15 21:45:04.423051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:49.304 [2024-07-15 21:45:04.423080] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:08:49.304 [2024-07-15 21:45:04.423085] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:49.304 [2024-07-15 21:45:04.423090] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4834780 name raid_bdev1, state configuring
00:08:49.304 [2024-07-15 21:45:04.423097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:49.304 [2024-07-15 21:45:04.423112] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x144dd4834780
00:08:49.304 [2024-07-15 21:45:04.423115] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:49.304 [2024-07-15 21:45:04.423135] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144dd4897e20
00:08:49.304 [2024-07-15 21:45:04.423180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x144dd4834780
00:08:49.304 [2024-07-15 21:45:04.423185] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x144dd4834780
00:08:49.304 [2024-07-15 21:45:04.423205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:49.304 pt1
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']'
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:49.304 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:49.564 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:08:49.564 "name": "raid_bdev1",
00:08:49.564 "uuid": "78f2765c-42f3-11ef-9f7f-e9a656123a8b",
00:08:49.564 "strip_size_kb": 0,
00:08:49.564 "state": "online",
00:08:49.564 "raid_level": "raid1",
00:08:49.564 "superblock": true,
00:08:49.564 "num_base_bdevs": 2,
00:08:49.564 "num_base_bdevs_discovered": 1,
00:08:49.564 "num_base_bdevs_operational": 1,
00:08:49.564 "base_bdevs_list": [
00:08:49.564 {
00:08:49.564 "name": null,
00:08:49.564 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:49.564 "is_configured": false,
00:08:49.564 "data_offset": 2048,
00:08:49.564 "data_size": 63488
00:08:49.564 },
00:08:49.564 {
00:08:49.564 "name": "pt2",
00:08:49.564 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:49.564 "is_configured": true,
00:08:49.564 "data_offset": 2048,
00:08:49.564 "data_size": 63488
00:08:49.564 }
00:08:49.564 ]
00:08:49.564 }'
00:08:49.564 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:08:49.564 21:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.823 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:08:49.823 21:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:08:50.080 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]]
00:08:50.080 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:08:50.080 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid'
00:08:50.338 [2024-07-15 21:45:05.446292] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 78f2765c-42f3-11ef-9f7f-e9a656123a8b '!=' 78f2765c-42f3-11ef-9f7f-e9a656123a8b ']'
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51333
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 51333 ']'
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 51333
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 51333
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']'
00:08:50.338 killing process with pid 51333
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 51333'
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 51333
00:08:50.338 [2024-07-15 21:45:05.474056] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:50.338 [2024-07-15 21:45:05.474077] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:50.338 [2024-07-15 21:45:05.474089] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:50.338 [2024-07-15 21:45:05.474093] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x144dd4834780 name raid_bdev1, state offline
00:08:50.338 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 51333
00:08:50.338 [2024-07-15 21:45:05.485953] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:50.596 21:45:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0
00:08:50.596
00:08:50.596 real 0m13.413s
00:08:50.596 user 0m23.967s
00:08:50.596 sys 0m2.100s
00:08:50.596 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable
00:08:50.596 ************************************
00:08:50.596 END TEST raid_superblock_test ************************************
00:08:50.596 21:45:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.596 21:45:05 bdev_raid -- common/autotest_common.sh@1136 -- # return 0
00:08:50.596 21:45:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read
00:08:50.596 21:45:05 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:08:50.596 21:45:05 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable
00:08:50.596 21:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:50.596 ************************************
00:08:50.596 START TEST raid_read_error_test ************************************
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 2 read
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IGbMUoUqfC
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51726
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51726 /var/tmp/spdk-raid.sock
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 51726 ']'
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100
00:08:50.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable
00:08:50.596 21:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:50.596 [2024-07-15 21:45:05.721050] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:08:51.161 [2024-07-15 21:45:05.721299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:08:51.161 EAL: TSC is not safe to use in SMP mode
00:08:51.161 EAL: TSC is not invariant
00:08:51.161 [2024-07-15 21:45:06.255647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.161 [2024-07-15 21:45:06.339915] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:08:51.161 [2024-07-15 21:45:06.342098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.161 [2024-07-15 21:45:06.342859] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:51.161 [2024-07-15 21:45:06.342873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:51.731 21:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:08:51.731 21:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0
00:08:51.731 21:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:08:51.731 21:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:51.994 BaseBdev1_malloc
00:08:51.994 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:08:52.252 true
00:08:52.252 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:52.510 [2024-07-15 21:45:07.450912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:52.510 [2024-07-15 21:45:07.450976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:52.510 [2024-07-15 21:45:07.451004] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1540cc634780
00:08:52.510 [2024-07-15 21:45:07.451013] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:52.510 [2024-07-15 21:45:07.451656] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:52.510 [2024-07-15 21:45:07.451680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:52.510 BaseBdev1
00:08:52.510 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:08:52.510 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:52.768 BaseBdev2_malloc
00:08:52.768 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:08:53.026 true
00:08:53.026 21:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:53.284 [2024-07-15 21:45:08.226952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:53.284 [2024-07-15 21:45:08.227021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:53.284 [2024-07-15 21:45:08.227065] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1540cc634c80
00:08:53.284 [2024-07-15 21:45:08.227073] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:53.284 [2024-07-15 21:45:08.227722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:53.284 [2024-07-15 21:45:08.227747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:53.284 BaseBdev2
00:08:53.284 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
00:08:53.284 [2024-07-15 21:45:08.458967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:53.284 [2024-07-15 21:45:08.459585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:53.284 [2024-07-15 21:45:08.459652] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1540cc634f00
00:08:53.284 [2024-07-15 21:45:08.459672] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:53.284 [2024-07-15 21:45:08.459704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1540cc6a0e20
00:08:53.284 [2024-07-15 21:45:08.459778] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1540cc634f00
00:08:53.284 [2024-07-15 21:45:08.459783] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1540cc634f00
00:08:53.284 [2024-07-15 21:45:08.459809] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:08:53.543 "name": "raid_bdev1",
00:08:53.543 "uuid": "813e121c-42f3-11ef-9f7f-e9a656123a8b",
00:08:53.543 "strip_size_kb": 0,
00:08:53.543 "state": "online",
00:08:53.543 "raid_level": "raid1",
00:08:53.543 "superblock": true,
00:08:53.543 "num_base_bdevs": 2,
00:08:53.543 "num_base_bdevs_discovered": 2,
00:08:53.543 "num_base_bdevs_operational": 2,
00:08:53.543 "base_bdevs_list": [
00:08:53.543 {
00:08:53.543 "name": "BaseBdev1",
00:08:53.543 "uuid": "4a5f6b0c-6779-b657-a060-9ca1a16f8789",
00:08:53.543 "is_configured": true,
00:08:53.543 "data_offset": 2048,
00:08:53.543 "data_size": 63488
00:08:53.543 },
00:08:53.543 {
00:08:53.543 "name": "BaseBdev2",
00:08:53.543 "uuid": "e3ec05db-10fb-1855-9910-0e55cb830d3f",
00:08:53.543 "is_configured": true,
00:08:53.543 "data_offset": 2048,
00:08:53.543 "data_size": 63488
00:08:53.543 }
00:08:53.543 ]
00:08:53.543 }'
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:08:53.543 21:45:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.109 21:45:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:08:54.109 21:45:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:08:54.109 [2024-07-15 21:45:09.131183] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1540cc6a0ec0
00:08:55.042 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]]
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:55.342 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:55.599 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:08:55.599 "name": "raid_bdev1",
00:08:55.599 "uuid": "813e121c-42f3-11ef-9f7f-e9a656123a8b",
00:08:55.599 "strip_size_kb": 0,
00:08:55.599 "state": "online",
00:08:55.599 "raid_level": "raid1",
00:08:55.599 "superblock": true,
00:08:55.599 "num_base_bdevs": 2,
00:08:55.599 "num_base_bdevs_discovered": 2,
00:08:55.599 "num_base_bdevs_operational": 2,
00:08:55.599 "base_bdevs_list": [
00:08:55.599 {
00:08:55.599 "name": "BaseBdev1",
00:08:55.599 "uuid": "4a5f6b0c-6779-b657-a060-9ca1a16f8789",
00:08:55.599 "is_configured": true,
00:08:55.599 "data_offset": 2048,
00:08:55.599 "data_size": 63488
00:08:55.599 },
00:08:55.599 {
00:08:55.599 "name": "BaseBdev2",
00:08:55.599 "uuid": "e3ec05db-10fb-1855-9910-0e55cb830d3f",
00:08:55.599 "is_configured": true,
00:08:55.599 "data_offset": 2048,
00:08:55.599 "data_size": 63488
00:08:55.599 }
00:08:55.599 ]
00:08:55.599 }'
00:08:55.599 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:08:55.599 21:45:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.858 21:45:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:08:56.116 [2024-07-15 21:45:11.158541] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:56.116 [2024-07-15 21:45:11.158568] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:56.116 [2024-07-15 21:45:11.158940] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:56.116 [2024-07-15 21:45:11.158950] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:56.116 [2024-07-15 21:45:11.158963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:56.116 [2024-07-15 21:45:11.158968] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1540cc634f00 name raid_bdev1, state offline
00:08:56.116 0
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51726
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 51726 ']'
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 51726
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 51726
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']'
00:08:56.116 killing process with pid 51726 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 51726'
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 51726
00:08:56.116 [2024-07-15 21:45:11.188627] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:56.116 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 51726
00:08:56.116 [2024-07-15 21:45:11.199849] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IGbMUoUqfC
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:56.375
00:08:56.375 real 0m5.672s
00:08:56.375 user 0m8.522s
00:08:56.375 sys 0m1.112s
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable
00:08:56.375 ************************************
00:08:56.375 END TEST raid_read_error_test
00:08:56.375 ************************************
00:08:56.375 21:45:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.375 21:45:11 bdev_raid -- common/autotest_common.sh@1136 -- # return 0
00:08:56.375 21:45:11 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:08:56.375 21:45:11 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:08:56.375 21:45:11 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable
00:08:56.375 21:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:56.375 ************************************
00:08:56.375 START TEST raid_write_error_test
00:08:56.375 ************************************
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 2 write
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Ng0jkQrp4x
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51854
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51854 /var/tmp/spdk-raid.sock
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 51854 ']'
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100
00:08:56.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable
00:08:56.375 21:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.375 [2024-07-15 21:45:11.441490] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:08:56.375 [2024-07-15 21:45:11.441747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:08:56.939 EAL: TSC is not safe to use in SMP mode
00:08:56.939 EAL: TSC is not invariant
00:08:56.939 [2024-07-15 21:45:11.970748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.939 [2024-07-15 21:45:12.051395] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:08:56.939 [2024-07-15 21:45:12.053499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.939 [2024-07-15 21:45:12.054311] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:56.939 [2024-07-15 21:45:12.054326] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:57.502 21:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:08:57.502 21:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0
00:08:57.502 21:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:08:57.502 21:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:57.502 BaseBdev1_malloc
00:08:57.502 21:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:08:57.758 true
00:08:57.758 21:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:58.015 [2024-07-15 21:45:13.149760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:58.015 [2024-07-15 21:45:13.149842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.015 [2024-07-15 21:45:13.149867] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c405cc34780
00:08:58.015 [2024-07-15 21:45:13.149877] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.015 [2024-07-15 21:45:13.150520] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.015 [2024-07-15 21:45:13.150549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:58.015 BaseBdev1
00:08:58.015 21:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:08:58.015 21:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:58.271 BaseBdev2_malloc
00:08:58.271 21:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:08:58.528 true
00:08:58.528 21:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:58.785 [2024-07-15 21:45:13.961783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:58.785 [2024-07-15 21:45:13.961853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.785 [2024-07-15 21:45:13.961894] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c405cc34c80
00:08:58.785 [2024-07-15 21:45:13.961902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.785 [2024-07-15 21:45:13.962561] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.785 [2024-07-15 21:45:13.962670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:58.785 BaseBdev2
00:08:59.042 21:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
00:08:59.299 [2024-07-15 21:45:14.257840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:59.299 [2024-07-15 21:45:14.258438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:59.299 [2024-07-15 21:45:14.258501] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c405cc34f00
00:08:59.299 [2024-07-15 21:45:14.258508] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:59.299 [2024-07-15 21:45:14.258550] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c405cca0e20
00:08:59.299 [2024-07-15 21:45:14.258625] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c405cc34f00
00:08:59.299 [2024-07-15 21:45:14.258630] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c405cc34f00
00:08:59.299 [2024-07-15 21:45:14.258656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:08:59.299 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:59.557 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:08:59.557 "name": "raid_bdev1",
00:08:59.557 "uuid": "84b2e7f7-42f3-11ef-9f7f-e9a656123a8b",
00:08:59.557 "strip_size_kb": 0,
00:08:59.557 "state": "online",
00:08:59.557 "raid_level": "raid1",
00:08:59.557 "superblock": true,
00:08:59.557 "num_base_bdevs": 2,
00:08:59.557 "num_base_bdevs_discovered": 2,
00:08:59.557 "num_base_bdevs_operational": 2,
00:08:59.557 "base_bdevs_list": [
00:08:59.557 {
00:08:59.557 "name": "BaseBdev1",
00:08:59.557 "uuid": "6d2d2448-bd3c-8651-87e1-bfe69b6c4ff5",
00:08:59.557 "is_configured": true,
00:08:59.557 "data_offset": 2048,
00:08:59.557 "data_size": 63488
00:08:59.557 },
00:08:59.557 {
00:08:59.557 "name": "BaseBdev2",
00:08:59.557 "uuid": "da559d79-9d52-bd51-8acf-d8962a275892",
00:08:59.557 "is_configured": true,
00:08:59.557 "data_offset": 2048,
00:08:59.557 "data_size": 63488
00:08:59.557 }
00:08:59.557 ]
00:08:59.557 }'
00:08:59.557 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:08:59.557 21:45:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.814 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:08:59.814 21:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:08:59.814 [2024-07-15 21:45:14.890031] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c405cca0ec0
00:09:00.747 21:45:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:01.004 [2024-07-15 21:45:16.111164] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:09:01.004 [2024-07-15 21:45:16.111232] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:01.004 [2024-07-15 21:45:16.111371] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x3c405cca0ec0
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]]
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:01.004 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:01.261 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:09:01.261 "name": "raid_bdev1",
00:09:01.261 "uuid": "84b2e7f7-42f3-11ef-9f7f-e9a656123a8b",
00:09:01.261 "strip_size_kb": 0,
00:09:01.261 "state": "online",
00:09:01.261 "raid_level": "raid1",
00:09:01.261 "superblock": true,
00:09:01.261 "num_base_bdevs": 2,
00:09:01.261 "num_base_bdevs_discovered": 1,
00:09:01.261 "num_base_bdevs_operational": 1,
00:09:01.261 "base_bdevs_list": [
00:09:01.261 {
00:09:01.261 "name": null,
00:09:01.261 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:01.261 "is_configured": false,
00:09:01.261 "data_offset": 2048,
00:09:01.261 "data_size": 63488
00:09:01.261 },
00:09:01.261 {
00:09:01.261 "name": "BaseBdev2",
00:09:01.261 "uuid": "da559d79-9d52-bd51-8acf-d8962a275892",
00:09:01.261 "is_configured": true,
00:09:01.261 "data_offset": 2048,
00:09:01.261 "data_size": 63488
00:09:01.261 }
00:09:01.261 ]
00:09:01.261 }'
00:09:01.261 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:09:01.261 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:01.518 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:09:01.776 [2024-07-15 21:45:16.908406] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:01.776 [2024-07-15 21:45:16.908438] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:01.776 [2024-07-15 21:45:16.908773] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:01.776 [2024-07-15 21:45:16.908782] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:01.776 [2024-07-15 21:45:16.908807] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:01.776 [2024-07-15 21:45:16.908828] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c405cc34f00 name raid_bdev1, state offline
00:09:01.776 0
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51854
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 51854 ']'
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 51854
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 51854
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']'
00:09:01.776 killing process with pid 51854
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 51854'
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 51854
00:09:01.776 [2024-07-15 21:45:16.937021] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:01.776 21:45:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 51854
00:09:01.776 [2024-07-15 21:45:16.948909] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Ng0jkQrp4x
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0
00:09:02.034 21:45:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:02.034
00:09:02.034 real 0m5.701s
00:09:02.034 user 0m8.789s
00:09:02.034 sys 0m0.879s
00:09:02.035 21:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable
00:09:02.035 ************************************
00:09:02.035 END TEST raid_write_error_test
00:09:02.035 ************************************
00:09:02.035 21:45:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.035 21:45:17 bdev_raid -- common/autotest_common.sh@1136 -- # return 0
00:09:02.035 21:45:17 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4}
00:09:02.035 21:45:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1
00:09:02.035 21:45:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:09:02.035 21:45:17 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:09:02.035 21:45:17 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable
00:09:02.035 21:45:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:02.035 ************************************
00:09:02.035 START TEST raid_state_function_test
00:09:02.035 ************************************
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid0 3 false
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg=
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51976
00:09:02.035 Process raid pid: 51976
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51976'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51976 /var/tmp/spdk-raid.sock
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 51976 ']'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:09:02.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable
00:09:02.035 21:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.035 [2024-07-15 21:45:17.183394] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization...
00:09:02.035 [2024-07-15 21:45:17.183613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:09:02.600 EAL: TSC is not safe to use in SMP mode
00:09:02.600 EAL: TSC is not invariant
00:09:02.600 [2024-07-15 21:45:17.709509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.857 [2024-07-15 21:45:17.790349] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:09:03.631 { 00:09:03.631 "name": "BaseBdev1", 00:09:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.631 "is_configured": false, 00:09:03.631 "data_offset": 0, 00:09:03.631 "data_size": 0 00:09:03.631 }, 00:09:03.631 { 00:09:03.631 "name": "BaseBdev2", 00:09:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.631 "is_configured": false, 00:09:03.631 "data_offset": 0, 00:09:03.631 "data_size": 0 00:09:03.631 }, 00:09:03.631 { 00:09:03.631 "name": "BaseBdev3", 00:09:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.631 "is_configured": false, 00:09:03.631 "data_offset": 0, 00:09:03.631 "data_size": 0 00:09:03.631 } 00:09:03.631 ] 00:09:03.631 }' 00:09:03.631 21:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:03.631 21:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.889 21:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:04.147 [2024-07-15 21:45:19.297491] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.147 [2024-07-15 21:45:19.297518] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x331ddb834500 name Existed_Raid, state configuring 00:09:04.147 21:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:04.405 [2024-07-15 21:45:19.517500] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.405 [2024-07-15 21:45:19.517559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.405 [2024-07-15 21:45:19.517564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.405 [2024-07-15 21:45:19.517571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.405 [2024-07-15 21:45:19.517574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.405 [2024-07-15 21:45:19.517581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.405 21:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.663 [2024-07-15 21:45:19.746485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.663 BaseBdev1 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:04.663 21:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:04.921 21:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.179 [ 00:09:05.179 { 00:09:05.179 "name": "BaseBdev1", 00:09:05.179 "aliases": [ 00:09:05.179 "87f84399-42f3-11ef-9f7f-e9a656123a8b" 00:09:05.179 ], 00:09:05.179 "product_name": "Malloc disk", 00:09:05.179 "block_size": 512, 00:09:05.179 "num_blocks": 65536, 00:09:05.179 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:05.179 "assigned_rate_limits": { 00:09:05.179 "rw_ios_per_sec": 0, 00:09:05.179 "rw_mbytes_per_sec": 0, 00:09:05.179 "r_mbytes_per_sec": 0, 00:09:05.179 "w_mbytes_per_sec": 0 00:09:05.179 }, 00:09:05.179 "claimed": true, 00:09:05.179 "claim_type": "exclusive_write", 00:09:05.179 "zoned": false, 00:09:05.179 "supported_io_types": { 00:09:05.179 "read": true, 00:09:05.179 "write": true, 00:09:05.179 "unmap": true, 00:09:05.179 "flush": true, 00:09:05.179 "reset": true, 00:09:05.179 "nvme_admin": false, 00:09:05.179 "nvme_io": false, 00:09:05.179 "nvme_io_md": false, 00:09:05.179 "write_zeroes": true, 00:09:05.179 "zcopy": true, 00:09:05.179 "get_zone_info": false, 00:09:05.179 "zone_management": false, 00:09:05.179 "zone_append": false, 00:09:05.179 "compare": false, 00:09:05.179 "compare_and_write": false, 00:09:05.179 "abort": true, 00:09:05.179 "seek_hole": false, 00:09:05.179 "seek_data": false, 00:09:05.179 "copy": true, 00:09:05.179 "nvme_iov_md": false 00:09:05.179 }, 00:09:05.179 "memory_domains": [ 00:09:05.179 { 00:09:05.179 "dma_device_id": "system", 00:09:05.179 "dma_device_type": 1 00:09:05.179 }, 00:09:05.179 { 00:09:05.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.179 "dma_device_type": 2 00:09:05.179 } 00:09:05.179 ], 00:09:05.179 "driver_specific": {} 00:09:05.179 } 00:09:05.179 ] 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:05.179 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:05.180 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:05.180 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:05.180 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:05.180 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.180 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.438 21:45:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.438 "name": "Existed_Raid", 00:09:05.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.438 "strip_size_kb": 64, 00:09:05.438 "state": "configuring", 00:09:05.438 "raid_level": "raid0", 00:09:05.438 "superblock": false, 00:09:05.438 "num_base_bdevs": 3, 00:09:05.438 "num_base_bdevs_discovered": 1, 00:09:05.438 "num_base_bdevs_operational": 3, 00:09:05.438 "base_bdevs_list": [ 00:09:05.438 { 00:09:05.438 "name": "BaseBdev1", 00:09:05.438 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:05.438 "is_configured": true, 00:09:05.438 "data_offset": 0, 00:09:05.438 "data_size": 65536 00:09:05.438 }, 00:09:05.438 { 00:09:05.438 "name": "BaseBdev2", 00:09:05.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.438 "is_configured": false, 00:09:05.438 "data_offset": 0, 00:09:05.438 "data_size": 0 00:09:05.438 }, 00:09:05.438 { 00:09:05.438 "name": "BaseBdev3", 00:09:05.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.438 "is_configured": false, 00:09:05.438 "data_offset": 0, 00:09:05.438 "data_size": 0 00:09:05.438 } 00:09:05.438 ] 00:09:05.438 }' 00:09:05.438 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.438 21:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.696 21:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:05.955 [2024-07-15 21:45:21.053574] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.955 [2024-07-15 21:45:21.053602] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x331ddb834500 name Existed_Raid, state configuring 00:09:05.955 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:06.212 [2024-07-15 21:45:21.281589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.212 [2024-07-15 21:45:21.282496] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.212 [2024-07-15 21:45:21.282552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.212 [2024-07-15 21:45:21.282556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.212 [2024-07-15 21:45:21.282564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:06.212 21:45:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.212 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.469 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:06.469 "name": "Existed_Raid", 00:09:06.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.469 "strip_size_kb": 64, 00:09:06.469 "state": "configuring", 00:09:06.469 "raid_level": "raid0", 00:09:06.469 "superblock": false, 00:09:06.469 "num_base_bdevs": 3, 00:09:06.469 "num_base_bdevs_discovered": 1, 00:09:06.469 "num_base_bdevs_operational": 3, 00:09:06.469 "base_bdevs_list": [ 00:09:06.469 { 00:09:06.469 "name": "BaseBdev1", 00:09:06.469 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:06.469 "is_configured": true, 00:09:06.469 "data_offset": 0, 00:09:06.469 "data_size": 65536 00:09:06.469 }, 00:09:06.469 { 00:09:06.469 "name": "BaseBdev2", 00:09:06.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.469 "is_configured": false, 00:09:06.469 "data_offset": 0, 00:09:06.469 "data_size": 0 00:09:06.469 }, 00:09:06.469 { 00:09:06.469 "name": "BaseBdev3", 00:09:06.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.469 "is_configured": false, 00:09:06.469 "data_offset": 0, 00:09:06.469 "data_size": 0 00:09:06.469 } 00:09:06.469 ] 00:09:06.469 }' 00:09:06.469 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:06.469 21:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.727 21:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.985 [2024-07-15 21:45:22.153759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.985 BaseBdev2 00:09:06.985 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:06.985 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:09:07.243 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:07.243 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:07.243 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:07.243 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:07.243 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.501 [ 00:09:07.501 { 00:09:07.501 "name": "BaseBdev2", 00:09:07.501 "aliases": [ 00:09:07.501 "8967b64a-42f3-11ef-9f7f-e9a656123a8b" 00:09:07.501 ], 00:09:07.501 "product_name": "Malloc disk", 00:09:07.501 "block_size": 512, 00:09:07.501 "num_blocks": 65536, 00:09:07.501 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:07.501 "assigned_rate_limits": { 00:09:07.501 "rw_ios_per_sec": 0, 00:09:07.501 "rw_mbytes_per_sec": 0, 00:09:07.501 "r_mbytes_per_sec": 0, 00:09:07.501 "w_mbytes_per_sec": 0 00:09:07.501 }, 00:09:07.501 "claimed": true, 00:09:07.501 "claim_type": "exclusive_write", 00:09:07.501 "zoned": false, 00:09:07.501 "supported_io_types": { 00:09:07.501 "read": true, 00:09:07.501 "write": true, 00:09:07.501 "unmap": true, 00:09:07.501 "flush": true, 00:09:07.501 "reset": true, 00:09:07.501 "nvme_admin": false, 00:09:07.501 "nvme_io": false, 00:09:07.501 "nvme_io_md": false, 00:09:07.501 "write_zeroes": true, 00:09:07.501 "zcopy": true, 00:09:07.501 "get_zone_info": false, 00:09:07.501 "zone_management": false, 00:09:07.501 "zone_append": false, 00:09:07.501 "compare": false, 00:09:07.501 "compare_and_write": false, 00:09:07.501 "abort": true, 00:09:07.501 "seek_hole": false, 00:09:07.501 "seek_data": false, 00:09:07.501 "copy": true, 00:09:07.501 "nvme_iov_md": false 00:09:07.501 }, 00:09:07.501 "memory_domains": [ 00:09:07.501 { 00:09:07.501 "dma_device_id": "system", 00:09:07.501 "dma_device_type": 1 00:09:07.501 }, 00:09:07.501 { 00:09:07.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.501 "dma_device_type": 2 00:09:07.501 } 00:09:07.501 ], 00:09:07.501 "driver_specific": {} 00:09:07.501 } 00:09:07.501 ] 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.501 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
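The trace just above is the test's verify_raid_bdev_state helper at work: it declares the expected fields as shell locals, pulls the full raid bdev list over the dedicated RPC socket, and filters for the volume under test with jq. Below is a minimal sketch of the same check done by hand; the rpc, sock, and info variable names are illustrative additions, while the rpc.py commands, socket path, and jq filter are exactly as traced in this log.

# Sketch only -- annotation, not part of the captured run. Reproduces the
# state check that verify_raid_bdev_state performs against the running app.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Fetch every raid bdev and keep only the one named Existed_Raid.
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "Existed_Raid")')

# The fields the helper asserts on (see the raid_bdev_info dumps below).
state=$(jq -r '.state' <<< "$info")                          # e.g. "configuring"
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info") # e.g. 0..3
echo "Existed_Raid: state=$state discovered=$discovered"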
00:09:07.759 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.759 "name": "Existed_Raid", 00:09:07.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.759 "strip_size_kb": 64, 00:09:07.759 "state": "configuring", 00:09:07.759 "raid_level": "raid0", 00:09:07.759 "superblock": false, 00:09:07.759 "num_base_bdevs": 3, 00:09:07.759 "num_base_bdevs_discovered": 2, 00:09:07.759 "num_base_bdevs_operational": 3, 00:09:07.759 "base_bdevs_list": [ 00:09:07.759 { 00:09:07.759 "name": "BaseBdev1", 00:09:07.759 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:07.759 "is_configured": true, 00:09:07.759 "data_offset": 0, 00:09:07.759 "data_size": 65536 00:09:07.759 }, 00:09:07.759 { 00:09:07.759 "name": "BaseBdev2", 00:09:07.759 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:07.759 "is_configured": true, 00:09:07.759 "data_offset": 0, 00:09:07.759 "data_size": 65536 00:09:07.759 }, 00:09:07.759 { 00:09:07.759 "name": "BaseBdev3", 00:09:07.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.759 "is_configured": false, 00:09:07.759 "data_offset": 0, 00:09:07.759 "data_size": 0 00:09:07.759 } 00:09:07.759 ] 00:09:07.759 }' 00:09:07.759 21:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.759 21:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 21:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.327 [2024-07-15 21:45:23.501826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.327 [2024-07-15 21:45:23.501854] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x331ddb834a00 00:09:08.327 [2024-07-15 21:45:23.501874] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:08.327 [2024-07-15 21:45:23.501893] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x331ddb897e20 00:09:08.327 [2024-07-15 21:45:23.501989] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x331ddb834a00 00:09:08.327 [2024-07-15 21:45:23.501994] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x331ddb834a00 00:09:08.327 [2024-07-15 21:45:23.502027] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.327 BaseBdev3 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:08.585 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:08.843 21:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.101 [ 00:09:09.101 { 00:09:09.101 "name": "BaseBdev3", 00:09:09.101 "aliases": [ 00:09:09.101 "8a3569c2-42f3-11ef-9f7f-e9a656123a8b" 00:09:09.101 ], 00:09:09.101 "product_name": "Malloc disk", 00:09:09.101 "block_size": 512, 00:09:09.101 "num_blocks": 65536, 00:09:09.101 "uuid": "8a3569c2-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.101 "assigned_rate_limits": { 00:09:09.101 "rw_ios_per_sec": 0, 00:09:09.101 "rw_mbytes_per_sec": 0, 00:09:09.101 "r_mbytes_per_sec": 0, 00:09:09.101 "w_mbytes_per_sec": 0 00:09:09.101 }, 00:09:09.101 "claimed": true, 00:09:09.101 "claim_type": "exclusive_write", 00:09:09.101 "zoned": false, 00:09:09.101 "supported_io_types": { 00:09:09.101 "read": true, 00:09:09.101 "write": true, 00:09:09.101 "unmap": true, 00:09:09.101 "flush": true, 00:09:09.101 "reset": true, 00:09:09.101 "nvme_admin": false, 00:09:09.101 "nvme_io": false, 00:09:09.101 "nvme_io_md": false, 00:09:09.101 "write_zeroes": true, 00:09:09.101 "zcopy": true, 00:09:09.101 "get_zone_info": false, 00:09:09.101 "zone_management": false, 00:09:09.101 "zone_append": false, 00:09:09.101 "compare": false, 00:09:09.101 "compare_and_write": false, 00:09:09.101 "abort": true, 00:09:09.101 "seek_hole": false, 00:09:09.101 "seek_data": false, 00:09:09.101 "copy": true, 00:09:09.101 "nvme_iov_md": false 00:09:09.101 }, 00:09:09.101 "memory_domains": [ 00:09:09.101 { 00:09:09.101 "dma_device_id": "system", 00:09:09.101 "dma_device_type": 1 00:09:09.101 }, 00:09:09.101 { 00:09:09.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.101 "dma_device_type": 2 00:09:09.101 } 00:09:09.101 ], 00:09:09.101 "driver_specific": {} 00:09:09.101 } 00:09:09.101 ] 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.101 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.358 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:09:09.358 "name": "Existed_Raid", 00:09:09.358 "uuid": "8a357024-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.358 "strip_size_kb": 64, 00:09:09.358 "state": "online", 00:09:09.358 "raid_level": "raid0", 00:09:09.358 "superblock": false, 00:09:09.358 "num_base_bdevs": 3, 00:09:09.359 "num_base_bdevs_discovered": 3, 00:09:09.359 "num_base_bdevs_operational": 3, 00:09:09.359 "base_bdevs_list": [ 00:09:09.359 { 00:09:09.359 "name": "BaseBdev1", 00:09:09.359 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.359 "is_configured": true, 00:09:09.359 "data_offset": 0, 00:09:09.359 "data_size": 65536 00:09:09.359 }, 00:09:09.359 { 00:09:09.359 "name": "BaseBdev2", 00:09:09.359 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.359 "is_configured": true, 00:09:09.359 "data_offset": 0, 00:09:09.359 "data_size": 65536 00:09:09.359 }, 00:09:09.359 { 00:09:09.359 "name": "BaseBdev3", 00:09:09.359 "uuid": "8a3569c2-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.359 "is_configured": true, 00:09:09.359 "data_offset": 0, 00:09:09.359 "data_size": 65536 00:09:09.359 } 00:09:09.359 ] 00:09:09.359 }' 00:09:09.359 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.359 21:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:09.619 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:09.877 [2024-07-15 21:45:24.933825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.877 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:09.877 "name": "Existed_Raid", 00:09:09.877 "aliases": [ 00:09:09.877 "8a357024-42f3-11ef-9f7f-e9a656123a8b" 00:09:09.877 ], 00:09:09.877 "product_name": "Raid Volume", 00:09:09.877 "block_size": 512, 00:09:09.877 "num_blocks": 196608, 00:09:09.877 "uuid": "8a357024-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.877 "assigned_rate_limits": { 00:09:09.877 "rw_ios_per_sec": 0, 00:09:09.877 "rw_mbytes_per_sec": 0, 00:09:09.877 "r_mbytes_per_sec": 0, 00:09:09.877 "w_mbytes_per_sec": 0 00:09:09.877 }, 00:09:09.877 "claimed": false, 00:09:09.877 "zoned": false, 00:09:09.877 "supported_io_types": { 00:09:09.877 "read": true, 00:09:09.877 "write": true, 00:09:09.877 "unmap": true, 00:09:09.877 "flush": true, 00:09:09.877 "reset": true, 00:09:09.877 "nvme_admin": false, 00:09:09.877 "nvme_io": false, 00:09:09.877 "nvme_io_md": false, 00:09:09.877 "write_zeroes": true, 00:09:09.877 "zcopy": false, 00:09:09.877 "get_zone_info": false, 00:09:09.877 "zone_management": false, 00:09:09.877 "zone_append": false, 00:09:09.877 "compare": false, 
00:09:09.877 "compare_and_write": false, 00:09:09.877 "abort": false, 00:09:09.877 "seek_hole": false, 00:09:09.877 "seek_data": false, 00:09:09.877 "copy": false, 00:09:09.877 "nvme_iov_md": false 00:09:09.877 }, 00:09:09.877 "memory_domains": [ 00:09:09.877 { 00:09:09.877 "dma_device_id": "system", 00:09:09.877 "dma_device_type": 1 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.877 "dma_device_type": 2 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "dma_device_id": "system", 00:09:09.877 "dma_device_type": 1 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.877 "dma_device_type": 2 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "dma_device_id": "system", 00:09:09.877 "dma_device_type": 1 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.877 "dma_device_type": 2 00:09:09.877 } 00:09:09.877 ], 00:09:09.877 "driver_specific": { 00:09:09.877 "raid": { 00:09:09.877 "uuid": "8a357024-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.877 "strip_size_kb": 64, 00:09:09.877 "state": "online", 00:09:09.877 "raid_level": "raid0", 00:09:09.877 "superblock": false, 00:09:09.877 "num_base_bdevs": 3, 00:09:09.877 "num_base_bdevs_discovered": 3, 00:09:09.877 "num_base_bdevs_operational": 3, 00:09:09.877 "base_bdevs_list": [ 00:09:09.877 { 00:09:09.877 "name": "BaseBdev1", 00:09:09.877 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.877 "is_configured": true, 00:09:09.877 "data_offset": 0, 00:09:09.877 "data_size": 65536 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "name": "BaseBdev2", 00:09:09.878 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.878 "is_configured": true, 00:09:09.878 "data_offset": 0, 00:09:09.878 "data_size": 65536 00:09:09.878 }, 00:09:09.878 { 00:09:09.878 "name": "BaseBdev3", 00:09:09.878 "uuid": "8a3569c2-42f3-11ef-9f7f-e9a656123a8b", 00:09:09.878 "is_configured": true, 00:09:09.878 "data_offset": 0, 00:09:09.878 "data_size": 65536 00:09:09.878 } 00:09:09.878 ] 00:09:09.878 } 00:09:09.878 } 00:09:09.878 }' 00:09:09.878 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.878 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:09.878 BaseBdev2 00:09:09.878 BaseBdev3' 00:09:09.878 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:09.878 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:09.878 21:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:10.136 "name": "BaseBdev1", 00:09:10.136 "aliases": [ 00:09:10.136 "87f84399-42f3-11ef-9f7f-e9a656123a8b" 00:09:10.136 ], 00:09:10.136 "product_name": "Malloc disk", 00:09:10.136 "block_size": 512, 00:09:10.136 "num_blocks": 65536, 00:09:10.136 "uuid": "87f84399-42f3-11ef-9f7f-e9a656123a8b", 00:09:10.136 "assigned_rate_limits": { 00:09:10.136 "rw_ios_per_sec": 0, 00:09:10.136 "rw_mbytes_per_sec": 0, 00:09:10.136 "r_mbytes_per_sec": 0, 00:09:10.136 "w_mbytes_per_sec": 0 00:09:10.136 }, 00:09:10.136 "claimed": true, 00:09:10.136 "claim_type": "exclusive_write", 00:09:10.136 "zoned": false, 00:09:10.136 
"supported_io_types": { 00:09:10.136 "read": true, 00:09:10.136 "write": true, 00:09:10.136 "unmap": true, 00:09:10.136 "flush": true, 00:09:10.136 "reset": true, 00:09:10.136 "nvme_admin": false, 00:09:10.136 "nvme_io": false, 00:09:10.136 "nvme_io_md": false, 00:09:10.136 "write_zeroes": true, 00:09:10.136 "zcopy": true, 00:09:10.136 "get_zone_info": false, 00:09:10.136 "zone_management": false, 00:09:10.136 "zone_append": false, 00:09:10.136 "compare": false, 00:09:10.136 "compare_and_write": false, 00:09:10.136 "abort": true, 00:09:10.136 "seek_hole": false, 00:09:10.136 "seek_data": false, 00:09:10.136 "copy": true, 00:09:10.136 "nvme_iov_md": false 00:09:10.136 }, 00:09:10.136 "memory_domains": [ 00:09:10.136 { 00:09:10.136 "dma_device_id": "system", 00:09:10.136 "dma_device_type": 1 00:09:10.136 }, 00:09:10.136 { 00:09:10.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.136 "dma_device_type": 2 00:09:10.136 } 00:09:10.136 ], 00:09:10.136 "driver_specific": {} 00:09:10.136 }' 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.136 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:10.394 "name": "BaseBdev2", 00:09:10.394 "aliases": [ 00:09:10.394 "8967b64a-42f3-11ef-9f7f-e9a656123a8b" 00:09:10.394 ], 00:09:10.394 "product_name": "Malloc disk", 00:09:10.394 "block_size": 512, 00:09:10.394 "num_blocks": 65536, 00:09:10.394 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:10.394 "assigned_rate_limits": { 00:09:10.394 "rw_ios_per_sec": 0, 00:09:10.394 "rw_mbytes_per_sec": 0, 00:09:10.394 "r_mbytes_per_sec": 0, 00:09:10.394 "w_mbytes_per_sec": 0 00:09:10.394 }, 00:09:10.394 "claimed": true, 00:09:10.394 "claim_type": "exclusive_write", 00:09:10.394 "zoned": false, 00:09:10.394 "supported_io_types": { 00:09:10.394 "read": true, 00:09:10.394 "write": true, 00:09:10.394 "unmap": true, 00:09:10.394 "flush": true, 00:09:10.394 "reset": true, 00:09:10.394 "nvme_admin": false, 
00:09:10.394 "nvme_io": false, 00:09:10.394 "nvme_io_md": false, 00:09:10.394 "write_zeroes": true, 00:09:10.394 "zcopy": true, 00:09:10.394 "get_zone_info": false, 00:09:10.394 "zone_management": false, 00:09:10.394 "zone_append": false, 00:09:10.394 "compare": false, 00:09:10.394 "compare_and_write": false, 00:09:10.394 "abort": true, 00:09:10.394 "seek_hole": false, 00:09:10.394 "seek_data": false, 00:09:10.394 "copy": true, 00:09:10.394 "nvme_iov_md": false 00:09:10.394 }, 00:09:10.394 "memory_domains": [ 00:09:10.394 { 00:09:10.394 "dma_device_id": "system", 00:09:10.394 "dma_device_type": 1 00:09:10.394 }, 00:09:10.394 { 00:09:10.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.394 "dma_device_type": 2 00:09:10.394 } 00:09:10.394 ], 00:09:10.394 "driver_specific": {} 00:09:10.394 }' 00:09:10.394 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.652 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:10.653 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:10.909 "name": "BaseBdev3", 00:09:10.909 "aliases": [ 00:09:10.909 "8a3569c2-42f3-11ef-9f7f-e9a656123a8b" 00:09:10.909 ], 00:09:10.909 "product_name": "Malloc disk", 00:09:10.909 "block_size": 512, 00:09:10.909 "num_blocks": 65536, 00:09:10.909 "uuid": "8a3569c2-42f3-11ef-9f7f-e9a656123a8b", 00:09:10.909 "assigned_rate_limits": { 00:09:10.909 "rw_ios_per_sec": 0, 00:09:10.909 "rw_mbytes_per_sec": 0, 00:09:10.909 "r_mbytes_per_sec": 0, 00:09:10.909 "w_mbytes_per_sec": 0 00:09:10.909 }, 00:09:10.909 "claimed": true, 00:09:10.909 "claim_type": "exclusive_write", 00:09:10.909 "zoned": false, 00:09:10.909 "supported_io_types": { 00:09:10.909 "read": true, 00:09:10.909 "write": true, 00:09:10.909 "unmap": true, 00:09:10.909 "flush": true, 00:09:10.909 "reset": true, 00:09:10.909 "nvme_admin": false, 00:09:10.909 "nvme_io": false, 00:09:10.909 "nvme_io_md": false, 00:09:10.909 "write_zeroes": true, 00:09:10.909 "zcopy": true, 00:09:10.909 "get_zone_info": false, 00:09:10.909 "zone_management": 
false, 00:09:10.909 "zone_append": false, 00:09:10.909 "compare": false, 00:09:10.909 "compare_and_write": false, 00:09:10.909 "abort": true, 00:09:10.909 "seek_hole": false, 00:09:10.909 "seek_data": false, 00:09:10.909 "copy": true, 00:09:10.909 "nvme_iov_md": false 00:09:10.909 }, 00:09:10.909 "memory_domains": [ 00:09:10.909 { 00:09:10.909 "dma_device_id": "system", 00:09:10.909 "dma_device_type": 1 00:09:10.909 }, 00:09:10.909 { 00:09:10.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.909 "dma_device_type": 2 00:09:10.909 } 00:09:10.909 ], 00:09:10.909 "driver_specific": {} 00:09:10.909 }' 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:10.909 21:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:11.166 [2024-07-15 21:45:26.201826] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.166 [2024-07-15 21:45:26.201885] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.166 [2024-07-15 21:45:26.201938] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.166 21:45:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.166 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.424 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.424 "name": "Existed_Raid", 00:09:11.424 "uuid": "8a357024-42f3-11ef-9f7f-e9a656123a8b", 00:09:11.424 "strip_size_kb": 64, 00:09:11.424 "state": "offline", 00:09:11.424 "raid_level": "raid0", 00:09:11.424 "superblock": false, 00:09:11.424 "num_base_bdevs": 3, 00:09:11.424 "num_base_bdevs_discovered": 2, 00:09:11.424 "num_base_bdevs_operational": 2, 00:09:11.424 "base_bdevs_list": [ 00:09:11.424 { 00:09:11.424 "name": null, 00:09:11.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.424 "is_configured": false, 00:09:11.424 "data_offset": 0, 00:09:11.424 "data_size": 65536 00:09:11.424 }, 00:09:11.424 { 00:09:11.424 "name": "BaseBdev2", 00:09:11.424 "uuid": "8967b64a-42f3-11ef-9f7f-e9a656123a8b", 00:09:11.424 "is_configured": true, 00:09:11.424 "data_offset": 0, 00:09:11.424 "data_size": 65536 00:09:11.424 }, 00:09:11.424 { 00:09:11.424 "name": "BaseBdev3", 00:09:11.424 "uuid": "8a3569c2-42f3-11ef-9f7f-e9a656123a8b", 00:09:11.424 "is_configured": true, 00:09:11.424 "data_offset": 0, 00:09:11.424 "data_size": 65536 00:09:11.424 } 00:09:11.424 ] 00:09:11.424 }' 00:09:11.424 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.424 21:45:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.682 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:11.682 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:11.682 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.682 21:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:11.939 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:11.939 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.939 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:12.196 [2024-07-15 21:45:27.344143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.196 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:12.196 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:12.196 21:45:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.196 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:12.760 [2024-07-15 21:45:27.901994] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.760 [2024-07-15 21:45:27.902027] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x331ddb834a00 name Existed_Raid, state offline 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.760 21:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:13.018 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.282 BaseBdev2 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:13.282 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:13.847 21:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.106 [ 00:09:14.106 { 00:09:14.106 "name": "BaseBdev2", 00:09:14.106 "aliases": [ 00:09:14.106 "8d25c047-42f3-11ef-9f7f-e9a656123a8b" 00:09:14.106 ], 00:09:14.106 "product_name": "Malloc disk", 00:09:14.106 "block_size": 512, 00:09:14.106 "num_blocks": 65536, 00:09:14.106 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 
00:09:14.106 "assigned_rate_limits": { 00:09:14.106 "rw_ios_per_sec": 0, 00:09:14.106 "rw_mbytes_per_sec": 0, 00:09:14.106 "r_mbytes_per_sec": 0, 00:09:14.106 "w_mbytes_per_sec": 0 00:09:14.106 }, 00:09:14.106 "claimed": false, 00:09:14.106 "zoned": false, 00:09:14.106 "supported_io_types": { 00:09:14.106 "read": true, 00:09:14.106 "write": true, 00:09:14.106 "unmap": true, 00:09:14.106 "flush": true, 00:09:14.106 "reset": true, 00:09:14.106 "nvme_admin": false, 00:09:14.106 "nvme_io": false, 00:09:14.106 "nvme_io_md": false, 00:09:14.106 "write_zeroes": true, 00:09:14.106 "zcopy": true, 00:09:14.106 "get_zone_info": false, 00:09:14.106 "zone_management": false, 00:09:14.106 "zone_append": false, 00:09:14.106 "compare": false, 00:09:14.106 "compare_and_write": false, 00:09:14.106 "abort": true, 00:09:14.106 "seek_hole": false, 00:09:14.106 "seek_data": false, 00:09:14.106 "copy": true, 00:09:14.106 "nvme_iov_md": false 00:09:14.106 }, 00:09:14.106 "memory_domains": [ 00:09:14.106 { 00:09:14.106 "dma_device_id": "system", 00:09:14.106 "dma_device_type": 1 00:09:14.106 }, 00:09:14.106 { 00:09:14.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.106 "dma_device_type": 2 00:09:14.106 } 00:09:14.106 ], 00:09:14.106 "driver_specific": {} 00:09:14.106 } 00:09:14.107 ] 00:09:14.107 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:14.107 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:14.107 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:14.107 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.365 BaseBdev3 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:14.365 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.623 [ 00:09:14.623 { 00:09:14.623 "name": "BaseBdev3", 00:09:14.623 "aliases": [ 00:09:14.623 "8da99664-42f3-11ef-9f7f-e9a656123a8b" 00:09:14.623 ], 00:09:14.623 "product_name": "Malloc disk", 00:09:14.623 "block_size": 512, 00:09:14.623 "num_blocks": 65536, 00:09:14.623 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:14.623 "assigned_rate_limits": { 00:09:14.623 "rw_ios_per_sec": 0, 00:09:14.623 "rw_mbytes_per_sec": 0, 00:09:14.623 "r_mbytes_per_sec": 0, 00:09:14.623 "w_mbytes_per_sec": 0 00:09:14.623 }, 00:09:14.623 "claimed": false, 00:09:14.623 "zoned": false, 00:09:14.623 "supported_io_types": { 00:09:14.623 "read": true, 00:09:14.623 "write": 
true, 00:09:14.623 "unmap": true, 00:09:14.623 "flush": true, 00:09:14.623 "reset": true, 00:09:14.623 "nvme_admin": false, 00:09:14.623 "nvme_io": false, 00:09:14.623 "nvme_io_md": false, 00:09:14.623 "write_zeroes": true, 00:09:14.623 "zcopy": true, 00:09:14.623 "get_zone_info": false, 00:09:14.623 "zone_management": false, 00:09:14.623 "zone_append": false, 00:09:14.623 "compare": false, 00:09:14.623 "compare_and_write": false, 00:09:14.623 "abort": true, 00:09:14.623 "seek_hole": false, 00:09:14.623 "seek_data": false, 00:09:14.623 "copy": true, 00:09:14.623 "nvme_iov_md": false 00:09:14.623 }, 00:09:14.623 "memory_domains": [ 00:09:14.623 { 00:09:14.623 "dma_device_id": "system", 00:09:14.623 "dma_device_type": 1 00:09:14.623 }, 00:09:14.623 { 00:09:14.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.623 "dma_device_type": 2 00:09:14.623 } 00:09:14.623 ], 00:09:14.623 "driver_specific": {} 00:09:14.623 } 00:09:14.623 ] 00:09:14.623 21:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:14.623 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:14.623 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:14.623 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:14.880 [2024-07-15 21:45:29.972268] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.880 [2024-07-15 21:45:29.972337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.880 [2024-07-15 21:45:29.972361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.880 [2024-07-15 21:45:29.972953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.880 21:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.138 21:45:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.138 "name": "Existed_Raid", 00:09:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.138 "strip_size_kb": 64, 00:09:15.138 "state": "configuring", 00:09:15.138 "raid_level": "raid0", 00:09:15.138 "superblock": false, 00:09:15.138 "num_base_bdevs": 3, 00:09:15.138 "num_base_bdevs_discovered": 2, 00:09:15.138 "num_base_bdevs_operational": 3, 00:09:15.138 "base_bdevs_list": [ 00:09:15.138 { 00:09:15.138 "name": "BaseBdev1", 00:09:15.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.138 "is_configured": false, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 0 00:09:15.138 }, 00:09:15.138 { 00:09:15.138 "name": "BaseBdev2", 00:09:15.138 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:15.138 "is_configured": true, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 65536 00:09:15.138 }, 00:09:15.138 { 00:09:15.138 "name": "BaseBdev3", 00:09:15.138 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:15.138 "is_configured": true, 00:09:15.138 "data_offset": 0, 00:09:15.138 "data_size": 65536 00:09:15.138 } 00:09:15.138 ] 00:09:15.138 }' 00:09:15.138 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.138 21:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.395 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:15.653 [2024-07-15 21:45:30.688302] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.653 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.911 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.911 "name": "Existed_Raid", 00:09:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.911 "strip_size_kb": 64, 00:09:15.911 "state": "configuring", 00:09:15.911 "raid_level": "raid0", 00:09:15.911 "superblock": false, 00:09:15.911 "num_base_bdevs": 3, 00:09:15.911 "num_base_bdevs_discovered": 1, 
00:09:15.911 "num_base_bdevs_operational": 3, 00:09:15.911 "base_bdevs_list": [ 00:09:15.911 { 00:09:15.911 "name": "BaseBdev1", 00:09:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.911 "is_configured": false, 00:09:15.911 "data_offset": 0, 00:09:15.911 "data_size": 0 00:09:15.911 }, 00:09:15.911 { 00:09:15.911 "name": null, 00:09:15.911 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:15.911 "is_configured": false, 00:09:15.911 "data_offset": 0, 00:09:15.911 "data_size": 65536 00:09:15.911 }, 00:09:15.911 { 00:09:15.911 "name": "BaseBdev3", 00:09:15.911 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:15.911 "is_configured": true, 00:09:15.911 "data_offset": 0, 00:09:15.911 "data_size": 65536 00:09:15.911 } 00:09:15.911 ] 00:09:15.911 }' 00:09:15.911 21:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.911 21:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.169 21:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.169 21:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.427 21:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:16.427 21:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.685 [2024-07-15 21:45:31.640468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.685 BaseBdev1 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:16.685 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:16.943 21:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.943 [ 00:09:16.943 { 00:09:16.943 "name": "BaseBdev1", 00:09:16.943 "aliases": [ 00:09:16.943 "8f0f4579-42f3-11ef-9f7f-e9a656123a8b" 00:09:16.943 ], 00:09:16.943 "product_name": "Malloc disk", 00:09:16.943 "block_size": 512, 00:09:16.943 "num_blocks": 65536, 00:09:16.943 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:16.943 "assigned_rate_limits": { 00:09:16.943 "rw_ios_per_sec": 0, 00:09:16.943 "rw_mbytes_per_sec": 0, 00:09:16.943 "r_mbytes_per_sec": 0, 00:09:16.943 "w_mbytes_per_sec": 0 00:09:16.943 }, 00:09:16.943 "claimed": true, 00:09:16.943 "claim_type": "exclusive_write", 00:09:16.943 "zoned": false, 00:09:16.943 "supported_io_types": { 00:09:16.943 "read": true, 00:09:16.943 "write": true, 00:09:16.943 "unmap": 
true, 00:09:16.943 "flush": true, 00:09:16.943 "reset": true, 00:09:16.943 "nvme_admin": false, 00:09:16.943 "nvme_io": false, 00:09:16.943 "nvme_io_md": false, 00:09:16.943 "write_zeroes": true, 00:09:16.943 "zcopy": true, 00:09:16.943 "get_zone_info": false, 00:09:16.943 "zone_management": false, 00:09:16.943 "zone_append": false, 00:09:16.943 "compare": false, 00:09:16.943 "compare_and_write": false, 00:09:16.943 "abort": true, 00:09:16.943 "seek_hole": false, 00:09:16.943 "seek_data": false, 00:09:16.943 "copy": true, 00:09:16.943 "nvme_iov_md": false 00:09:16.943 }, 00:09:16.943 "memory_domains": [ 00:09:16.943 { 00:09:16.943 "dma_device_id": "system", 00:09:16.943 "dma_device_type": 1 00:09:16.943 }, 00:09:16.943 { 00:09:16.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.943 "dma_device_type": 2 00:09:16.943 } 00:09:16.943 ], 00:09:16.943 "driver_specific": {} 00:09:16.943 } 00:09:16.943 ] 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:16.943 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.944 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.201 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:17.201 "name": "Existed_Raid", 00:09:17.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.201 "strip_size_kb": 64, 00:09:17.201 "state": "configuring", 00:09:17.201 "raid_level": "raid0", 00:09:17.201 "superblock": false, 00:09:17.201 "num_base_bdevs": 3, 00:09:17.201 "num_base_bdevs_discovered": 2, 00:09:17.201 "num_base_bdevs_operational": 3, 00:09:17.201 "base_bdevs_list": [ 00:09:17.201 { 00:09:17.201 "name": "BaseBdev1", 00:09:17.201 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:17.201 "is_configured": true, 00:09:17.201 "data_offset": 0, 00:09:17.201 "data_size": 65536 00:09:17.201 }, 00:09:17.201 { 00:09:17.201 "name": null, 00:09:17.201 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:17.201 "is_configured": false, 00:09:17.201 "data_offset": 0, 00:09:17.201 "data_size": 65536 00:09:17.201 }, 00:09:17.201 { 00:09:17.201 "name": "BaseBdev3", 00:09:17.201 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 
00:09:17.201 "is_configured": true, 00:09:17.201 "data_offset": 0, 00:09:17.201 "data_size": 65536 00:09:17.201 } 00:09:17.201 ] 00:09:17.201 }' 00:09:17.201 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:17.201 21:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.767 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.767 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.767 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:17.767 21:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:18.025 [2024-07-15 21:45:33.096496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:18.025 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:18.026 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.026 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.295 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.295 "name": "Existed_Raid", 00:09:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.295 "strip_size_kb": 64, 00:09:18.295 "state": "configuring", 00:09:18.295 "raid_level": "raid0", 00:09:18.295 "superblock": false, 00:09:18.295 "num_base_bdevs": 3, 00:09:18.295 "num_base_bdevs_discovered": 1, 00:09:18.295 "num_base_bdevs_operational": 3, 00:09:18.295 "base_bdevs_list": [ 00:09:18.295 { 00:09:18.295 "name": "BaseBdev1", 00:09:18.295 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:18.295 "is_configured": true, 00:09:18.295 "data_offset": 0, 00:09:18.295 "data_size": 65536 00:09:18.295 }, 00:09:18.295 { 00:09:18.295 "name": null, 00:09:18.295 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:18.295 "is_configured": false, 00:09:18.295 "data_offset": 0, 00:09:18.295 "data_size": 65536 00:09:18.295 }, 00:09:18.295 { 00:09:18.295 "name": null, 00:09:18.295 "uuid": 
"8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:18.295 "is_configured": false, 00:09:18.295 "data_offset": 0, 00:09:18.295 "data_size": 65536 00:09:18.295 } 00:09:18.295 ] 00:09:18.295 }' 00:09:18.295 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.295 21:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.553 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.553 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.810 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:18.810 21:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.068 [2024-07-15 21:45:34.200543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.068 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.069 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:19.069 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.069 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.634 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:19.634 "name": "Existed_Raid", 00:09:19.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.634 "strip_size_kb": 64, 00:09:19.634 "state": "configuring", 00:09:19.634 "raid_level": "raid0", 00:09:19.634 "superblock": false, 00:09:19.634 "num_base_bdevs": 3, 00:09:19.634 "num_base_bdevs_discovered": 2, 00:09:19.634 "num_base_bdevs_operational": 3, 00:09:19.634 "base_bdevs_list": [ 00:09:19.634 { 00:09:19.634 "name": "BaseBdev1", 00:09:19.634 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:19.634 "is_configured": true, 00:09:19.634 "data_offset": 0, 00:09:19.634 "data_size": 65536 00:09:19.634 }, 00:09:19.634 { 00:09:19.634 "name": null, 00:09:19.634 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:19.634 "is_configured": false, 00:09:19.634 "data_offset": 0, 00:09:19.634 "data_size": 65536 
00:09:19.634 }, 00:09:19.634 { 00:09:19.634 "name": "BaseBdev3", 00:09:19.634 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:19.634 "is_configured": true, 00:09:19.634 "data_offset": 0, 00:09:19.634 "data_size": 65536 00:09:19.634 } 00:09:19.634 ] 00:09:19.634 }' 00:09:19.634 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:19.634 21:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.634 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.634 21:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:20.199 [2024-07-15 21:45:35.352576] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.199 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.457 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:20.457 "name": "Existed_Raid", 00:09:20.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.457 "strip_size_kb": 64, 00:09:20.457 "state": "configuring", 00:09:20.457 "raid_level": "raid0", 00:09:20.457 "superblock": false, 00:09:20.457 "num_base_bdevs": 3, 00:09:20.457 "num_base_bdevs_discovered": 1, 00:09:20.457 "num_base_bdevs_operational": 3, 00:09:20.457 "base_bdevs_list": [ 00:09:20.457 { 00:09:20.457 "name": null, 00:09:20.457 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:20.457 "is_configured": false, 00:09:20.457 "data_offset": 0, 00:09:20.457 "data_size": 65536 00:09:20.457 }, 00:09:20.457 { 00:09:20.457 "name": null, 00:09:20.457 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:20.457 "is_configured": false, 00:09:20.457 "data_offset": 
0, 00:09:20.457 "data_size": 65536 00:09:20.457 }, 00:09:20.457 { 00:09:20.457 "name": "BaseBdev3", 00:09:20.457 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:20.457 "is_configured": true, 00:09:20.457 "data_offset": 0, 00:09:20.457 "data_size": 65536 00:09:20.457 } 00:09:20.457 ] 00:09:20.457 }' 00:09:20.457 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:20.457 21:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.023 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.023 21:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.280 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:21.280 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.538 [2024-07-15 21:45:36.502343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.538 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.817 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:21.817 "name": "Existed_Raid", 00:09:21.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.817 "strip_size_kb": 64, 00:09:21.817 "state": "configuring", 00:09:21.817 "raid_level": "raid0", 00:09:21.817 "superblock": false, 00:09:21.817 "num_base_bdevs": 3, 00:09:21.817 "num_base_bdevs_discovered": 2, 00:09:21.817 "num_base_bdevs_operational": 3, 00:09:21.817 "base_bdevs_list": [ 00:09:21.817 { 00:09:21.817 "name": null, 00:09:21.817 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:21.817 "is_configured": false, 00:09:21.817 "data_offset": 0, 00:09:21.817 "data_size": 65536 00:09:21.817 }, 00:09:21.817 { 00:09:21.817 "name": "BaseBdev2", 00:09:21.817 "uuid": 
"8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:21.817 "is_configured": true, 00:09:21.817 "data_offset": 0, 00:09:21.817 "data_size": 65536 00:09:21.817 }, 00:09:21.817 { 00:09:21.817 "name": "BaseBdev3", 00:09:21.817 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:21.817 "is_configured": true, 00:09:21.817 "data_offset": 0, 00:09:21.817 "data_size": 65536 00:09:21.817 } 00:09:21.817 ] 00:09:21.817 }' 00:09:21.817 21:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:21.817 21:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.075 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.075 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.333 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:22.333 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.333 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.591 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8f0f4579-42f3-11ef-9f7f-e9a656123a8b 00:09:22.848 [2024-07-15 21:45:37.798665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.848 [2024-07-15 21:45:37.798688] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x331ddb834a00 00:09:22.848 [2024-07-15 21:45:37.798692] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:22.848 [2024-07-15 21:45:37.798730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x331ddb897e20 00:09:22.848 [2024-07-15 21:45:37.798795] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x331ddb834a00 00:09:22.848 [2024-07-15 21:45:37.798800] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x331ddb834a00 00:09:22.848 [2024-07-15 21:45:37.798832] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.848 NewBaseBdev 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:22.848 21:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:23.106 21:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:09:23.106 [ 00:09:23.106 { 00:09:23.106 "name": "NewBaseBdev", 00:09:23.106 "aliases": [ 00:09:23.107 "8f0f4579-42f3-11ef-9f7f-e9a656123a8b" 00:09:23.107 ], 00:09:23.107 "product_name": "Malloc disk", 00:09:23.107 "block_size": 512, 00:09:23.107 "num_blocks": 65536, 00:09:23.107 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.107 "assigned_rate_limits": { 00:09:23.107 "rw_ios_per_sec": 0, 00:09:23.107 "rw_mbytes_per_sec": 0, 00:09:23.107 "r_mbytes_per_sec": 0, 00:09:23.107 "w_mbytes_per_sec": 0 00:09:23.107 }, 00:09:23.107 "claimed": true, 00:09:23.107 "claim_type": "exclusive_write", 00:09:23.107 "zoned": false, 00:09:23.107 "supported_io_types": { 00:09:23.107 "read": true, 00:09:23.107 "write": true, 00:09:23.107 "unmap": true, 00:09:23.107 "flush": true, 00:09:23.107 "reset": true, 00:09:23.107 "nvme_admin": false, 00:09:23.107 "nvme_io": false, 00:09:23.107 "nvme_io_md": false, 00:09:23.107 "write_zeroes": true, 00:09:23.107 "zcopy": true, 00:09:23.107 "get_zone_info": false, 00:09:23.107 "zone_management": false, 00:09:23.107 "zone_append": false, 00:09:23.107 "compare": false, 00:09:23.107 "compare_and_write": false, 00:09:23.107 "abort": true, 00:09:23.107 "seek_hole": false, 00:09:23.107 "seek_data": false, 00:09:23.107 "copy": true, 00:09:23.107 "nvme_iov_md": false 00:09:23.107 }, 00:09:23.107 "memory_domains": [ 00:09:23.107 { 00:09:23.107 "dma_device_id": "system", 00:09:23.107 "dma_device_type": 1 00:09:23.107 }, 00:09:23.107 { 00:09:23.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.107 "dma_device_type": 2 00:09:23.107 } 00:09:23.107 ], 00:09:23.107 "driver_specific": {} 00:09:23.107 } 00:09:23.107 ] 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.107 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.364 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:23.364 "name": "Existed_Raid", 00:09:23.364 "uuid": "92baf648-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.364 "strip_size_kb": 64, 00:09:23.364 "state": "online", 00:09:23.364 "raid_level": "raid0", 
00:09:23.364 "superblock": false, 00:09:23.364 "num_base_bdevs": 3, 00:09:23.364 "num_base_bdevs_discovered": 3, 00:09:23.364 "num_base_bdevs_operational": 3, 00:09:23.364 "base_bdevs_list": [ 00:09:23.364 { 00:09:23.364 "name": "NewBaseBdev", 00:09:23.364 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.364 "is_configured": true, 00:09:23.364 "data_offset": 0, 00:09:23.364 "data_size": 65536 00:09:23.364 }, 00:09:23.364 { 00:09:23.364 "name": "BaseBdev2", 00:09:23.364 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.364 "is_configured": true, 00:09:23.364 "data_offset": 0, 00:09:23.364 "data_size": 65536 00:09:23.364 }, 00:09:23.364 { 00:09:23.364 "name": "BaseBdev3", 00:09:23.364 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.364 "is_configured": true, 00:09:23.364 "data_offset": 0, 00:09:23.364 "data_size": 65536 00:09:23.364 } 00:09:23.364 ] 00:09:23.364 }' 00:09:23.364 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:23.364 21:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:23.621 21:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:23.879 [2024-07-15 21:45:39.030641] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:23.879 "name": "Existed_Raid", 00:09:23.879 "aliases": [ 00:09:23.879 "92baf648-42f3-11ef-9f7f-e9a656123a8b" 00:09:23.879 ], 00:09:23.879 "product_name": "Raid Volume", 00:09:23.879 "block_size": 512, 00:09:23.879 "num_blocks": 196608, 00:09:23.879 "uuid": "92baf648-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.879 "assigned_rate_limits": { 00:09:23.879 "rw_ios_per_sec": 0, 00:09:23.879 "rw_mbytes_per_sec": 0, 00:09:23.879 "r_mbytes_per_sec": 0, 00:09:23.879 "w_mbytes_per_sec": 0 00:09:23.879 }, 00:09:23.879 "claimed": false, 00:09:23.879 "zoned": false, 00:09:23.879 "supported_io_types": { 00:09:23.879 "read": true, 00:09:23.879 "write": true, 00:09:23.879 "unmap": true, 00:09:23.879 "flush": true, 00:09:23.879 "reset": true, 00:09:23.879 "nvme_admin": false, 00:09:23.879 "nvme_io": false, 00:09:23.879 "nvme_io_md": false, 00:09:23.879 "write_zeroes": true, 00:09:23.879 "zcopy": false, 00:09:23.879 "get_zone_info": false, 00:09:23.879 "zone_management": false, 00:09:23.879 "zone_append": false, 00:09:23.879 "compare": false, 00:09:23.879 "compare_and_write": false, 00:09:23.879 "abort": false, 00:09:23.879 "seek_hole": false, 00:09:23.879 "seek_data": false, 00:09:23.879 "copy": false, 00:09:23.879 "nvme_iov_md": false 00:09:23.879 }, 00:09:23.879 
"memory_domains": [ 00:09:23.879 { 00:09:23.879 "dma_device_id": "system", 00:09:23.879 "dma_device_type": 1 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.879 "dma_device_type": 2 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "system", 00:09:23.879 "dma_device_type": 1 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.879 "dma_device_type": 2 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "system", 00:09:23.879 "dma_device_type": 1 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.879 "dma_device_type": 2 00:09:23.879 } 00:09:23.879 ], 00:09:23.879 "driver_specific": { 00:09:23.879 "raid": { 00:09:23.879 "uuid": "92baf648-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.879 "strip_size_kb": 64, 00:09:23.879 "state": "online", 00:09:23.879 "raid_level": "raid0", 00:09:23.879 "superblock": false, 00:09:23.879 "num_base_bdevs": 3, 00:09:23.879 "num_base_bdevs_discovered": 3, 00:09:23.879 "num_base_bdevs_operational": 3, 00:09:23.879 "base_bdevs_list": [ 00:09:23.879 { 00:09:23.879 "name": "NewBaseBdev", 00:09:23.879 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.879 "is_configured": true, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "name": "BaseBdev2", 00:09:23.879 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.879 "is_configured": true, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "name": "BaseBdev3", 00:09:23.879 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:23.879 "is_configured": true, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 65536 00:09:23.879 } 00:09:23.879 ] 00:09:23.879 } 00:09:23.879 } 00:09:23.879 }' 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:23.879 BaseBdev2 00:09:23.879 BaseBdev3' 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:23.879 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:24.137 "name": "NewBaseBdev", 00:09:24.137 "aliases": [ 00:09:24.137 "8f0f4579-42f3-11ef-9f7f-e9a656123a8b" 00:09:24.137 ], 00:09:24.137 "product_name": "Malloc disk", 00:09:24.137 "block_size": 512, 00:09:24.137 "num_blocks": 65536, 00:09:24.137 "uuid": "8f0f4579-42f3-11ef-9f7f-e9a656123a8b", 00:09:24.137 "assigned_rate_limits": { 00:09:24.137 "rw_ios_per_sec": 0, 00:09:24.137 "rw_mbytes_per_sec": 0, 00:09:24.137 "r_mbytes_per_sec": 0, 00:09:24.137 "w_mbytes_per_sec": 0 00:09:24.137 }, 00:09:24.137 "claimed": true, 00:09:24.137 "claim_type": "exclusive_write", 00:09:24.137 "zoned": false, 00:09:24.137 "supported_io_types": { 00:09:24.137 "read": true, 00:09:24.137 "write": true, 00:09:24.137 "unmap": true, 00:09:24.137 "flush": true, 00:09:24.137 "reset": true, 00:09:24.137 "nvme_admin": false, 00:09:24.137 "nvme_io": false, 
00:09:24.137 "nvme_io_md": false, 00:09:24.137 "write_zeroes": true, 00:09:24.137 "zcopy": true, 00:09:24.137 "get_zone_info": false, 00:09:24.137 "zone_management": false, 00:09:24.137 "zone_append": false, 00:09:24.137 "compare": false, 00:09:24.137 "compare_and_write": false, 00:09:24.137 "abort": true, 00:09:24.137 "seek_hole": false, 00:09:24.137 "seek_data": false, 00:09:24.137 "copy": true, 00:09:24.137 "nvme_iov_md": false 00:09:24.137 }, 00:09:24.137 "memory_domains": [ 00:09:24.137 { 00:09:24.137 "dma_device_id": "system", 00:09:24.137 "dma_device_type": 1 00:09:24.137 }, 00:09:24.137 { 00:09:24.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.137 "dma_device_type": 2 00:09:24.137 } 00:09:24.137 ], 00:09:24.137 "driver_specific": {} 00:09:24.137 }' 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.137 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:24.395 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:24.653 "name": "BaseBdev2", 00:09:24.653 "aliases": [ 00:09:24.653 "8d25c047-42f3-11ef-9f7f-e9a656123a8b" 00:09:24.653 ], 00:09:24.653 "product_name": "Malloc disk", 00:09:24.653 "block_size": 512, 00:09:24.653 "num_blocks": 65536, 00:09:24.653 "uuid": "8d25c047-42f3-11ef-9f7f-e9a656123a8b", 00:09:24.653 "assigned_rate_limits": { 00:09:24.653 "rw_ios_per_sec": 0, 00:09:24.653 "rw_mbytes_per_sec": 0, 00:09:24.653 "r_mbytes_per_sec": 0, 00:09:24.653 "w_mbytes_per_sec": 0 00:09:24.653 }, 00:09:24.653 "claimed": true, 00:09:24.653 "claim_type": "exclusive_write", 00:09:24.653 "zoned": false, 00:09:24.653 "supported_io_types": { 00:09:24.653 "read": true, 00:09:24.653 "write": true, 00:09:24.653 "unmap": true, 00:09:24.653 "flush": true, 00:09:24.653 "reset": true, 00:09:24.653 "nvme_admin": false, 00:09:24.653 "nvme_io": false, 00:09:24.653 "nvme_io_md": false, 00:09:24.653 "write_zeroes": true, 00:09:24.653 "zcopy": true, 00:09:24.653 "get_zone_info": false, 00:09:24.653 "zone_management": false, 00:09:24.653 "zone_append": 
false, 00:09:24.653 "compare": false, 00:09:24.653 "compare_and_write": false, 00:09:24.653 "abort": true, 00:09:24.653 "seek_hole": false, 00:09:24.653 "seek_data": false, 00:09:24.653 "copy": true, 00:09:24.653 "nvme_iov_md": false 00:09:24.653 }, 00:09:24.653 "memory_domains": [ 00:09:24.653 { 00:09:24.653 "dma_device_id": "system", 00:09:24.653 "dma_device_type": 1 00:09:24.653 }, 00:09:24.653 { 00:09:24.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.653 "dma_device_type": 2 00:09:24.653 } 00:09:24.653 ], 00:09:24.653 "driver_specific": {} 00:09:24.653 }' 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:24.653 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:24.911 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:24.911 "name": "BaseBdev3", 00:09:24.911 "aliases": [ 00:09:24.911 "8da99664-42f3-11ef-9f7f-e9a656123a8b" 00:09:24.911 ], 00:09:24.911 "product_name": "Malloc disk", 00:09:24.911 "block_size": 512, 00:09:24.911 "num_blocks": 65536, 00:09:24.911 "uuid": "8da99664-42f3-11ef-9f7f-e9a656123a8b", 00:09:24.911 "assigned_rate_limits": { 00:09:24.911 "rw_ios_per_sec": 0, 00:09:24.911 "rw_mbytes_per_sec": 0, 00:09:24.911 "r_mbytes_per_sec": 0, 00:09:24.911 "w_mbytes_per_sec": 0 00:09:24.911 }, 00:09:24.911 "claimed": true, 00:09:24.911 "claim_type": "exclusive_write", 00:09:24.911 "zoned": false, 00:09:24.911 "supported_io_types": { 00:09:24.911 "read": true, 00:09:24.911 "write": true, 00:09:24.911 "unmap": true, 00:09:24.911 "flush": true, 00:09:24.911 "reset": true, 00:09:24.911 "nvme_admin": false, 00:09:24.911 "nvme_io": false, 00:09:24.911 "nvme_io_md": false, 00:09:24.911 "write_zeroes": true, 00:09:24.911 "zcopy": true, 00:09:24.911 "get_zone_info": false, 00:09:24.911 "zone_management": false, 00:09:24.911 "zone_append": false, 00:09:24.911 "compare": false, 00:09:24.911 "compare_and_write": false, 00:09:24.911 "abort": true, 00:09:24.911 "seek_hole": false, 00:09:24.911 "seek_data": false, 00:09:24.911 "copy": true, 
00:09:24.911 "nvme_iov_md": false 00:09:24.911 }, 00:09:24.911 "memory_domains": [ 00:09:24.911 { 00:09:24.911 "dma_device_id": "system", 00:09:24.911 "dma_device_type": 1 00:09:24.911 }, 00:09:24.911 { 00:09:24.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.911 "dma_device_type": 2 00:09:24.911 } 00:09:24.911 ], 00:09:24.911 "driver_specific": {} 00:09:24.911 }' 00:09:24.911 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.911 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:24.911 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:24.911 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:24.912 21:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:25.169 [2024-07-15 21:45:40.186632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.169 [2024-07-15 21:45:40.186657] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.169 [2024-07-15 21:45:40.186695] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.169 [2024-07-15 21:45:40.186708] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.169 [2024-07-15 21:45:40.186712] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x331ddb834a00 name Existed_Raid, state offline 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51976 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 51976 ']' 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 51976 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 51976 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:09:25.169 killing process with pid 51976 00:09:25.169 21:45:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 51976' 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 51976 00:09:25.169 [2024-07-15 21:45:40.213451] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.169 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 51976 00:09:25.169 [2024-07-15 21:45:40.230184] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:25.427 ************************************ 00:09:25.427 END TEST raid_state_function_test 00:09:25.427 ************************************ 00:09:25.427 00:09:25.427 real 0m23.230s 00:09:25.427 user 0m42.544s 00:09:25.427 sys 0m3.079s 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.427 21:45:40 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:09:25.427 21:45:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:25.427 21:45:40 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:09:25.427 21:45:40 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:25.427 21:45:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.427 ************************************ 00:09:25.427 START TEST raid_state_function_test_sb 00:09:25.427 ************************************ 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test raid0 3 true 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52701 00:09:25.427 Process raid pid: 52701 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52701' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52701 /var/tmp/spdk-raid.sock 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 52701 ']' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:25.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:25.427 21:45:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.427 [2024-07-15 21:45:40.463195] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:09:25.427 [2024-07-15 21:45:40.463462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:25.992 EAL: TSC is not safe to use in SMP mode 00:09:25.992 EAL: TSC is not invariant 00:09:25.992 [2024-07-15 21:45:41.012519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.992 [2024-07-15 21:45:41.095442] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
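The verify_raid_bdev_state helper traced repeatedly above (bdev/bdev_raid.sh @116-@128) follows the same pattern each time: fetch all raid bdevs over the test RPC socket, select the named one with jq, and compare its fields against the expected values. A minimal sketch of that pattern, reconstructed from the xtrace; the comparisons past line @128 are not visible in this log, so the body below is an assumption:

    # Sketch of the state check driven in the trace above. $rpc points at the
    # same socket the log uses throughout.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        # @126 in the trace: dump all raid bdevs, keep the named one.
        raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Assumed checks against the fields shown in the JSON blobs above.
        [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]] &&
            [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]] &&
            [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]] &&
            [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }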
00:09:25.992 [2024-07-15 21:45:41.097513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.992 [2024-07-15 21:45:41.098297] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.992 [2024-07-15 21:45:41.098312] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.558 21:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:26.558 21:45:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:09:26.558 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:26.558 [2024-07-15 21:45:41.742605] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.558 [2024-07-15 21:45:41.742678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.558 [2024-07-15 21:45:41.742683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.558 [2024-07-15 21:45:41.742692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.558 [2024-07-15 21:45:41.742696] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.558 [2024-07-15 21:45:41.742703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.816 21:45:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.074 21:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.074 "name": "Existed_Raid", 00:09:27.074 "uuid": "9514c042-42f3-11ef-9f7f-e9a656123a8b", 00:09:27.074 "strip_size_kb": 64, 00:09:27.074 "state": "configuring", 00:09:27.074 "raid_level": "raid0", 00:09:27.074 "superblock": true, 00:09:27.074 "num_base_bdevs": 3, 00:09:27.074 "num_base_bdevs_discovered": 0, 00:09:27.074 
"num_base_bdevs_operational": 3, 00:09:27.074 "base_bdevs_list": [ 00:09:27.074 { 00:09:27.074 "name": "BaseBdev1", 00:09:27.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.074 "is_configured": false, 00:09:27.074 "data_offset": 0, 00:09:27.074 "data_size": 0 00:09:27.074 }, 00:09:27.074 { 00:09:27.074 "name": "BaseBdev2", 00:09:27.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.074 "is_configured": false, 00:09:27.074 "data_offset": 0, 00:09:27.074 "data_size": 0 00:09:27.074 }, 00:09:27.074 { 00:09:27.074 "name": "BaseBdev3", 00:09:27.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.074 "is_configured": false, 00:09:27.074 "data_offset": 0, 00:09:27.074 "data_size": 0 00:09:27.074 } 00:09:27.074 ] 00:09:27.074 }' 00:09:27.074 21:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.074 21:45:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.355 21:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:27.616 [2024-07-15 21:45:42.558594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.616 [2024-07-15 21:45:42.558619] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3762aa434500 name Existed_Raid, state configuring 00:09:27.616 21:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:27.616 [2024-07-15 21:45:42.790621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.616 [2024-07-15 21:45:42.790671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.616 [2024-07-15 21:45:42.790677] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.616 [2024-07-15 21:45:42.790685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.616 [2024-07-15 21:45:42.790689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.616 [2024-07-15 21:45:42.790696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.874 21:45:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.132 [2024-07-15 21:45:43.075563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.132 BaseBdev1 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:28.132 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:28.132 21:45:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.389 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.647 [ 00:09:28.647 { 00:09:28.647 "name": "BaseBdev1", 00:09:28.647 "aliases": [ 00:09:28.647 "95e0006e-42f3-11ef-9f7f-e9a656123a8b" 00:09:28.647 ], 00:09:28.647 "product_name": "Malloc disk", 00:09:28.647 "block_size": 512, 00:09:28.647 "num_blocks": 65536, 00:09:28.647 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:28.647 "assigned_rate_limits": { 00:09:28.648 "rw_ios_per_sec": 0, 00:09:28.648 "rw_mbytes_per_sec": 0, 00:09:28.648 "r_mbytes_per_sec": 0, 00:09:28.648 "w_mbytes_per_sec": 0 00:09:28.648 }, 00:09:28.648 "claimed": true, 00:09:28.648 "claim_type": "exclusive_write", 00:09:28.648 "zoned": false, 00:09:28.648 "supported_io_types": { 00:09:28.648 "read": true, 00:09:28.648 "write": true, 00:09:28.648 "unmap": true, 00:09:28.648 "flush": true, 00:09:28.648 "reset": true, 00:09:28.648 "nvme_admin": false, 00:09:28.648 "nvme_io": false, 00:09:28.648 "nvme_io_md": false, 00:09:28.648 "write_zeroes": true, 00:09:28.648 "zcopy": true, 00:09:28.648 "get_zone_info": false, 00:09:28.648 "zone_management": false, 00:09:28.648 "zone_append": false, 00:09:28.648 "compare": false, 00:09:28.648 "compare_and_write": false, 00:09:28.648 "abort": true, 00:09:28.648 "seek_hole": false, 00:09:28.648 "seek_data": false, 00:09:28.648 "copy": true, 00:09:28.648 "nvme_iov_md": false 00:09:28.648 }, 00:09:28.648 "memory_domains": [ 00:09:28.648 { 00:09:28.648 "dma_device_id": "system", 00:09:28.648 "dma_device_type": 1 00:09:28.648 }, 00:09:28.648 { 00:09:28.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.648 "dma_device_type": 2 00:09:28.648 } 00:09:28.648 ], 00:09:28.648 "driver_specific": {} 00:09:28.648 } 00:09:28.648 ] 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.648 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.648 21:45:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.906 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.906 "name": "Existed_Raid", 00:09:28.906 "uuid": "95b4aa67-42f3-11ef-9f7f-e9a656123a8b", 00:09:28.906 "strip_size_kb": 64, 00:09:28.906 "state": "configuring", 00:09:28.906 "raid_level": "raid0", 00:09:28.906 "superblock": true, 00:09:28.906 "num_base_bdevs": 3, 00:09:28.906 "num_base_bdevs_discovered": 1, 00:09:28.906 "num_base_bdevs_operational": 3, 00:09:28.906 "base_bdevs_list": [ 00:09:28.906 { 00:09:28.906 "name": "BaseBdev1", 00:09:28.906 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:28.906 "is_configured": true, 00:09:28.906 "data_offset": 2048, 00:09:28.906 "data_size": 63488 00:09:28.906 }, 00:09:28.906 { 00:09:28.906 "name": "BaseBdev2", 00:09:28.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.906 "is_configured": false, 00:09:28.906 "data_offset": 0, 00:09:28.906 "data_size": 0 00:09:28.906 }, 00:09:28.906 { 00:09:28.906 "name": "BaseBdev3", 00:09:28.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.906 "is_configured": false, 00:09:28.906 "data_offset": 0, 00:09:28.906 "data_size": 0 00:09:28.906 } 00:09:28.906 ] 00:09:28.906 }' 00:09:28.906 21:45:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.906 21:45:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.471 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:29.471 [2024-07-15 21:45:44.574664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.471 [2024-07-15 21:45:44.574697] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3762aa434500 name Existed_Raid, state configuring 00:09:29.471 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:29.729 [2024-07-15 21:45:44.802690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.729 [2024-07-15 21:45:44.803487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.729 [2024-07-15 21:45:44.803526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.729 [2024-07-15 21:45:44.803531] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.729 [2024-07-15 21:45:44.803539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:29.729 21:45:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.729 21:45:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.005 21:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:30.005 "name": "Existed_Raid", 00:09:30.005 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:30.005 "strip_size_kb": 64, 00:09:30.005 "state": "configuring", 00:09:30.005 "raid_level": "raid0", 00:09:30.005 "superblock": true, 00:09:30.005 "num_base_bdevs": 3, 00:09:30.005 "num_base_bdevs_discovered": 1, 00:09:30.005 "num_base_bdevs_operational": 3, 00:09:30.005 "base_bdevs_list": [ 00:09:30.005 { 00:09:30.005 "name": "BaseBdev1", 00:09:30.005 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:30.005 "is_configured": true, 00:09:30.005 "data_offset": 2048, 00:09:30.005 "data_size": 63488 00:09:30.005 }, 00:09:30.005 { 00:09:30.005 "name": "BaseBdev2", 00:09:30.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.005 "is_configured": false, 00:09:30.005 "data_offset": 0, 00:09:30.005 "data_size": 0 00:09:30.005 }, 00:09:30.005 { 00:09:30.005 "name": "BaseBdev3", 00:09:30.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.005 "is_configured": false, 00:09:30.005 "data_offset": 0, 00:09:30.005 "data_size": 0 00:09:30.005 } 00:09:30.005 ] 00:09:30.005 }' 00:09:30.005 21:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.005 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.287 21:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.544 [2024-07-15 21:45:45.610831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.544 BaseBdev2 00:09:30.544 21:45:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:30.544 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:09:30.544 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:30.545 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:30.545 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:30.545 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:30.545 
21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:30.803 21:45:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.061 [ 00:09:31.061 { 00:09:31.061 "name": "BaseBdev2", 00:09:31.061 "aliases": [ 00:09:31.062 "9762fa14-42f3-11ef-9f7f-e9a656123a8b" 00:09:31.062 ], 00:09:31.062 "product_name": "Malloc disk", 00:09:31.062 "block_size": 512, 00:09:31.062 "num_blocks": 65536, 00:09:31.062 "uuid": "9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:31.062 "assigned_rate_limits": { 00:09:31.062 "rw_ios_per_sec": 0, 00:09:31.062 "rw_mbytes_per_sec": 0, 00:09:31.062 "r_mbytes_per_sec": 0, 00:09:31.062 "w_mbytes_per_sec": 0 00:09:31.062 }, 00:09:31.062 "claimed": true, 00:09:31.062 "claim_type": "exclusive_write", 00:09:31.062 "zoned": false, 00:09:31.062 "supported_io_types": { 00:09:31.062 "read": true, 00:09:31.062 "write": true, 00:09:31.062 "unmap": true, 00:09:31.062 "flush": true, 00:09:31.062 "reset": true, 00:09:31.062 "nvme_admin": false, 00:09:31.062 "nvme_io": false, 00:09:31.062 "nvme_io_md": false, 00:09:31.062 "write_zeroes": true, 00:09:31.062 "zcopy": true, 00:09:31.062 "get_zone_info": false, 00:09:31.062 "zone_management": false, 00:09:31.062 "zone_append": false, 00:09:31.062 "compare": false, 00:09:31.062 "compare_and_write": false, 00:09:31.062 "abort": true, 00:09:31.062 "seek_hole": false, 00:09:31.062 "seek_data": false, 00:09:31.062 "copy": true, 00:09:31.062 "nvme_iov_md": false 00:09:31.062 }, 00:09:31.062 "memory_domains": [ 00:09:31.062 { 00:09:31.062 "dma_device_id": "system", 00:09:31.062 "dma_device_type": 1 00:09:31.062 }, 00:09:31.062 { 00:09:31.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.062 "dma_device_type": 2 00:09:31.062 } 00:09:31.062 ], 00:09:31.062 "driver_specific": {} 00:09:31.062 } 00:09:31.062 ] 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
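# --- Annotation (not part of the trace): a minimal sketch, assuming the same
# rpc.py path and socket shown above, of the create-and-wait pattern the
# harness just traced for BaseBdev2. 32 MB at a 512 B block size yields the
# 65536 blocks seen in the bdev dumps; -t 2000 mirrors waitforbdev's 2000 ms
# bdev_timeout.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b BaseBdev2   # create the malloc base bdev
$RPC bdev_wait_for_examine                    # let bdev examine callbacks settle
$RPC bdev_get_bdevs -b BaseBdev2 -t 2000      # block until the bdev appears, 2 s cap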
00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.062 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.320 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.320 "name": "Existed_Raid", 00:09:31.320 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:31.320 "strip_size_kb": 64, 00:09:31.320 "state": "configuring", 00:09:31.320 "raid_level": "raid0", 00:09:31.320 "superblock": true, 00:09:31.320 "num_base_bdevs": 3, 00:09:31.320 "num_base_bdevs_discovered": 2, 00:09:31.320 "num_base_bdevs_operational": 3, 00:09:31.320 "base_bdevs_list": [ 00:09:31.320 { 00:09:31.320 "name": "BaseBdev1", 00:09:31.320 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:31.320 "is_configured": true, 00:09:31.320 "data_offset": 2048, 00:09:31.320 "data_size": 63488 00:09:31.320 }, 00:09:31.320 { 00:09:31.320 "name": "BaseBdev2", 00:09:31.320 "uuid": "9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:31.320 "is_configured": true, 00:09:31.320 "data_offset": 2048, 00:09:31.320 "data_size": 63488 00:09:31.320 }, 00:09:31.320 { 00:09:31.320 "name": "BaseBdev3", 00:09:31.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.320 "is_configured": false, 00:09:31.320 "data_offset": 0, 00:09:31.320 "data_size": 0 00:09:31.320 } 00:09:31.320 ] 00:09:31.320 }' 00:09:31.320 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.320 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.578 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.836 [2024-07-15 21:45:46.886865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.836 [2024-07-15 21:45:46.886958] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3762aa434a00 00:09:31.836 [2024-07-15 21:45:46.886965] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.836 [2024-07-15 21:45:46.886985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3762aa497e20 00:09:31.836 [2024-07-15 21:45:46.887036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3762aa434a00 00:09:31.836 [2024-07-15 21:45:46.887040] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3762aa434a00 00:09:31.836 [2024-07-15 21:45:46.887060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.836 BaseBdev3 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # 
bdev_timeout=2000 00:09:31.836 21:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:32.094 21:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.352 [ 00:09:32.352 { 00:09:32.352 "name": "BaseBdev3", 00:09:32.352 "aliases": [ 00:09:32.352 "9825aed2-42f3-11ef-9f7f-e9a656123a8b" 00:09:32.352 ], 00:09:32.352 "product_name": "Malloc disk", 00:09:32.352 "block_size": 512, 00:09:32.352 "num_blocks": 65536, 00:09:32.352 "uuid": "9825aed2-42f3-11ef-9f7f-e9a656123a8b", 00:09:32.352 "assigned_rate_limits": { 00:09:32.352 "rw_ios_per_sec": 0, 00:09:32.352 "rw_mbytes_per_sec": 0, 00:09:32.352 "r_mbytes_per_sec": 0, 00:09:32.352 "w_mbytes_per_sec": 0 00:09:32.352 }, 00:09:32.352 "claimed": true, 00:09:32.352 "claim_type": "exclusive_write", 00:09:32.352 "zoned": false, 00:09:32.352 "supported_io_types": { 00:09:32.352 "read": true, 00:09:32.352 "write": true, 00:09:32.352 "unmap": true, 00:09:32.352 "flush": true, 00:09:32.352 "reset": true, 00:09:32.352 "nvme_admin": false, 00:09:32.352 "nvme_io": false, 00:09:32.352 "nvme_io_md": false, 00:09:32.352 "write_zeroes": true, 00:09:32.352 "zcopy": true, 00:09:32.352 "get_zone_info": false, 00:09:32.352 "zone_management": false, 00:09:32.352 "zone_append": false, 00:09:32.352 "compare": false, 00:09:32.352 "compare_and_write": false, 00:09:32.352 "abort": true, 00:09:32.352 "seek_hole": false, 00:09:32.352 "seek_data": false, 00:09:32.352 "copy": true, 00:09:32.352 "nvme_iov_md": false 00:09:32.352 }, 00:09:32.352 "memory_domains": [ 00:09:32.352 { 00:09:32.352 "dma_device_id": "system", 00:09:32.352 "dma_device_type": 1 00:09:32.352 }, 00:09:32.352 { 00:09:32.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.352 "dma_device_type": 2 00:09:32.352 } 00:09:32.352 ], 00:09:32.352 "driver_specific": {} 00:09:32.352 } 00:09:32.352 ] 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.352 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.610 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.610 "name": "Existed_Raid", 00:09:32.610 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:32.610 "strip_size_kb": 64, 00:09:32.610 "state": "online", 00:09:32.610 "raid_level": "raid0", 00:09:32.610 "superblock": true, 00:09:32.610 "num_base_bdevs": 3, 00:09:32.610 "num_base_bdevs_discovered": 3, 00:09:32.610 "num_base_bdevs_operational": 3, 00:09:32.610 "base_bdevs_list": [ 00:09:32.610 { 00:09:32.610 "name": "BaseBdev1", 00:09:32.610 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:32.610 "is_configured": true, 00:09:32.610 "data_offset": 2048, 00:09:32.610 "data_size": 63488 00:09:32.610 }, 00:09:32.610 { 00:09:32.610 "name": "BaseBdev2", 00:09:32.610 "uuid": "9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:32.610 "is_configured": true, 00:09:32.610 "data_offset": 2048, 00:09:32.610 "data_size": 63488 00:09:32.610 }, 00:09:32.610 { 00:09:32.610 "name": "BaseBdev3", 00:09:32.610 "uuid": "9825aed2-42f3-11ef-9f7f-e9a656123a8b", 00:09:32.610 "is_configured": true, 00:09:32.610 "data_offset": 2048, 00:09:32.610 "data_size": 63488 00:09:32.610 } 00:09:32.610 ] 00:09:32.610 }' 00:09:32.610 21:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.610 21:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:32.867 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:33.125 [2024-07-15 21:45:48.286773] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.125 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:33.125 "name": "Existed_Raid", 00:09:33.125 "aliases": [ 00:09:33.125 "96e7aec2-42f3-11ef-9f7f-e9a656123a8b" 00:09:33.125 ], 00:09:33.125 "product_name": "Raid Volume", 00:09:33.125 "block_size": 512, 00:09:33.125 "num_blocks": 190464, 00:09:33.125 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.125 "assigned_rate_limits": { 00:09:33.125 "rw_ios_per_sec": 0, 00:09:33.125 "rw_mbytes_per_sec": 0, 00:09:33.125 "r_mbytes_per_sec": 0, 00:09:33.125 "w_mbytes_per_sec": 0 00:09:33.125 }, 00:09:33.125 "claimed": false, 00:09:33.125 "zoned": false, 
00:09:33.125 "supported_io_types": { 00:09:33.125 "read": true, 00:09:33.125 "write": true, 00:09:33.125 "unmap": true, 00:09:33.125 "flush": true, 00:09:33.125 "reset": true, 00:09:33.125 "nvme_admin": false, 00:09:33.125 "nvme_io": false, 00:09:33.125 "nvme_io_md": false, 00:09:33.125 "write_zeroes": true, 00:09:33.125 "zcopy": false, 00:09:33.125 "get_zone_info": false, 00:09:33.125 "zone_management": false, 00:09:33.125 "zone_append": false, 00:09:33.125 "compare": false, 00:09:33.125 "compare_and_write": false, 00:09:33.125 "abort": false, 00:09:33.125 "seek_hole": false, 00:09:33.125 "seek_data": false, 00:09:33.125 "copy": false, 00:09:33.125 "nvme_iov_md": false 00:09:33.125 }, 00:09:33.125 "memory_domains": [ 00:09:33.125 { 00:09:33.125 "dma_device_id": "system", 00:09:33.125 "dma_device_type": 1 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.125 "dma_device_type": 2 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "dma_device_id": "system", 00:09:33.125 "dma_device_type": 1 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.125 "dma_device_type": 2 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "dma_device_id": "system", 00:09:33.125 "dma_device_type": 1 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.125 "dma_device_type": 2 00:09:33.125 } 00:09:33.125 ], 00:09:33.125 "driver_specific": { 00:09:33.125 "raid": { 00:09:33.125 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.125 "strip_size_kb": 64, 00:09:33.125 "state": "online", 00:09:33.125 "raid_level": "raid0", 00:09:33.125 "superblock": true, 00:09:33.125 "num_base_bdevs": 3, 00:09:33.125 "num_base_bdevs_discovered": 3, 00:09:33.125 "num_base_bdevs_operational": 3, 00:09:33.125 "base_bdevs_list": [ 00:09:33.125 { 00:09:33.125 "name": "BaseBdev1", 00:09:33.125 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.125 "is_configured": true, 00:09:33.125 "data_offset": 2048, 00:09:33.125 "data_size": 63488 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "name": "BaseBdev2", 00:09:33.125 "uuid": "9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.125 "is_configured": true, 00:09:33.125 "data_offset": 2048, 00:09:33.125 "data_size": 63488 00:09:33.125 }, 00:09:33.125 { 00:09:33.125 "name": "BaseBdev3", 00:09:33.125 "uuid": "9825aed2-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.125 "is_configured": true, 00:09:33.125 "data_offset": 2048, 00:09:33.125 "data_size": 63488 00:09:33.125 } 00:09:33.125 ] 00:09:33.125 } 00:09:33.125 } 00:09:33.125 }' 00:09:33.125 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.125 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:33.125 BaseBdev2 00:09:33.125 BaseBdev3' 00:09:33.125 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:33.383 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:33.383 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:33.641 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:33.641 "name": "BaseBdev1", 00:09:33.641 "aliases": [ 00:09:33.641 "95e0006e-42f3-11ef-9f7f-e9a656123a8b" 00:09:33.641 
], 00:09:33.641 "product_name": "Malloc disk", 00:09:33.641 "block_size": 512, 00:09:33.641 "num_blocks": 65536, 00:09:33.641 "uuid": "95e0006e-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.641 "assigned_rate_limits": { 00:09:33.641 "rw_ios_per_sec": 0, 00:09:33.641 "rw_mbytes_per_sec": 0, 00:09:33.641 "r_mbytes_per_sec": 0, 00:09:33.641 "w_mbytes_per_sec": 0 00:09:33.641 }, 00:09:33.641 "claimed": true, 00:09:33.641 "claim_type": "exclusive_write", 00:09:33.641 "zoned": false, 00:09:33.641 "supported_io_types": { 00:09:33.641 "read": true, 00:09:33.641 "write": true, 00:09:33.641 "unmap": true, 00:09:33.641 "flush": true, 00:09:33.641 "reset": true, 00:09:33.641 "nvme_admin": false, 00:09:33.641 "nvme_io": false, 00:09:33.641 "nvme_io_md": false, 00:09:33.641 "write_zeroes": true, 00:09:33.641 "zcopy": true, 00:09:33.641 "get_zone_info": false, 00:09:33.641 "zone_management": false, 00:09:33.641 "zone_append": false, 00:09:33.641 "compare": false, 00:09:33.641 "compare_and_write": false, 00:09:33.641 "abort": true, 00:09:33.641 "seek_hole": false, 00:09:33.641 "seek_data": false, 00:09:33.641 "copy": true, 00:09:33.641 "nvme_iov_md": false 00:09:33.641 }, 00:09:33.641 "memory_domains": [ 00:09:33.641 { 00:09:33.641 "dma_device_id": "system", 00:09:33.641 "dma_device_type": 1 00:09:33.641 }, 00:09:33.641 { 00:09:33.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.641 "dma_device_type": 2 00:09:33.642 } 00:09:33.642 ], 00:09:33.642 "driver_specific": {} 00:09:33.642 }' 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:33.642 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:33.900 "name": "BaseBdev2", 00:09:33.900 "aliases": [ 00:09:33.900 "9762fa14-42f3-11ef-9f7f-e9a656123a8b" 00:09:33.900 ], 00:09:33.900 "product_name": "Malloc disk", 00:09:33.900 "block_size": 512, 00:09:33.900 "num_blocks": 65536, 00:09:33.900 "uuid": 
"9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:33.900 "assigned_rate_limits": { 00:09:33.900 "rw_ios_per_sec": 0, 00:09:33.900 "rw_mbytes_per_sec": 0, 00:09:33.900 "r_mbytes_per_sec": 0, 00:09:33.900 "w_mbytes_per_sec": 0 00:09:33.900 }, 00:09:33.900 "claimed": true, 00:09:33.900 "claim_type": "exclusive_write", 00:09:33.900 "zoned": false, 00:09:33.900 "supported_io_types": { 00:09:33.900 "read": true, 00:09:33.900 "write": true, 00:09:33.900 "unmap": true, 00:09:33.900 "flush": true, 00:09:33.900 "reset": true, 00:09:33.900 "nvme_admin": false, 00:09:33.900 "nvme_io": false, 00:09:33.900 "nvme_io_md": false, 00:09:33.900 "write_zeroes": true, 00:09:33.900 "zcopy": true, 00:09:33.900 "get_zone_info": false, 00:09:33.900 "zone_management": false, 00:09:33.900 "zone_append": false, 00:09:33.900 "compare": false, 00:09:33.900 "compare_and_write": false, 00:09:33.900 "abort": true, 00:09:33.900 "seek_hole": false, 00:09:33.900 "seek_data": false, 00:09:33.900 "copy": true, 00:09:33.900 "nvme_iov_md": false 00:09:33.900 }, 00:09:33.900 "memory_domains": [ 00:09:33.900 { 00:09:33.900 "dma_device_id": "system", 00:09:33.900 "dma_device_type": 1 00:09:33.900 }, 00:09:33.900 { 00:09:33.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.900 "dma_device_type": 2 00:09:33.900 } 00:09:33.900 ], 00:09:33.900 "driver_specific": {} 00:09:33.900 }' 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:33.900 21:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:34.158 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:34.158 "name": "BaseBdev3", 00:09:34.158 "aliases": [ 00:09:34.158 "9825aed2-42f3-11ef-9f7f-e9a656123a8b" 00:09:34.158 ], 00:09:34.158 "product_name": "Malloc disk", 00:09:34.158 "block_size": 512, 00:09:34.158 "num_blocks": 65536, 00:09:34.158 "uuid": "9825aed2-42f3-11ef-9f7f-e9a656123a8b", 00:09:34.158 "assigned_rate_limits": { 00:09:34.158 "rw_ios_per_sec": 0, 00:09:34.158 "rw_mbytes_per_sec": 0, 
00:09:34.158 "r_mbytes_per_sec": 0, 00:09:34.158 "w_mbytes_per_sec": 0 00:09:34.158 }, 00:09:34.158 "claimed": true, 00:09:34.158 "claim_type": "exclusive_write", 00:09:34.158 "zoned": false, 00:09:34.158 "supported_io_types": { 00:09:34.158 "read": true, 00:09:34.158 "write": true, 00:09:34.158 "unmap": true, 00:09:34.158 "flush": true, 00:09:34.158 "reset": true, 00:09:34.158 "nvme_admin": false, 00:09:34.158 "nvme_io": false, 00:09:34.158 "nvme_io_md": false, 00:09:34.158 "write_zeroes": true, 00:09:34.158 "zcopy": true, 00:09:34.158 "get_zone_info": false, 00:09:34.158 "zone_management": false, 00:09:34.158 "zone_append": false, 00:09:34.158 "compare": false, 00:09:34.158 "compare_and_write": false, 00:09:34.158 "abort": true, 00:09:34.158 "seek_hole": false, 00:09:34.158 "seek_data": false, 00:09:34.158 "copy": true, 00:09:34.158 "nvme_iov_md": false 00:09:34.158 }, 00:09:34.158 "memory_domains": [ 00:09:34.158 { 00:09:34.158 "dma_device_id": "system", 00:09:34.158 "dma_device_type": 1 00:09:34.158 }, 00:09:34.158 { 00:09:34.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.158 "dma_device_type": 2 00:09:34.158 } 00:09:34.158 ], 00:09:34.158 "driver_specific": {} 00:09:34.158 }' 00:09:34.158 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.158 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:34.159 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:34.417 [2024-07-15 21:45:49.526785] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.417 [2024-07-15 21:45:49.526811] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.417 [2024-07-15 21:45:49.526849] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.417 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.675 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.675 "name": "Existed_Raid", 00:09:34.675 "uuid": "96e7aec2-42f3-11ef-9f7f-e9a656123a8b", 00:09:34.675 "strip_size_kb": 64, 00:09:34.675 "state": "offline", 00:09:34.675 "raid_level": "raid0", 00:09:34.675 "superblock": true, 00:09:34.675 "num_base_bdevs": 3, 00:09:34.675 "num_base_bdevs_discovered": 2, 00:09:34.675 "num_base_bdevs_operational": 2, 00:09:34.675 "base_bdevs_list": [ 00:09:34.675 { 00:09:34.675 "name": null, 00:09:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.675 "is_configured": false, 00:09:34.675 "data_offset": 2048, 00:09:34.675 "data_size": 63488 00:09:34.675 }, 00:09:34.675 { 00:09:34.675 "name": "BaseBdev2", 00:09:34.675 "uuid": "9762fa14-42f3-11ef-9f7f-e9a656123a8b", 00:09:34.675 "is_configured": true, 00:09:34.675 "data_offset": 2048, 00:09:34.675 "data_size": 63488 00:09:34.675 }, 00:09:34.675 { 00:09:34.675 "name": "BaseBdev3", 00:09:34.675 "uuid": "9825aed2-42f3-11ef-9f7f-e9a656123a8b", 00:09:34.675 "is_configured": true, 00:09:34.675 "data_offset": 2048, 00:09:34.675 "data_size": 63488 00:09:34.675 } 00:09:34.675 ] 00:09:34.675 }' 00:09:34.675 21:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.675 21:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.932 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:34.932 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:34.932 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.932 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:35.190 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:09:35.190 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.190 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:35.755 [2024-07-15 21:45:50.648785] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.755 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:35.755 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:35.755 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.755 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:36.015 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:36.015 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.015 21:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:36.015 [2024-07-15 21:45:51.158446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.015 [2024-07-15 21:45:51.158477] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3762aa434a00 name Existed_Raid, state offline 00:09:36.015 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:36.015 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:36.015 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.015 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:36.277 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.534 BaseBdev2 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:36.534 21:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:36.534 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:36.792 21:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.050 [ 00:09:37.050 { 00:09:37.050 "name": "BaseBdev2", 00:09:37.050 "aliases": [ 00:09:37.050 "9af92fe8-42f3-11ef-9f7f-e9a656123a8b" 00:09:37.050 ], 00:09:37.050 "product_name": "Malloc disk", 00:09:37.050 "block_size": 512, 00:09:37.050 "num_blocks": 65536, 00:09:37.050 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:37.050 "assigned_rate_limits": { 00:09:37.050 "rw_ios_per_sec": 0, 00:09:37.050 "rw_mbytes_per_sec": 0, 00:09:37.050 "r_mbytes_per_sec": 0, 00:09:37.050 "w_mbytes_per_sec": 0 00:09:37.050 }, 00:09:37.050 "claimed": false, 00:09:37.050 "zoned": false, 00:09:37.050 "supported_io_types": { 00:09:37.050 "read": true, 00:09:37.050 "write": true, 00:09:37.050 "unmap": true, 00:09:37.050 "flush": true, 00:09:37.050 "reset": true, 00:09:37.050 "nvme_admin": false, 00:09:37.050 "nvme_io": false, 00:09:37.050 "nvme_io_md": false, 00:09:37.050 "write_zeroes": true, 00:09:37.050 "zcopy": true, 00:09:37.050 "get_zone_info": false, 00:09:37.050 "zone_management": false, 00:09:37.050 "zone_append": false, 00:09:37.050 "compare": false, 00:09:37.050 "compare_and_write": false, 00:09:37.050 "abort": true, 00:09:37.050 "seek_hole": false, 00:09:37.050 "seek_data": false, 00:09:37.050 "copy": true, 00:09:37.050 "nvme_iov_md": false 00:09:37.050 }, 00:09:37.050 "memory_domains": [ 00:09:37.050 { 00:09:37.050 "dma_device_id": "system", 00:09:37.050 "dma_device_type": 1 00:09:37.050 }, 00:09:37.050 { 00:09:37.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.050 "dma_device_type": 2 00:09:37.050 } 00:09:37.050 ], 00:09:37.050 "driver_specific": {} 00:09:37.050 } 00:09:37.050 ] 00:09:37.050 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:37.050 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:37.050 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:37.050 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.309 BaseBdev3 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:37.309 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:37.566 21:45:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.825 [ 00:09:37.825 { 00:09:37.825 "name": "BaseBdev3", 00:09:37.825 "aliases": [ 00:09:37.825 "9b6ab689-42f3-11ef-9f7f-e9a656123a8b" 00:09:37.825 ], 00:09:37.825 "product_name": "Malloc disk", 00:09:37.825 "block_size": 512, 00:09:37.825 "num_blocks": 65536, 00:09:37.825 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:37.825 "assigned_rate_limits": { 00:09:37.825 "rw_ios_per_sec": 0, 00:09:37.825 "rw_mbytes_per_sec": 0, 00:09:37.825 "r_mbytes_per_sec": 0, 00:09:37.825 "w_mbytes_per_sec": 0 00:09:37.825 }, 00:09:37.825 "claimed": false, 00:09:37.825 "zoned": false, 00:09:37.825 "supported_io_types": { 00:09:37.825 "read": true, 00:09:37.825 "write": true, 00:09:37.825 "unmap": true, 00:09:37.825 "flush": true, 00:09:37.825 "reset": true, 00:09:37.825 "nvme_admin": false, 00:09:37.825 "nvme_io": false, 00:09:37.825 "nvme_io_md": false, 00:09:37.825 "write_zeroes": true, 00:09:37.825 "zcopy": true, 00:09:37.825 "get_zone_info": false, 00:09:37.825 "zone_management": false, 00:09:37.825 "zone_append": false, 00:09:37.825 "compare": false, 00:09:37.825 "compare_and_write": false, 00:09:37.825 "abort": true, 00:09:37.825 "seek_hole": false, 00:09:37.825 "seek_data": false, 00:09:37.825 "copy": true, 00:09:37.825 "nvme_iov_md": false 00:09:37.825 }, 00:09:37.825 "memory_domains": [ 00:09:37.825 { 00:09:37.825 "dma_device_id": "system", 00:09:37.825 "dma_device_type": 1 00:09:37.825 }, 00:09:37.825 { 00:09:37.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.825 "dma_device_type": 2 00:09:37.825 } 00:09:37.825 ], 00:09:37.825 "driver_specific": {} 00:09:37.825 } 00:09:37.825 ] 00:09:37.825 21:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:37.825 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:37.825 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:37.825 21:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:38.083 [2024-07-15 21:45:53.180297] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.083 [2024-07-15 21:45:53.180360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.083 [2024-07-15 21:45:53.180369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.083 [2024-07-15 21:45:53.180939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.083 21:45:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.083 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.340 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:38.340 "name": "Existed_Raid", 00:09:38.340 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:38.340 "strip_size_kb": 64, 00:09:38.340 "state": "configuring", 00:09:38.340 "raid_level": "raid0", 00:09:38.340 "superblock": true, 00:09:38.340 "num_base_bdevs": 3, 00:09:38.340 "num_base_bdevs_discovered": 2, 00:09:38.340 "num_base_bdevs_operational": 3, 00:09:38.340 "base_bdevs_list": [ 00:09:38.340 { 00:09:38.340 "name": "BaseBdev1", 00:09:38.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.340 "is_configured": false, 00:09:38.340 "data_offset": 0, 00:09:38.340 "data_size": 0 00:09:38.340 }, 00:09:38.340 { 00:09:38.340 "name": "BaseBdev2", 00:09:38.340 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:38.340 "is_configured": true, 00:09:38.340 "data_offset": 2048, 00:09:38.340 "data_size": 63488 00:09:38.340 }, 00:09:38.340 { 00:09:38.340 "name": "BaseBdev3", 00:09:38.340 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:38.340 "is_configured": true, 00:09:38.340 "data_offset": 2048, 00:09:38.340 "data_size": 63488 00:09:38.340 } 00:09:38.340 ] 00:09:38.340 }' 00:09:38.340 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:38.340 21:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.597 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:38.855 [2024-07-15 21:45:53.976336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.855 21:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.113 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.113 "name": "Existed_Raid", 00:09:39.113 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:39.113 "strip_size_kb": 64, 00:09:39.113 "state": "configuring", 00:09:39.113 "raid_level": "raid0", 00:09:39.113 "superblock": true, 00:09:39.113 "num_base_bdevs": 3, 00:09:39.113 "num_base_bdevs_discovered": 1, 00:09:39.113 "num_base_bdevs_operational": 3, 00:09:39.113 "base_bdevs_list": [ 00:09:39.113 { 00:09:39.113 "name": "BaseBdev1", 00:09:39.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.113 "is_configured": false, 00:09:39.113 "data_offset": 0, 00:09:39.113 "data_size": 0 00:09:39.113 }, 00:09:39.113 { 00:09:39.113 "name": null, 00:09:39.113 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:39.113 "is_configured": false, 00:09:39.113 "data_offset": 2048, 00:09:39.113 "data_size": 63488 00:09:39.113 }, 00:09:39.113 { 00:09:39.113 "name": "BaseBdev3", 00:09:39.113 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:39.113 "is_configured": true, 00:09:39.113 "data_offset": 2048, 00:09:39.113 "data_size": 63488 00:09:39.113 } 00:09:39.113 ] 00:09:39.113 }' 00:09:39.113 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.113 21:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.371 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.371 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.629 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:39.629 21:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.886 [2024-07-15 21:45:55.036535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.886 BaseBdev1 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:39.886 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.451 [ 00:09:40.451 { 00:09:40.451 "name": "BaseBdev1", 00:09:40.451 "aliases": [ 00:09:40.451 "9d013a3d-42f3-11ef-9f7f-e9a656123a8b" 00:09:40.451 ], 00:09:40.451 "product_name": "Malloc disk", 00:09:40.451 "block_size": 512, 00:09:40.451 "num_blocks": 65536, 00:09:40.451 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:40.451 "assigned_rate_limits": { 00:09:40.451 "rw_ios_per_sec": 0, 00:09:40.451 "rw_mbytes_per_sec": 0, 00:09:40.451 "r_mbytes_per_sec": 0, 00:09:40.451 "w_mbytes_per_sec": 0 00:09:40.451 }, 00:09:40.451 "claimed": true, 00:09:40.451 "claim_type": "exclusive_write", 00:09:40.451 "zoned": false, 00:09:40.451 "supported_io_types": { 00:09:40.451 "read": true, 00:09:40.451 "write": true, 00:09:40.451 "unmap": true, 00:09:40.451 "flush": true, 00:09:40.451 "reset": true, 00:09:40.451 "nvme_admin": false, 00:09:40.451 "nvme_io": false, 00:09:40.451 "nvme_io_md": false, 00:09:40.451 "write_zeroes": true, 00:09:40.451 "zcopy": true, 00:09:40.451 "get_zone_info": false, 00:09:40.451 "zone_management": false, 00:09:40.451 "zone_append": false, 00:09:40.451 "compare": false, 00:09:40.451 "compare_and_write": false, 00:09:40.451 "abort": true, 00:09:40.451 "seek_hole": false, 00:09:40.451 "seek_data": false, 00:09:40.451 "copy": true, 00:09:40.451 "nvme_iov_md": false 00:09:40.451 }, 00:09:40.451 "memory_domains": [ 00:09:40.451 { 00:09:40.451 "dma_device_id": "system", 00:09:40.451 "dma_device_type": 1 00:09:40.451 }, 00:09:40.451 { 00:09:40.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.451 "dma_device_type": 2 00:09:40.451 } 00:09:40.451 ], 00:09:40.451 "driver_specific": {} 00:09:40.451 } 00:09:40.451 ] 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.451 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:09:40.709 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.709 "name": "Existed_Raid", 00:09:40.709 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:40.709 "strip_size_kb": 64, 00:09:40.709 "state": "configuring", 00:09:40.709 "raid_level": "raid0", 00:09:40.709 "superblock": true, 00:09:40.709 "num_base_bdevs": 3, 00:09:40.709 "num_base_bdevs_discovered": 2, 00:09:40.709 "num_base_bdevs_operational": 3, 00:09:40.709 "base_bdevs_list": [ 00:09:40.709 { 00:09:40.709 "name": "BaseBdev1", 00:09:40.709 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:40.709 "is_configured": true, 00:09:40.709 "data_offset": 2048, 00:09:40.709 "data_size": 63488 00:09:40.709 }, 00:09:40.709 { 00:09:40.709 "name": null, 00:09:40.709 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:40.709 "is_configured": false, 00:09:40.709 "data_offset": 2048, 00:09:40.709 "data_size": 63488 00:09:40.709 }, 00:09:40.709 { 00:09:40.709 "name": "BaseBdev3", 00:09:40.709 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:40.709 "is_configured": true, 00:09:40.709 "data_offset": 2048, 00:09:40.709 "data_size": 63488 00:09:40.709 } 00:09:40.709 ] 00:09:40.709 }' 00:09:40.709 21:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.709 21:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.273 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.273 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.537 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:41.537 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:41.537 [2024-07-15 21:45:56.712491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.810 "name": "Existed_Raid", 00:09:41.810 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:41.810 "strip_size_kb": 64, 00:09:41.810 "state": "configuring", 00:09:41.810 "raid_level": "raid0", 00:09:41.810 "superblock": true, 00:09:41.810 "num_base_bdevs": 3, 00:09:41.810 "num_base_bdevs_discovered": 1, 00:09:41.810 "num_base_bdevs_operational": 3, 00:09:41.810 "base_bdevs_list": [ 00:09:41.810 { 00:09:41.810 "name": "BaseBdev1", 00:09:41.810 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:41.810 "is_configured": true, 00:09:41.810 "data_offset": 2048, 00:09:41.810 "data_size": 63488 00:09:41.810 }, 00:09:41.810 { 00:09:41.810 "name": null, 00:09:41.810 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:41.810 "is_configured": false, 00:09:41.810 "data_offset": 2048, 00:09:41.810 "data_size": 63488 00:09:41.810 }, 00:09:41.810 { 00:09:41.810 "name": null, 00:09:41.810 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:41.810 "is_configured": false, 00:09:41.810 "data_offset": 2048, 00:09:41.810 "data_size": 63488 00:09:41.810 } 00:09:41.810 ] 00:09:41.810 }' 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.810 21:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.376 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.376 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.376 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:42.376 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:42.634 [2024-07-15 21:45:57.752547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.634 21:45:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.634 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.892 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.892 "name": "Existed_Raid", 00:09:42.892 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:42.892 "strip_size_kb": 64, 00:09:42.892 "state": "configuring", 00:09:42.892 "raid_level": "raid0", 00:09:42.892 "superblock": true, 00:09:42.892 "num_base_bdevs": 3, 00:09:42.892 "num_base_bdevs_discovered": 2, 00:09:42.892 "num_base_bdevs_operational": 3, 00:09:42.892 "base_bdevs_list": [ 00:09:42.892 { 00:09:42.892 "name": "BaseBdev1", 00:09:42.892 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:42.892 "is_configured": true, 00:09:42.892 "data_offset": 2048, 00:09:42.892 "data_size": 63488 00:09:42.892 }, 00:09:42.892 { 00:09:42.892 "name": null, 00:09:42.892 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:42.892 "is_configured": false, 00:09:42.892 "data_offset": 2048, 00:09:42.892 "data_size": 63488 00:09:42.892 }, 00:09:42.892 { 00:09:42.892 "name": "BaseBdev3", 00:09:42.892 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:42.892 "is_configured": true, 00:09:42.892 "data_offset": 2048, 00:09:42.892 "data_size": 63488 00:09:42.892 } 00:09:42.892 ] 00:09:42.892 }' 00:09:42.892 21:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.892 21:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.150 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.150 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.407 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:43.407 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:43.665 [2024-07-15 21:45:58.840660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.924 
21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.924 21:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.181 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.181 "name": "Existed_Raid", 00:09:44.181 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.181 "strip_size_kb": 64, 00:09:44.181 "state": "configuring", 00:09:44.181 "raid_level": "raid0", 00:09:44.181 "superblock": true, 00:09:44.181 "num_base_bdevs": 3, 00:09:44.181 "num_base_bdevs_discovered": 1, 00:09:44.181 "num_base_bdevs_operational": 3, 00:09:44.181 "base_bdevs_list": [ 00:09:44.181 { 00:09:44.181 "name": null, 00:09:44.181 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.181 "is_configured": false, 00:09:44.181 "data_offset": 2048, 00:09:44.181 "data_size": 63488 00:09:44.181 }, 00:09:44.181 { 00:09:44.181 "name": null, 00:09:44.181 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.181 "is_configured": false, 00:09:44.181 "data_offset": 2048, 00:09:44.181 "data_size": 63488 00:09:44.181 }, 00:09:44.181 { 00:09:44.181 "name": "BaseBdev3", 00:09:44.181 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.181 "is_configured": true, 00:09:44.181 "data_offset": 2048, 00:09:44.181 "data_size": 63488 00:09:44.181 } 00:09:44.181 ] 00:09:44.181 }' 00:09:44.181 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.181 21:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.439 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.439 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.698 [2024-07-15 21:45:59.838864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.698 21:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.956 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.956 "name": "Existed_Raid", 00:09:44.956 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.956 "strip_size_kb": 64, 00:09:44.956 "state": "configuring", 00:09:44.956 "raid_level": "raid0", 00:09:44.956 "superblock": true, 00:09:44.956 "num_base_bdevs": 3, 00:09:44.956 "num_base_bdevs_discovered": 2, 00:09:44.956 "num_base_bdevs_operational": 3, 00:09:44.956 "base_bdevs_list": [ 00:09:44.956 { 00:09:44.956 "name": null, 00:09:44.956 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.956 "is_configured": false, 00:09:44.956 "data_offset": 2048, 00:09:44.956 "data_size": 63488 00:09:44.956 }, 00:09:44.956 { 00:09:44.956 "name": "BaseBdev2", 00:09:44.956 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.956 "is_configured": true, 00:09:44.956 "data_offset": 2048, 00:09:44.956 "data_size": 63488 00:09:44.957 }, 00:09:44.957 { 00:09:44.957 "name": "BaseBdev3", 00:09:44.957 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:44.957 "is_configured": true, 00:09:44.957 "data_offset": 2048, 00:09:44.957 "data_size": 63488 00:09:44.957 } 00:09:44.957 ] 00:09:44.957 }' 00:09:44.957 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.957 21:46:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.523 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.523 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.523 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:45.523 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.523 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:46.090 21:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9d013a3d-42f3-11ef-9f7f-e9a656123a8b 00:09:46.090 [2024-07-15 21:46:01.211024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.090 [2024-07-15 21:46:01.211090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3762aa434a00 00:09:46.090 [2024-07-15 21:46:01.211095] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.090 [2024-07-15 21:46:01.211114] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3762aa497e20 00:09:46.090 [2024-07-15 21:46:01.211174] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x3762aa434a00 00:09:46.090 [2024-07-15 21:46:01.211178] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3762aa434a00 00:09:46.090 [2024-07-15 21:46:01.211197] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.090 NewBaseBdev 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:09:46.090 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.347 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.606 [ 00:09:46.606 { 00:09:46.606 "name": "NewBaseBdev", 00:09:46.606 "aliases": [ 00:09:46.606 "9d013a3d-42f3-11ef-9f7f-e9a656123a8b" 00:09:46.606 ], 00:09:46.606 "product_name": "Malloc disk", 00:09:46.606 "block_size": 512, 00:09:46.606 "num_blocks": 65536, 00:09:46.606 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:46.606 "assigned_rate_limits": { 00:09:46.606 "rw_ios_per_sec": 0, 00:09:46.606 "rw_mbytes_per_sec": 0, 00:09:46.606 "r_mbytes_per_sec": 0, 00:09:46.606 "w_mbytes_per_sec": 0 00:09:46.606 }, 00:09:46.606 "claimed": true, 00:09:46.606 "claim_type": "exclusive_write", 00:09:46.606 "zoned": false, 00:09:46.606 "supported_io_types": { 00:09:46.606 "read": true, 00:09:46.606 "write": true, 00:09:46.606 "unmap": true, 00:09:46.606 "flush": true, 00:09:46.606 "reset": true, 00:09:46.606 "nvme_admin": false, 00:09:46.606 "nvme_io": false, 00:09:46.606 "nvme_io_md": false, 00:09:46.606 "write_zeroes": true, 00:09:46.606 "zcopy": true, 00:09:46.606 "get_zone_info": false, 00:09:46.606 "zone_management": false, 00:09:46.606 "zone_append": false, 00:09:46.606 "compare": false, 00:09:46.606 "compare_and_write": false, 00:09:46.606 "abort": true, 00:09:46.606 "seek_hole": false, 00:09:46.606 "seek_data": false, 00:09:46.606 "copy": true, 00:09:46.606 "nvme_iov_md": false 00:09:46.606 }, 00:09:46.606 "memory_domains": [ 00:09:46.606 { 00:09:46.606 "dma_device_id": "system", 00:09:46.606 "dma_device_type": 1 00:09:46.606 }, 00:09:46.606 { 00:09:46.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.606 "dma_device_type": 2 00:09:46.606 } 00:09:46.606 ], 00:09:46.606 "driver_specific": {} 00:09:46.606 } 00:09:46.606 ] 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.606 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.879 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.879 "name": "Existed_Raid", 00:09:46.879 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:46.879 "strip_size_kb": 64, 00:09:46.879 "state": "online", 00:09:46.879 "raid_level": "raid0", 00:09:46.879 "superblock": true, 00:09:46.879 "num_base_bdevs": 3, 00:09:46.879 "num_base_bdevs_discovered": 3, 00:09:46.879 "num_base_bdevs_operational": 3, 00:09:46.879 "base_bdevs_list": [ 00:09:46.879 { 00:09:46.879 "name": "NewBaseBdev", 00:09:46.879 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:46.879 "is_configured": true, 00:09:46.879 "data_offset": 2048, 00:09:46.879 "data_size": 63488 00:09:46.879 }, 00:09:46.879 { 00:09:46.879 "name": "BaseBdev2", 00:09:46.879 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:46.879 "is_configured": true, 00:09:46.879 "data_offset": 2048, 00:09:46.879 "data_size": 63488 00:09:46.879 }, 00:09:46.879 { 00:09:46.879 "name": "BaseBdev3", 00:09:46.879 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:46.879 "is_configured": true, 00:09:46.879 "data_offset": 2048, 00:09:46.879 "data_size": 63488 00:09:46.879 } 00:09:46.879 ] 00:09:46.879 }' 00:09:46.879 21:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.879 21:46:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:47.137 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
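verify_raid_bdev_properties (bdev_raid.sh@194-208) is the other recurring check in this trace: dump the raid volume, collect its configured base bdevs, and confirm each one matches the volume's geometry. A condensed sketch assembled from the rpc.py and jq filters that appear verbatim in the surrounding entries; the per-field loop compresses the four separate @205-@208 comparisons and is a reading aid, not the exact upstream code.

    verify_raid_bdev_properties() {
        local raid_bdev_name=$1
        local rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        local raid_bdev_info base_bdev_info base_bdev_names name field

        raid_bdev_info=$($rpc_py bdev_get_bdevs -b "$raid_bdev_name" | jq '.[]')
        # bdev_raid.sh@201: names of every configured base bdev
        base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
            | select(.is_configured == true).name' <<< "$raid_bdev_info")

        for name in $base_bdev_names; do
            base_bdev_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
            # @205-@208: block_size must match (512 == 512 here); md_size,
            # md_interleave and dif_type are null on both sides for these
            # malloc-backed bdevs
            for field in block_size md_size md_interleave dif_type; do
                [[ $(jq ".$field" <<< "$base_bdev_info") == \
                    $(jq ".$field" <<< "$raid_bdev_info") ]]
            done
        done
    }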
00:09:47.396 [2024-07-15 21:46:02.442984] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.396 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:47.396 "name": "Existed_Raid", 00:09:47.396 "aliases": [ 00:09:47.396 "9be60130-42f3-11ef-9f7f-e9a656123a8b" 00:09:47.396 ], 00:09:47.396 "product_name": "Raid Volume", 00:09:47.396 "block_size": 512, 00:09:47.396 "num_blocks": 190464, 00:09:47.396 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.396 "assigned_rate_limits": { 00:09:47.396 "rw_ios_per_sec": 0, 00:09:47.396 "rw_mbytes_per_sec": 0, 00:09:47.396 "r_mbytes_per_sec": 0, 00:09:47.396 "w_mbytes_per_sec": 0 00:09:47.396 }, 00:09:47.396 "claimed": false, 00:09:47.396 "zoned": false, 00:09:47.396 "supported_io_types": { 00:09:47.396 "read": true, 00:09:47.396 "write": true, 00:09:47.396 "unmap": true, 00:09:47.397 "flush": true, 00:09:47.397 "reset": true, 00:09:47.397 "nvme_admin": false, 00:09:47.397 "nvme_io": false, 00:09:47.397 "nvme_io_md": false, 00:09:47.397 "write_zeroes": true, 00:09:47.397 "zcopy": false, 00:09:47.397 "get_zone_info": false, 00:09:47.397 "zone_management": false, 00:09:47.397 "zone_append": false, 00:09:47.397 "compare": false, 00:09:47.397 "compare_and_write": false, 00:09:47.397 "abort": false, 00:09:47.397 "seek_hole": false, 00:09:47.397 "seek_data": false, 00:09:47.397 "copy": false, 00:09:47.397 "nvme_iov_md": false 00:09:47.397 }, 00:09:47.397 "memory_domains": [ 00:09:47.397 { 00:09:47.397 "dma_device_id": "system", 00:09:47.397 "dma_device_type": 1 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.397 "dma_device_type": 2 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "system", 00:09:47.397 "dma_device_type": 1 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.397 "dma_device_type": 2 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "system", 00:09:47.397 "dma_device_type": 1 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.397 "dma_device_type": 2 00:09:47.397 } 00:09:47.397 ], 00:09:47.397 "driver_specific": { 00:09:47.397 "raid": { 00:09:47.397 "uuid": "9be60130-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.397 "strip_size_kb": 64, 00:09:47.397 "state": "online", 00:09:47.397 "raid_level": "raid0", 00:09:47.397 "superblock": true, 00:09:47.397 "num_base_bdevs": 3, 00:09:47.397 "num_base_bdevs_discovered": 3, 00:09:47.397 "num_base_bdevs_operational": 3, 00:09:47.397 "base_bdevs_list": [ 00:09:47.397 { 00:09:47.397 "name": "NewBaseBdev", 00:09:47.397 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.397 "is_configured": true, 00:09:47.397 "data_offset": 2048, 00:09:47.397 "data_size": 63488 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "name": "BaseBdev2", 00:09:47.397 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.397 "is_configured": true, 00:09:47.397 "data_offset": 2048, 00:09:47.397 "data_size": 63488 00:09:47.397 }, 00:09:47.397 { 00:09:47.397 "name": "BaseBdev3", 00:09:47.397 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.397 "is_configured": true, 00:09:47.397 "data_offset": 2048, 00:09:47.397 "data_size": 63488 00:09:47.397 } 00:09:47.397 ] 00:09:47.397 } 00:09:47.397 } 00:09:47.397 }' 00:09:47.397 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.397 
21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:47.397 BaseBdev2 00:09:47.397 BaseBdev3' 00:09:47.397 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:47.397 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.397 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.677 "name": "NewBaseBdev", 00:09:47.677 "aliases": [ 00:09:47.677 "9d013a3d-42f3-11ef-9f7f-e9a656123a8b" 00:09:47.677 ], 00:09:47.677 "product_name": "Malloc disk", 00:09:47.677 "block_size": 512, 00:09:47.677 "num_blocks": 65536, 00:09:47.677 "uuid": "9d013a3d-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.677 "assigned_rate_limits": { 00:09:47.677 "rw_ios_per_sec": 0, 00:09:47.677 "rw_mbytes_per_sec": 0, 00:09:47.677 "r_mbytes_per_sec": 0, 00:09:47.677 "w_mbytes_per_sec": 0 00:09:47.677 }, 00:09:47.677 "claimed": true, 00:09:47.677 "claim_type": "exclusive_write", 00:09:47.677 "zoned": false, 00:09:47.677 "supported_io_types": { 00:09:47.677 "read": true, 00:09:47.677 "write": true, 00:09:47.677 "unmap": true, 00:09:47.677 "flush": true, 00:09:47.677 "reset": true, 00:09:47.677 "nvme_admin": false, 00:09:47.677 "nvme_io": false, 00:09:47.677 "nvme_io_md": false, 00:09:47.677 "write_zeroes": true, 00:09:47.677 "zcopy": true, 00:09:47.677 "get_zone_info": false, 00:09:47.677 "zone_management": false, 00:09:47.677 "zone_append": false, 00:09:47.677 "compare": false, 00:09:47.677 "compare_and_write": false, 00:09:47.677 "abort": true, 00:09:47.677 "seek_hole": false, 00:09:47.677 "seek_data": false, 00:09:47.677 "copy": true, 00:09:47.677 "nvme_iov_md": false 00:09:47.677 }, 00:09:47.677 "memory_domains": [ 00:09:47.677 { 00:09:47.677 "dma_device_id": "system", 00:09:47.677 "dma_device_type": 1 00:09:47.677 }, 00:09:47.677 { 00:09:47.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.677 "dma_device_type": 2 00:09:47.677 } 00:09:47.677 ], 00:09:47.677 "driver_specific": {} 00:09:47.677 }' 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.677 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:47.678 21:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.935 "name": "BaseBdev2", 00:09:47.935 "aliases": [ 00:09:47.935 "9af92fe8-42f3-11ef-9f7f-e9a656123a8b" 00:09:47.935 ], 00:09:47.935 "product_name": "Malloc disk", 00:09:47.935 "block_size": 512, 00:09:47.935 "num_blocks": 65536, 00:09:47.935 "uuid": "9af92fe8-42f3-11ef-9f7f-e9a656123a8b", 00:09:47.935 "assigned_rate_limits": { 00:09:47.935 "rw_ios_per_sec": 0, 00:09:47.935 "rw_mbytes_per_sec": 0, 00:09:47.935 "r_mbytes_per_sec": 0, 00:09:47.935 "w_mbytes_per_sec": 0 00:09:47.935 }, 00:09:47.935 "claimed": true, 00:09:47.935 "claim_type": "exclusive_write", 00:09:47.935 "zoned": false, 00:09:47.935 "supported_io_types": { 00:09:47.935 "read": true, 00:09:47.935 "write": true, 00:09:47.935 "unmap": true, 00:09:47.935 "flush": true, 00:09:47.935 "reset": true, 00:09:47.935 "nvme_admin": false, 00:09:47.935 "nvme_io": false, 00:09:47.935 "nvme_io_md": false, 00:09:47.935 "write_zeroes": true, 00:09:47.935 "zcopy": true, 00:09:47.935 "get_zone_info": false, 00:09:47.935 "zone_management": false, 00:09:47.935 "zone_append": false, 00:09:47.935 "compare": false, 00:09:47.935 "compare_and_write": false, 00:09:47.935 "abort": true, 00:09:47.935 "seek_hole": false, 00:09:47.935 "seek_data": false, 00:09:47.935 "copy": true, 00:09:47.935 "nvme_iov_md": false 00:09:47.935 }, 00:09:47.935 "memory_domains": [ 00:09:47.935 { 00:09:47.935 "dma_device_id": "system", 00:09:47.935 "dma_device_type": 1 00:09:47.935 }, 00:09:47.935 { 00:09:47.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.935 "dma_device_type": 2 00:09:47.935 } 00:09:47.935 ], 00:09:47.935 "driver_specific": {} 00:09:47.935 }' 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.935 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:48.193 21:46:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:48.193 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:48.450 "name": "BaseBdev3", 00:09:48.450 "aliases": [ 00:09:48.450 "9b6ab689-42f3-11ef-9f7f-e9a656123a8b" 00:09:48.450 ], 00:09:48.450 "product_name": "Malloc disk", 00:09:48.450 "block_size": 512, 00:09:48.450 "num_blocks": 65536, 00:09:48.450 "uuid": "9b6ab689-42f3-11ef-9f7f-e9a656123a8b", 00:09:48.450 "assigned_rate_limits": { 00:09:48.450 "rw_ios_per_sec": 0, 00:09:48.450 "rw_mbytes_per_sec": 0, 00:09:48.450 "r_mbytes_per_sec": 0, 00:09:48.450 "w_mbytes_per_sec": 0 00:09:48.450 }, 00:09:48.450 "claimed": true, 00:09:48.450 "claim_type": "exclusive_write", 00:09:48.450 "zoned": false, 00:09:48.450 "supported_io_types": { 00:09:48.450 "read": true, 00:09:48.450 "write": true, 00:09:48.450 "unmap": true, 00:09:48.450 "flush": true, 00:09:48.450 "reset": true, 00:09:48.450 "nvme_admin": false, 00:09:48.450 "nvme_io": false, 00:09:48.450 "nvme_io_md": false, 00:09:48.450 "write_zeroes": true, 00:09:48.450 "zcopy": true, 00:09:48.450 "get_zone_info": false, 00:09:48.450 "zone_management": false, 00:09:48.450 "zone_append": false, 00:09:48.450 "compare": false, 00:09:48.450 "compare_and_write": false, 00:09:48.450 "abort": true, 00:09:48.450 "seek_hole": false, 00:09:48.450 "seek_data": false, 00:09:48.450 "copy": true, 00:09:48.450 "nvme_iov_md": false 00:09:48.450 }, 00:09:48.450 "memory_domains": [ 00:09:48.450 { 00:09:48.450 "dma_device_id": "system", 00:09:48.450 "dma_device_type": 1 00:09:48.450 }, 00:09:48.450 { 00:09:48.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.450 "dma_device_type": 2 00:09:48.450 } 00:09:48.450 ], 00:09:48.450 "driver_specific": {} 00:09:48.450 }' 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:48.450 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:48.707 [2024-07-15 21:46:03.735025] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:48.707 [2024-07-15 21:46:03.735056] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.707 [2024-07-15 21:46:03.735093] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.707 [2024-07-15 21:46:03.735106] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.707 [2024-07-15 21:46:03.735110] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3762aa434a00 name Existed_Raid, state offline 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52701 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 52701 ']' 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 52701 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 52701 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:09:48.707 killing process with pid 52701 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 52701' 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 52701 00:09:48.707 [2024-07-15 21:46:03.764496] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.707 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 52701 00:09:48.707 [2024-07-15 21:46:03.782117] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.965 21:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:48.965 00:09:48.965 real 0m23.521s 00:09:48.965 user 0m42.677s 00:09:48.965 sys 0m3.512s 00:09:48.965 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:48.965 21:46:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.965 ************************************ 00:09:48.965 END TEST raid_state_function_test_sb 00:09:48.965 ************************************ 00:09:48.965 21:46:04 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:09:48.965 21:46:04 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:48.965 21:46:04 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:09:48.965 21:46:04 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:48.965 21:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.965 ************************************ 00:09:48.965 START TEST raid_superblock_test 00:09:48.965 ************************************ 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid0 3 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53429 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53429 /var/tmp/spdk-raid.sock 00:09:48.965 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 53429 ']' 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:48.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:48.966 21:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.966 [2024-07-15 21:46:04.031038] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:09:48.966 [2024-07-15 21:46:04.031233] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:49.532 EAL: TSC is not safe to use in SMP mode 00:09:49.533 EAL: TSC is not invariant 00:09:49.533 [2024-07-15 21:46:04.582329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.533 [2024-07-15 21:46:04.664581] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
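The fixture that the trace below walks through can be replayed by hand against this bdev_svc instance. A minimal sketch, assuming the daemon started above is still listening on /var/tmp/spdk-raid.sock; every RPC and argument is taken verbatim from the @424, @425 and @429 entries that follow.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3; do
        # bdev_raid.sh@424: 32 MiB malloc bdev with 512-byte blocks
        $rpc_py bdev_malloc_create 32 512 -b malloc$i
        # bdev_raid.sh@425: passthru wrapper with a fixed, predictable UUID
        $rpc_py bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done

    # bdev_raid.sh@429: raid0 across the three passthru bdevs, 64 KiB strip,
    # on-disk superblock enabled (-s)
    $rpc_py bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The sizes line up with the dumps later in the trace: each malloc bdev is 32 MiB at 512-byte blocks (65536 blocks), the superblock reserves 2048 blocks per leg (data_offset 2048, data_size 63488, and 2048 + 63488 = 65536), and raid0 across three legs exposes 3 x 63488 = 190464 blocks, the blockcnt reported when raid_bdev1 is configured.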
00:09:49.533 [2024-07-15 21:46:04.666671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.533 [2024-07-15 21:46:04.667502] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.533 [2024-07-15 21:46:04.667517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.098 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:50.355 malloc1 00:09:50.355 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.613 [2024-07-15 21:46:05.659854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.613 [2024-07-15 21:46:05.659917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.613 [2024-07-15 21:46:05.659946] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8634780 00:09:50.613 [2024-07-15 21:46:05.659954] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.613 [2024-07-15 21:46:05.660932] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.613 [2024-07-15 21:46:05.660957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.613 pt1 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.613 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.613 21:46:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:50.870 malloc2 00:09:50.870 21:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:51.127 [2024-07-15 21:46:06.235888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:51.127 [2024-07-15 21:46:06.235962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.127 [2024-07-15 21:46:06.235990] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8634c80 00:09:51.127 [2024-07-15 21:46:06.235997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.127 [2024-07-15 21:46:06.236654] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.127 [2024-07-15 21:46:06.236681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:51.127 pt2 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:51.127 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:51.384 malloc3 00:09:51.384 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:51.642 [2024-07-15 21:46:06.719891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:51.642 [2024-07-15 21:46:06.719978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.642 [2024-07-15 21:46:06.719991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8635180 00:09:51.642 [2024-07-15 21:46:06.719998] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.642 [2024-07-15 21:46:06.720636] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.642 [2024-07-15 21:46:06.720660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:51.642 pt3 00:09:51.642 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:51.642 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:51.642 21:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:51.901 [2024-07-15 21:46:06.999900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.901 [2024-07-15 21:46:07.000490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.901 [2024-07-15 21:46:07.000511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:51.901 [2024-07-15 21:46:07.000562] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f41e8635400 00:09:51.901 [2024-07-15 21:46:07.000569] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:51.901 [2024-07-15 21:46:07.000601] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f41e8697e20 00:09:51.901 [2024-07-15 21:46:07.000675] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f41e8635400 00:09:51.901 [2024-07-15 21:46:07.000680] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f41e8635400 00:09:51.901 [2024-07-15 21:46:07.000707] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.901 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.158 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.158 "name": "raid_bdev1", 00:09:52.158 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:52.158 "strip_size_kb": 64, 00:09:52.158 "state": "online", 00:09:52.158 "raid_level": "raid0", 00:09:52.158 "superblock": true, 00:09:52.158 "num_base_bdevs": 3, 00:09:52.158 "num_base_bdevs_discovered": 3, 00:09:52.158 "num_base_bdevs_operational": 3, 00:09:52.158 "base_bdevs_list": [ 00:09:52.158 { 00:09:52.158 "name": "pt1", 00:09:52.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.158 "is_configured": true, 00:09:52.158 "data_offset": 2048, 00:09:52.158 "data_size": 63488 00:09:52.158 }, 00:09:52.158 { 00:09:52.159 "name": "pt2", 00:09:52.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.159 "is_configured": true, 00:09:52.159 
"data_offset": 2048, 00:09:52.159 "data_size": 63488 00:09:52.159 }, 00:09:52.159 { 00:09:52.159 "name": "pt3", 00:09:52.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.159 "is_configured": true, 00:09:52.159 "data_offset": 2048, 00:09:52.159 "data_size": 63488 00:09:52.159 } 00:09:52.159 ] 00:09:52.159 }' 00:09:52.159 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.159 21:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:52.724 [2024-07-15 21:46:07.827971] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.724 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:52.724 "name": "raid_bdev1", 00:09:52.724 "aliases": [ 00:09:52.724 "a422b554-42f3-11ef-9f7f-e9a656123a8b" 00:09:52.724 ], 00:09:52.724 "product_name": "Raid Volume", 00:09:52.724 "block_size": 512, 00:09:52.724 "num_blocks": 190464, 00:09:52.724 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:52.724 "assigned_rate_limits": { 00:09:52.724 "rw_ios_per_sec": 0, 00:09:52.724 "rw_mbytes_per_sec": 0, 00:09:52.724 "r_mbytes_per_sec": 0, 00:09:52.724 "w_mbytes_per_sec": 0 00:09:52.724 }, 00:09:52.724 "claimed": false, 00:09:52.724 "zoned": false, 00:09:52.724 "supported_io_types": { 00:09:52.724 "read": true, 00:09:52.724 "write": true, 00:09:52.724 "unmap": true, 00:09:52.724 "flush": true, 00:09:52.724 "reset": true, 00:09:52.724 "nvme_admin": false, 00:09:52.724 "nvme_io": false, 00:09:52.724 "nvme_io_md": false, 00:09:52.724 "write_zeroes": true, 00:09:52.724 "zcopy": false, 00:09:52.724 "get_zone_info": false, 00:09:52.724 "zone_management": false, 00:09:52.724 "zone_append": false, 00:09:52.724 "compare": false, 00:09:52.724 "compare_and_write": false, 00:09:52.724 "abort": false, 00:09:52.724 "seek_hole": false, 00:09:52.724 "seek_data": false, 00:09:52.724 "copy": false, 00:09:52.724 "nvme_iov_md": false 00:09:52.724 }, 00:09:52.724 "memory_domains": [ 00:09:52.724 { 00:09:52.724 "dma_device_id": "system", 00:09:52.724 "dma_device_type": 1 00:09:52.724 }, 00:09:52.724 { 00:09:52.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.724 "dma_device_type": 2 00:09:52.724 }, 00:09:52.724 { 00:09:52.724 "dma_device_id": "system", 00:09:52.724 "dma_device_type": 1 00:09:52.724 }, 00:09:52.724 { 00:09:52.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.724 "dma_device_type": 2 00:09:52.724 }, 00:09:52.724 { 00:09:52.724 "dma_device_id": "system", 00:09:52.724 "dma_device_type": 1 00:09:52.724 }, 00:09:52.724 { 00:09:52.724 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:52.724 "dma_device_type": 2 00:09:52.724 } 00:09:52.724 ], 00:09:52.724 "driver_specific": { 00:09:52.724 "raid": { 00:09:52.725 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:52.725 "strip_size_kb": 64, 00:09:52.725 "state": "online", 00:09:52.725 "raid_level": "raid0", 00:09:52.725 "superblock": true, 00:09:52.725 "num_base_bdevs": 3, 00:09:52.725 "num_base_bdevs_discovered": 3, 00:09:52.725 "num_base_bdevs_operational": 3, 00:09:52.725 "base_bdevs_list": [ 00:09:52.725 { 00:09:52.725 "name": "pt1", 00:09:52.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.725 "is_configured": true, 00:09:52.725 "data_offset": 2048, 00:09:52.725 "data_size": 63488 00:09:52.725 }, 00:09:52.725 { 00:09:52.725 "name": "pt2", 00:09:52.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.725 "is_configured": true, 00:09:52.725 "data_offset": 2048, 00:09:52.725 "data_size": 63488 00:09:52.725 }, 00:09:52.725 { 00:09:52.725 "name": "pt3", 00:09:52.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.725 "is_configured": true, 00:09:52.725 "data_offset": 2048, 00:09:52.725 "data_size": 63488 00:09:52.725 } 00:09:52.725 ] 00:09:52.725 } 00:09:52.725 } 00:09:52.725 }' 00:09:52.725 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.725 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:52.725 pt2 00:09:52.725 pt3' 00:09:52.725 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.725 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.725 21:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.983 "name": "pt1", 00:09:52.983 "aliases": [ 00:09:52.983 "00000000-0000-0000-0000-000000000001" 00:09:52.983 ], 00:09:52.983 "product_name": "passthru", 00:09:52.983 "block_size": 512, 00:09:52.983 "num_blocks": 65536, 00:09:52.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.983 "assigned_rate_limits": { 00:09:52.983 "rw_ios_per_sec": 0, 00:09:52.983 "rw_mbytes_per_sec": 0, 00:09:52.983 "r_mbytes_per_sec": 0, 00:09:52.983 "w_mbytes_per_sec": 0 00:09:52.983 }, 00:09:52.983 "claimed": true, 00:09:52.983 "claim_type": "exclusive_write", 00:09:52.983 "zoned": false, 00:09:52.983 "supported_io_types": { 00:09:52.983 "read": true, 00:09:52.983 "write": true, 00:09:52.983 "unmap": true, 00:09:52.983 "flush": true, 00:09:52.983 "reset": true, 00:09:52.983 "nvme_admin": false, 00:09:52.983 "nvme_io": false, 00:09:52.983 "nvme_io_md": false, 00:09:52.983 "write_zeroes": true, 00:09:52.983 "zcopy": true, 00:09:52.983 "get_zone_info": false, 00:09:52.983 "zone_management": false, 00:09:52.983 "zone_append": false, 00:09:52.983 "compare": false, 00:09:52.983 "compare_and_write": false, 00:09:52.983 "abort": true, 00:09:52.983 "seek_hole": false, 00:09:52.983 "seek_data": false, 00:09:52.983 "copy": true, 00:09:52.983 "nvme_iov_md": false 00:09:52.983 }, 00:09:52.983 "memory_domains": [ 00:09:52.983 { 00:09:52.983 "dma_device_id": "system", 00:09:52.983 "dma_device_type": 1 00:09:52.983 }, 00:09:52.983 { 00:09:52.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.983 "dma_device_type": 2 
00:09:52.983 } 00:09:52.983 ], 00:09:52.983 "driver_specific": { 00:09:52.983 "passthru": { 00:09:52.983 "name": "pt1", 00:09:52.983 "base_bdev_name": "malloc1" 00:09:52.983 } 00:09:52.983 } 00:09:52.983 }' 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:52.983 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:53.241 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:53.241 "name": "pt2", 00:09:53.241 "aliases": [ 00:09:53.241 "00000000-0000-0000-0000-000000000002" 00:09:53.241 ], 00:09:53.241 "product_name": "passthru", 00:09:53.241 "block_size": 512, 00:09:53.241 "num_blocks": 65536, 00:09:53.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.241 "assigned_rate_limits": { 00:09:53.241 "rw_ios_per_sec": 0, 00:09:53.241 "rw_mbytes_per_sec": 0, 00:09:53.241 "r_mbytes_per_sec": 0, 00:09:53.241 "w_mbytes_per_sec": 0 00:09:53.241 }, 00:09:53.241 "claimed": true, 00:09:53.241 "claim_type": "exclusive_write", 00:09:53.241 "zoned": false, 00:09:53.241 "supported_io_types": { 00:09:53.241 "read": true, 00:09:53.241 "write": true, 00:09:53.241 "unmap": true, 00:09:53.241 "flush": true, 00:09:53.241 "reset": true, 00:09:53.241 "nvme_admin": false, 00:09:53.241 "nvme_io": false, 00:09:53.241 "nvme_io_md": false, 00:09:53.241 "write_zeroes": true, 00:09:53.241 "zcopy": true, 00:09:53.241 "get_zone_info": false, 00:09:53.241 "zone_management": false, 00:09:53.241 "zone_append": false, 00:09:53.241 "compare": false, 00:09:53.241 "compare_and_write": false, 00:09:53.241 "abort": true, 00:09:53.241 "seek_hole": false, 00:09:53.241 "seek_data": false, 00:09:53.241 "copy": true, 00:09:53.241 "nvme_iov_md": false 00:09:53.241 }, 00:09:53.241 "memory_domains": [ 00:09:53.241 { 00:09:53.241 "dma_device_id": "system", 00:09:53.241 "dma_device_type": 1 00:09:53.241 }, 00:09:53.241 { 00:09:53.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.241 "dma_device_type": 2 00:09:53.241 } 00:09:53.241 ], 00:09:53.241 "driver_specific": { 00:09:53.241 "passthru": { 00:09:53.241 "name": "pt2", 00:09:53.241 "base_bdev_name": 
"malloc2" 00:09:53.241 } 00:09:53.241 } 00:09:53.241 }' 00:09:53.241 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:53.499 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:53.757 "name": "pt3", 00:09:53.757 "aliases": [ 00:09:53.757 "00000000-0000-0000-0000-000000000003" 00:09:53.757 ], 00:09:53.757 "product_name": "passthru", 00:09:53.757 "block_size": 512, 00:09:53.757 "num_blocks": 65536, 00:09:53.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.757 "assigned_rate_limits": { 00:09:53.757 "rw_ios_per_sec": 0, 00:09:53.757 "rw_mbytes_per_sec": 0, 00:09:53.757 "r_mbytes_per_sec": 0, 00:09:53.757 "w_mbytes_per_sec": 0 00:09:53.757 }, 00:09:53.757 "claimed": true, 00:09:53.757 "claim_type": "exclusive_write", 00:09:53.757 "zoned": false, 00:09:53.757 "supported_io_types": { 00:09:53.757 "read": true, 00:09:53.757 "write": true, 00:09:53.757 "unmap": true, 00:09:53.757 "flush": true, 00:09:53.757 "reset": true, 00:09:53.757 "nvme_admin": false, 00:09:53.757 "nvme_io": false, 00:09:53.757 "nvme_io_md": false, 00:09:53.757 "write_zeroes": true, 00:09:53.757 "zcopy": true, 00:09:53.757 "get_zone_info": false, 00:09:53.757 "zone_management": false, 00:09:53.757 "zone_append": false, 00:09:53.757 "compare": false, 00:09:53.757 "compare_and_write": false, 00:09:53.757 "abort": true, 00:09:53.757 "seek_hole": false, 00:09:53.757 "seek_data": false, 00:09:53.757 "copy": true, 00:09:53.757 "nvme_iov_md": false 00:09:53.757 }, 00:09:53.757 "memory_domains": [ 00:09:53.757 { 00:09:53.757 "dma_device_id": "system", 00:09:53.757 "dma_device_type": 1 00:09:53.757 }, 00:09:53.757 { 00:09:53.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.757 "dma_device_type": 2 00:09:53.757 } 00:09:53.757 ], 00:09:53.757 "driver_specific": { 00:09:53.757 "passthru": { 00:09:53.757 "name": "pt3", 00:09:53.757 "base_bdev_name": "malloc3" 00:09:53.757 } 00:09:53.757 } 00:09:53.757 }' 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:53.757 21:46:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:54.015 [2024-07-15 21:46:09.088021] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.015 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a422b554-42f3-11ef-9f7f-e9a656123a8b 00:09:54.015 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a422b554-42f3-11ef-9f7f-e9a656123a8b ']' 00:09:54.015 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:54.272 [2024-07-15 21:46:09.420012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.272 [2024-07-15 21:46:09.420036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.272 [2024-07-15 21:46:09.420074] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.272 [2024-07-15 21:46:09.420088] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.272 [2024-07-15 21:46:09.420098] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f41e8635400 name raid_bdev1, state offline 00:09:54.272 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.272 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:54.529 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:54.529 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:54.529 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.529 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:54.785 21:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.785 21:46:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:55.041 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.041 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:55.298 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:55.298 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:55.555 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:55.812 [2024-07-15 21:46:10.912054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.812 [2024-07-15 21:46:10.912647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:55.812 [2024-07-15 21:46:10.912667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.812 [2024-07-15 21:46:10.912681] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.812 [2024-07-15 21:46:10.912718] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.812 [2024-07-15 21:46:10.912729] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:09:55.812 [2024-07-15 21:46:10.912738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.812 [2024-07-15 21:46:10.912742] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f41e8635180 name raid_bdev1, state configuring 00:09:55.812 request: 00:09:55.812 { 00:09:55.812 "name": "raid_bdev1", 00:09:55.812 "raid_level": "raid0", 00:09:55.812 "base_bdevs": [ 00:09:55.812 "malloc1", 00:09:55.812 "malloc2", 00:09:55.812 "malloc3" 00:09:55.812 ], 00:09:55.812 "strip_size_kb": 64, 00:09:55.812 "superblock": false, 00:09:55.812 "method": "bdev_raid_create", 00:09:55.812 "req_id": 1 00:09:55.812 } 00:09:55.812 Got JSON-RPC error response 00:09:55.812 response: 00:09:55.812 { 00:09:55.812 "code": -17, 00:09:55.812 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.812 } 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.812 21:46:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:56.069 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:56.069 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:56.069 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:56.325 [2024-07-15 21:46:11.440048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:56.325 [2024-07-15 21:46:11.440110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.325 [2024-07-15 21:46:11.440138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8634c80 00:09:56.325 [2024-07-15 21:46:11.440146] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.325 [2024-07-15 21:46:11.440775] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.325 [2024-07-15 21:46:11.440802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:56.325 [2024-07-15 21:46:11.440828] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:56.325 [2024-07-15 21:46:11.440839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:56.325 pt1 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.325 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.582 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:56.582 "name": "raid_bdev1", 00:09:56.582 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:56.582 "strip_size_kb": 64, 00:09:56.582 "state": "configuring", 00:09:56.582 "raid_level": "raid0", 00:09:56.582 "superblock": true, 00:09:56.582 "num_base_bdevs": 3, 00:09:56.582 "num_base_bdevs_discovered": 1, 00:09:56.582 "num_base_bdevs_operational": 3, 00:09:56.582 "base_bdevs_list": [ 00:09:56.582 { 00:09:56.582 "name": "pt1", 00:09:56.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.582 "is_configured": true, 00:09:56.582 "data_offset": 2048, 00:09:56.582 "data_size": 63488 00:09:56.582 }, 00:09:56.582 { 00:09:56.582 "name": null, 00:09:56.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.582 "is_configured": false, 00:09:56.582 "data_offset": 2048, 00:09:56.582 "data_size": 63488 00:09:56.582 }, 00:09:56.582 { 00:09:56.582 "name": null, 00:09:56.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.582 "is_configured": false, 00:09:56.582 "data_offset": 2048, 00:09:56.582 "data_size": 63488 00:09:56.582 } 00:09:56.582 ] 00:09:56.582 }' 00:09:56.582 21:46:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:56.582 21:46:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.145 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:09:57.145 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.401 [2024-07-15 21:46:12.344129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.401 [2024-07-15 21:46:12.344218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.401 [2024-07-15 21:46:12.344245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8635680 00:09:57.401 [2024-07-15 21:46:12.344253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.401 [2024-07-15 21:46:12.344366] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.401 [2024-07-15 21:46:12.344376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.401 [2024-07-15 21:46:12.344401] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:57.401 [2024-07-15 21:46:12.344410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.401 
pt2 00:09:57.401 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:57.401 [2024-07-15 21:46:12.584139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.658 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.915 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.915 "name": "raid_bdev1", 00:09:57.915 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:57.915 "strip_size_kb": 64, 00:09:57.915 "state": "configuring", 00:09:57.915 "raid_level": "raid0", 00:09:57.915 "superblock": true, 00:09:57.915 "num_base_bdevs": 3, 00:09:57.915 "num_base_bdevs_discovered": 1, 00:09:57.915 "num_base_bdevs_operational": 3, 00:09:57.915 "base_bdevs_list": [ 00:09:57.915 { 00:09:57.915 "name": "pt1", 00:09:57.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.915 "is_configured": true, 00:09:57.915 "data_offset": 2048, 00:09:57.915 "data_size": 63488 00:09:57.915 }, 00:09:57.915 { 00:09:57.915 "name": null, 00:09:57.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.915 "is_configured": false, 00:09:57.915 "data_offset": 2048, 00:09:57.915 "data_size": 63488 00:09:57.915 }, 00:09:57.915 { 00:09:57.915 "name": null, 00:09:57.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.915 "is_configured": false, 00:09:57.915 "data_offset": 2048, 00:09:57.915 "data_size": 63488 00:09:57.915 } 00:09:57.915 ] 00:09:57.915 }' 00:09:57.915 21:46:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.915 21:46:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:58.183 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:58.183 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.441 [2024-07-15 
21:46:13.400181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.441 [2024-07-15 21:46:13.400251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.441 [2024-07-15 21:46:13.400279] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8635680 00:09:58.441 [2024-07-15 21:46:13.400287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.441 [2024-07-15 21:46:13.400407] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.441 [2024-07-15 21:46:13.400417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.441 [2024-07-15 21:46:13.400441] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:58.441 [2024-07-15 21:46:13.400450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.441 pt2 00:09:58.441 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:58.441 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:58.441 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:58.700 [2024-07-15 21:46:13.636188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:58.700 [2024-07-15 21:46:13.636252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.700 [2024-07-15 21:46:13.636279] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f41e8635400 00:09:58.700 [2024-07-15 21:46:13.636287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.700 [2024-07-15 21:46:13.636396] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.700 [2024-07-15 21:46:13.636406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:58.700 [2024-07-15 21:46:13.636430] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:58.700 [2024-07-15 21:46:13.636438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:58.700 [2024-07-15 21:46:13.636466] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f41e8634780 00:09:58.700 [2024-07-15 21:46:13.636470] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:58.700 [2024-07-15 21:46:13.636490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f41e8697e20 00:09:58.700 [2024-07-15 21:46:13.636542] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f41e8634780 00:09:58.700 [2024-07-15 21:46:13.636547] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f41e8634780 00:09:58.700 [2024-07-15 21:46:13.636568] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.700 pt3 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.700 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.958 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.958 "name": "raid_bdev1", 00:09:58.958 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:58.958 "strip_size_kb": 64, 00:09:58.958 "state": "online", 00:09:58.958 "raid_level": "raid0", 00:09:58.958 "superblock": true, 00:09:58.958 "num_base_bdevs": 3, 00:09:58.958 "num_base_bdevs_discovered": 3, 00:09:58.958 "num_base_bdevs_operational": 3, 00:09:58.958 "base_bdevs_list": [ 00:09:58.958 { 00:09:58.958 "name": "pt1", 00:09:58.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.958 "is_configured": true, 00:09:58.958 "data_offset": 2048, 00:09:58.958 "data_size": 63488 00:09:58.958 }, 00:09:58.958 { 00:09:58.958 "name": "pt2", 00:09:58.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.958 "is_configured": true, 00:09:58.958 "data_offset": 2048, 00:09:58.958 "data_size": 63488 00:09:58.958 }, 00:09:58.958 { 00:09:58.958 "name": "pt3", 00:09:58.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.958 "is_configured": true, 00:09:58.958 "data_offset": 2048, 00:09:58.958 "data_size": 63488 00:09:58.958 } 00:09:58.958 ] 00:09:58.958 }' 00:09:58.958 21:46:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.958 21:46:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:59.215 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:59.473 [2024-07-15 
21:46:14.512250] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:59.473 "name": "raid_bdev1", 00:09:59.473 "aliases": [ 00:09:59.473 "a422b554-42f3-11ef-9f7f-e9a656123a8b" 00:09:59.473 ], 00:09:59.473 "product_name": "Raid Volume", 00:09:59.473 "block_size": 512, 00:09:59.473 "num_blocks": 190464, 00:09:59.473 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:59.473 "assigned_rate_limits": { 00:09:59.473 "rw_ios_per_sec": 0, 00:09:59.473 "rw_mbytes_per_sec": 0, 00:09:59.473 "r_mbytes_per_sec": 0, 00:09:59.473 "w_mbytes_per_sec": 0 00:09:59.473 }, 00:09:59.473 "claimed": false, 00:09:59.473 "zoned": false, 00:09:59.473 "supported_io_types": { 00:09:59.473 "read": true, 00:09:59.473 "write": true, 00:09:59.473 "unmap": true, 00:09:59.473 "flush": true, 00:09:59.473 "reset": true, 00:09:59.473 "nvme_admin": false, 00:09:59.473 "nvme_io": false, 00:09:59.473 "nvme_io_md": false, 00:09:59.473 "write_zeroes": true, 00:09:59.473 "zcopy": false, 00:09:59.473 "get_zone_info": false, 00:09:59.473 "zone_management": false, 00:09:59.473 "zone_append": false, 00:09:59.473 "compare": false, 00:09:59.473 "compare_and_write": false, 00:09:59.473 "abort": false, 00:09:59.473 "seek_hole": false, 00:09:59.473 "seek_data": false, 00:09:59.473 "copy": false, 00:09:59.473 "nvme_iov_md": false 00:09:59.473 }, 00:09:59.473 "memory_domains": [ 00:09:59.473 { 00:09:59.473 "dma_device_id": "system", 00:09:59.473 "dma_device_type": 1 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.473 "dma_device_type": 2 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "dma_device_id": "system", 00:09:59.473 "dma_device_type": 1 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.473 "dma_device_type": 2 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "dma_device_id": "system", 00:09:59.473 "dma_device_type": 1 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.473 "dma_device_type": 2 00:09:59.473 } 00:09:59.473 ], 00:09:59.473 "driver_specific": { 00:09:59.473 "raid": { 00:09:59.473 "uuid": "a422b554-42f3-11ef-9f7f-e9a656123a8b", 00:09:59.473 "strip_size_kb": 64, 00:09:59.473 "state": "online", 00:09:59.473 "raid_level": "raid0", 00:09:59.473 "superblock": true, 00:09:59.473 "num_base_bdevs": 3, 00:09:59.473 "num_base_bdevs_discovered": 3, 00:09:59.473 "num_base_bdevs_operational": 3, 00:09:59.473 "base_bdevs_list": [ 00:09:59.473 { 00:09:59.473 "name": "pt1", 00:09:59.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.473 "is_configured": true, 00:09:59.473 "data_offset": 2048, 00:09:59.473 "data_size": 63488 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "name": "pt2", 00:09:59.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.473 "is_configured": true, 00:09:59.473 "data_offset": 2048, 00:09:59.473 "data_size": 63488 00:09:59.473 }, 00:09:59.473 { 00:09:59.473 "name": "pt3", 00:09:59.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.473 "is_configured": true, 00:09:59.473 "data_offset": 2048, 00:09:59.473 "data_size": 63488 00:09:59.473 } 00:09:59.473 ] 00:09:59.473 } 00:09:59.473 } 00:09:59.473 }' 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:59.473 pt2 00:09:59.473 pt3' 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:59.473 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.741 "name": "pt1", 00:09:59.741 "aliases": [ 00:09:59.741 "00000000-0000-0000-0000-000000000001" 00:09:59.741 ], 00:09:59.741 "product_name": "passthru", 00:09:59.741 "block_size": 512, 00:09:59.741 "num_blocks": 65536, 00:09:59.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.741 "assigned_rate_limits": { 00:09:59.741 "rw_ios_per_sec": 0, 00:09:59.741 "rw_mbytes_per_sec": 0, 00:09:59.741 "r_mbytes_per_sec": 0, 00:09:59.741 "w_mbytes_per_sec": 0 00:09:59.741 }, 00:09:59.741 "claimed": true, 00:09:59.741 "claim_type": "exclusive_write", 00:09:59.741 "zoned": false, 00:09:59.741 "supported_io_types": { 00:09:59.741 "read": true, 00:09:59.741 "write": true, 00:09:59.741 "unmap": true, 00:09:59.741 "flush": true, 00:09:59.741 "reset": true, 00:09:59.741 "nvme_admin": false, 00:09:59.741 "nvme_io": false, 00:09:59.741 "nvme_io_md": false, 00:09:59.741 "write_zeroes": true, 00:09:59.741 "zcopy": true, 00:09:59.741 "get_zone_info": false, 00:09:59.741 "zone_management": false, 00:09:59.741 "zone_append": false, 00:09:59.741 "compare": false, 00:09:59.741 "compare_and_write": false, 00:09:59.741 "abort": true, 00:09:59.741 "seek_hole": false, 00:09:59.741 "seek_data": false, 00:09:59.741 "copy": true, 00:09:59.741 "nvme_iov_md": false 00:09:59.741 }, 00:09:59.741 "memory_domains": [ 00:09:59.741 { 00:09:59.741 "dma_device_id": "system", 00:09:59.741 "dma_device_type": 1 00:09:59.741 }, 00:09:59.741 { 00:09:59.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.741 "dma_device_type": 2 00:09:59.741 } 00:09:59.741 ], 00:09:59.741 "driver_specific": { 00:09:59.741 "passthru": { 00:09:59.741 "name": "pt1", 00:09:59.741 "base_bdev_name": "malloc1" 00:09:59.741 } 00:09:59.741 } 00:09:59.741 }' 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.741 21:46:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:00.014 "name": "pt2", 00:10:00.014 "aliases": [ 00:10:00.014 "00000000-0000-0000-0000-000000000002" 00:10:00.014 ], 00:10:00.014 "product_name": "passthru", 00:10:00.014 "block_size": 512, 00:10:00.014 "num_blocks": 65536, 00:10:00.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.014 "assigned_rate_limits": { 00:10:00.014 "rw_ios_per_sec": 0, 00:10:00.014 "rw_mbytes_per_sec": 0, 00:10:00.014 "r_mbytes_per_sec": 0, 00:10:00.014 "w_mbytes_per_sec": 0 00:10:00.014 }, 00:10:00.014 "claimed": true, 00:10:00.014 "claim_type": "exclusive_write", 00:10:00.014 "zoned": false, 00:10:00.014 "supported_io_types": { 00:10:00.014 "read": true, 00:10:00.014 "write": true, 00:10:00.014 "unmap": true, 00:10:00.014 "flush": true, 00:10:00.014 "reset": true, 00:10:00.014 "nvme_admin": false, 00:10:00.014 "nvme_io": false, 00:10:00.014 "nvme_io_md": false, 00:10:00.014 "write_zeroes": true, 00:10:00.014 "zcopy": true, 00:10:00.014 "get_zone_info": false, 00:10:00.014 "zone_management": false, 00:10:00.014 "zone_append": false, 00:10:00.014 "compare": false, 00:10:00.014 "compare_and_write": false, 00:10:00.014 "abort": true, 00:10:00.014 "seek_hole": false, 00:10:00.014 "seek_data": false, 00:10:00.014 "copy": true, 00:10:00.014 "nvme_iov_md": false 00:10:00.014 }, 00:10:00.014 "memory_domains": [ 00:10:00.014 { 00:10:00.014 "dma_device_id": "system", 00:10:00.014 "dma_device_type": 1 00:10:00.014 }, 00:10:00.014 { 00:10:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.014 "dma_device_type": 2 00:10:00.014 } 00:10:00.014 ], 00:10:00.014 "driver_specific": { 00:10:00.014 "passthru": { 00:10:00.014 "name": "pt2", 00:10:00.014 "base_bdev_name": "malloc2" 00:10:00.014 } 00:10:00.014 } 00:10:00.014 }' 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:00.014 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:00.580 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:00.580 "name": "pt3", 00:10:00.580 "aliases": [ 00:10:00.580 "00000000-0000-0000-0000-000000000003" 00:10:00.580 ], 00:10:00.580 "product_name": "passthru", 00:10:00.580 "block_size": 512, 00:10:00.580 "num_blocks": 65536, 00:10:00.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.580 "assigned_rate_limits": { 00:10:00.580 "rw_ios_per_sec": 0, 00:10:00.580 "rw_mbytes_per_sec": 0, 00:10:00.580 "r_mbytes_per_sec": 0, 00:10:00.580 "w_mbytes_per_sec": 0 00:10:00.580 }, 00:10:00.580 "claimed": true, 00:10:00.580 "claim_type": "exclusive_write", 00:10:00.580 "zoned": false, 00:10:00.580 "supported_io_types": { 00:10:00.580 "read": true, 00:10:00.580 "write": true, 00:10:00.580 "unmap": true, 00:10:00.580 "flush": true, 00:10:00.580 "reset": true, 00:10:00.580 "nvme_admin": false, 00:10:00.580 "nvme_io": false, 00:10:00.580 "nvme_io_md": false, 00:10:00.580 "write_zeroes": true, 00:10:00.580 "zcopy": true, 00:10:00.580 "get_zone_info": false, 00:10:00.580 "zone_management": false, 00:10:00.580 "zone_append": false, 00:10:00.580 "compare": false, 00:10:00.580 "compare_and_write": false, 00:10:00.580 "abort": true, 00:10:00.580 "seek_hole": false, 00:10:00.580 "seek_data": false, 00:10:00.580 "copy": true, 00:10:00.580 "nvme_iov_md": false 00:10:00.580 }, 00:10:00.580 "memory_domains": [ 00:10:00.580 { 00:10:00.580 "dma_device_id": "system", 00:10:00.581 "dma_device_type": 1 00:10:00.581 }, 00:10:00.581 { 00:10:00.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.581 "dma_device_type": 2 00:10:00.581 } 00:10:00.581 ], 00:10:00.581 "driver_specific": { 00:10:00.581 "passthru": { 00:10:00.581 "name": "pt3", 00:10:00.581 "base_bdev_name": "malloc3" 00:10:00.581 } 00:10:00.581 } 00:10:00.581 }' 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:00.581 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:00.839 [2024-07-15 21:46:15.804350] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a422b554-42f3-11ef-9f7f-e9a656123a8b '!=' a422b554-42f3-11ef-9f7f-e9a656123a8b ']' 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53429 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 53429 ']' 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 53429 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 53429 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:10:00.839 killing process with pid 53429 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 53429' 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 53429 00:10:00.839 [2024-07-15 21:46:15.837374] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.839 [2024-07-15 21:46:15.837399] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.839 [2024-07-15 21:46:15.837412] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.839 [2024-07-15 21:46:15.837416] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f41e8634780 name raid_bdev1, state offline 00:10:00.839 21:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 53429 00:10:00.839 [2024-07-15 21:46:15.854554] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.839 21:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:00.839 00:10:00.839 real 0m12.003s 00:10:00.839 user 0m21.286s 00:10:00.839 sys 0m1.927s 00:10:00.839 21:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:00.839 21:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.839 ************************************ 00:10:00.839 END TEST raid_superblock_test 00:10:00.839 ************************************ 00:10:01.097 21:46:16 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:10:01.097 21:46:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:01.097 21:46:16 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:10:01.097 21:46:16 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:01.097 21:46:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.097 ************************************ 
00:10:01.097 START TEST raid_read_error_test 00:10:01.097 ************************************ 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 3 read 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.DKNNOFVv8T 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53780 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53780 /var/tmp/spdk-raid.sock 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 53780 ']' 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:01.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:01.097 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:01.098 21:46:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.098 [2024-07-15 21:46:16.087559] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:01.098 [2024-07-15 21:46:16.087702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:01.664 EAL: TSC is not safe to use in SMP mode 00:10:01.664 EAL: TSC is not invariant 00:10:01.664 [2024-07-15 21:46:16.658050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.664 [2024-07-15 21:46:16.747270] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:01.664 [2024-07-15 21:46:16.749403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.664 [2024-07-15 21:46:16.750161] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.664 [2024-07-15 21:46:16.750174] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.231 21:46:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:02.231 21:46:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:10:02.231 21:46:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:02.231 21:46:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.490 BaseBdev1_malloc 00:10:02.490 21:46:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:02.749 true 00:10:02.749 21:46:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:03.008 [2024-07-15 21:46:17.986557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:03.008 [2024-07-15 21:46:17.986625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.008 [2024-07-15 21:46:17.986654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e32ca234780 00:10:03.008 [2024-07-15 21:46:17.986663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.008 [2024-07-15 21:46:17.987345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.008 [2024-07-15 21:46:17.987371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:03.008 BaseBdev1 00:10:03.008 21:46:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.008 21:46:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:03.266 BaseBdev2_malloc 00:10:03.266 21:46:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:03.525 true 00:10:03.525 21:46:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:03.784 [2024-07-15 21:46:18.726557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:03.784 [2024-07-15 21:46:18.726608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.784 [2024-07-15 21:46:18.726633] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e32ca234c80 00:10:03.784 [2024-07-15 21:46:18.726641] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.784 [2024-07-15 21:46:18.727315] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.784 [2024-07-15 21:46:18.727337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:03.784 BaseBdev2 00:10:03.784 21:46:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.784 21:46:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.042 BaseBdev3_malloc 00:10:04.042 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:04.300 true 00:10:04.300 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.559 [2024-07-15 21:46:19.502570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.559 [2024-07-15 21:46:19.502642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.559 [2024-07-15 21:46:19.502685] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e32ca235180 00:10:04.559 [2024-07-15 21:46:19.502693] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.559 [2024-07-15 21:46:19.503374] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.559 [2024-07-15 21:46:19.503397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:04.559 BaseBdev3 00:10:04.559 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:04.818 [2024-07-15 21:46:19.750586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.818 [2024-07-15 21:46:19.751217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.818 [2024-07-15 21:46:19.751242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.818 
[2024-07-15 21:46:19.751300] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e32ca235400 00:10:04.818 [2024-07-15 21:46:19.751306] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.818 [2024-07-15 21:46:19.751343] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e32ca2a0e20 00:10:04.818 [2024-07-15 21:46:19.751414] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e32ca235400 00:10:04.818 [2024-07-15 21:46:19.751419] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e32ca235400 00:10:04.818 [2024-07-15 21:46:19.751447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.818 21:46:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.076 21:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:05.076 "name": "raid_bdev1", 00:10:05.076 "uuid": "abbc4ef3-42f3-11ef-9f7f-e9a656123a8b", 00:10:05.076 "strip_size_kb": 64, 00:10:05.076 "state": "online", 00:10:05.076 "raid_level": "raid0", 00:10:05.076 "superblock": true, 00:10:05.076 "num_base_bdevs": 3, 00:10:05.076 "num_base_bdevs_discovered": 3, 00:10:05.076 "num_base_bdevs_operational": 3, 00:10:05.077 "base_bdevs_list": [ 00:10:05.077 { 00:10:05.077 "name": "BaseBdev1", 00:10:05.077 "uuid": "651a05fb-2a70-bb56-95ba-502f151533bd", 00:10:05.077 "is_configured": true, 00:10:05.077 "data_offset": 2048, 00:10:05.077 "data_size": 63488 00:10:05.077 }, 00:10:05.077 { 00:10:05.077 "name": "BaseBdev2", 00:10:05.077 "uuid": "ee7d4d80-4acb-c15b-985a-038ad9664b71", 00:10:05.077 "is_configured": true, 00:10:05.077 "data_offset": 2048, 00:10:05.077 "data_size": 63488 00:10:05.077 }, 00:10:05.077 { 00:10:05.077 "name": "BaseBdev3", 00:10:05.077 "uuid": "d33003c7-6edb-5f58-a8cf-14fcd4fb7f22", 00:10:05.077 "is_configured": true, 00:10:05.077 "data_offset": 2048, 00:10:05.077 "data_size": 63488 00:10:05.077 } 00:10:05.077 ] 00:10:05.077 }' 00:10:05.077 21:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:05.077 21:46:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.334 21:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:05.334 21:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:05.334 [2024-07-15 21:46:20.422830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e32ca2a0ec0 00:10:06.269 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.526 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.092 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.092 "name": "raid_bdev1", 00:10:07.092 "uuid": "abbc4ef3-42f3-11ef-9f7f-e9a656123a8b", 00:10:07.092 "strip_size_kb": 64, 00:10:07.092 "state": "online", 00:10:07.092 "raid_level": "raid0", 00:10:07.092 "superblock": true, 00:10:07.092 "num_base_bdevs": 3, 00:10:07.092 "num_base_bdevs_discovered": 3, 00:10:07.092 "num_base_bdevs_operational": 3, 00:10:07.092 "base_bdevs_list": [ 00:10:07.092 { 00:10:07.092 "name": "BaseBdev1", 00:10:07.092 "uuid": "651a05fb-2a70-bb56-95ba-502f151533bd", 00:10:07.092 "is_configured": true, 00:10:07.092 "data_offset": 2048, 00:10:07.092 "data_size": 63488 00:10:07.092 }, 00:10:07.092 { 00:10:07.092 "name": "BaseBdev2", 00:10:07.092 "uuid": "ee7d4d80-4acb-c15b-985a-038ad9664b71", 00:10:07.092 "is_configured": true, 00:10:07.092 "data_offset": 2048, 00:10:07.092 "data_size": 63488 00:10:07.092 }, 00:10:07.092 { 00:10:07.092 "name": "BaseBdev3", 00:10:07.092 "uuid": "d33003c7-6edb-5f58-a8cf-14fcd4fb7f22", 00:10:07.093 "is_configured": true, 00:10:07.093 "data_offset": 2048, 00:10:07.093 "data_size": 63488 
00:10:07.093 } 00:10:07.093 ] 00:10:07.093 }' 00:10:07.093 21:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.093 21:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.351 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:07.609 [2024-07-15 21:46:22.645370] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.610 [2024-07-15 21:46:22.645405] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.610 [2024-07-15 21:46:22.645777] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.610 [2024-07-15 21:46:22.645792] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.610 [2024-07-15 21:46:22.645803] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.610 [2024-07-15 21:46:22.645810] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e32ca235400 name raid_bdev1, state offline 00:10:07.610 0 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53780 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 53780 ']' 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 53780 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 53780 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:10:07.610 killing process with pid 53780 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 53780' 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 53780 00:10:07.610 [2024-07-15 21:46:22.674682] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.610 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 53780 00:10:07.610 [2024-07-15 21:46:22.692098] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.DKNNOFVv8T 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:10:07.868 00:10:07.868 real 0m6.823s 00:10:07.868 user 0m10.744s 00:10:07.868 sys 0m1.120s 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:07.868 ************************************ 00:10:07.868 21:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.868 END TEST raid_read_error_test 00:10:07.868 ************************************ 00:10:07.868 21:46:22 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:10:07.868 21:46:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:07.868 21:46:22 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:10:07.868 21:46:22 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:07.868 21:46:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.868 ************************************ 00:10:07.868 START TEST raid_write_error_test 00:10:07.868 ************************************ 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 3 write 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:07.868 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:07.869 21:46:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ecbYxPjgW5 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53911 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53911 /var/tmp/spdk-raid.sock 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 53911 ']' 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:07.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:07.869 21:46:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.869 [2024-07-15 21:46:22.947846] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:07.869 [2024-07-15 21:46:22.948064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:08.435 EAL: TSC is not safe to use in SMP mode 00:10:08.435 EAL: TSC is not invariant 00:10:08.435 [2024-07-15 21:46:23.478501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.435 [2024-07-15 21:46:23.579316] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
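
For reference, the bdevperf launch traced above reduces to the following minimal sketch. The binary path, socket, and flags are verbatim from the log; the readiness loop is an assumed stand-in for the waitforlisten helper, polling a standard RPC (rpc_get_methods) until the socket answers.

  # Launch bdevperf against the raid-test RPC socket (flags as traced above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  # Assumed stand-in for waitforlisten: block until the app serves RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
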
00:10:08.435 [2024-07-15 21:46:23.581762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.435 [2024-07-15 21:46:23.582734] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.435 [2024-07-15 21:46:23.582752] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.002 21:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:09.002 21:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:10:09.002 21:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:09.002 21:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.260 BaseBdev1_malloc 00:10:09.260 21:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:09.517 true 00:10:09.775 21:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.032 [2024-07-15 21:46:24.988110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.032 [2024-07-15 21:46:24.988205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.032 [2024-07-15 21:46:24.988246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf20cba34780 00:10:10.032 [2024-07-15 21:46:24.988263] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.032 [2024-07-15 21:46:24.989038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.032 [2024-07-15 21:46:24.989090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.032 BaseBdev1 00:10:10.032 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:10.032 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:10.289 BaseBdev2_malloc 00:10:10.289 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:10.545 true 00:10:10.545 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.803 [2024-07-15 21:46:25.916143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.803 [2024-07-15 21:46:25.916232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.803 [2024-07-15 21:46:25.916259] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf20cba34c80 00:10:10.803 [2024-07-15 21:46:25.916268] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.803 [2024-07-15 21:46:25.917002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.803 [2024-07-15 21:46:25.917033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:10:10.803 BaseBdev2 00:10:10.803 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:10.803 21:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:11.060 BaseBdev3_malloc 00:10:11.060 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:11.317 true 00:10:11.317 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:11.611 [2024-07-15 21:46:26.688215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:11.611 [2024-07-15 21:46:26.688280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.611 [2024-07-15 21:46:26.688311] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf20cba35180 00:10:11.611 [2024-07-15 21:46:26.688320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.611 [2024-07-15 21:46:26.689012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.611 [2024-07-15 21:46:26.689045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:11.611 BaseBdev3 00:10:11.611 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:11.869 [2024-07-15 21:46:26.960250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.869 [2024-07-15 21:46:26.960885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.869 [2024-07-15 21:46:26.960924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.869 [2024-07-15 21:46:26.961004] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf20cba35400 00:10:11.869 [2024-07-15 21:46:26.961014] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.869 [2024-07-15 21:46:26.961070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf20cbaa0e20 00:10:11.869 [2024-07-15 21:46:26.961164] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf20cba35400 00:10:11.869 [2024-07-15 21:46:26.961172] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf20cba35400 00:10:11.869 [2024-07-15 21:46:26.961213] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
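
Condensed, each leg of the array is the same three-layer stack built with the RPCs shown in the trace, and the array is then assembled over the passthru names. A sketch for one leg follows; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the EE_ prefix on the error bdev matches the naming visible above.

  # 32 MiB malloc disk with 512-byte blocks, wrapped in an error-injection
  # bdev (named EE_BaseBdev1_malloc, the target of bdev_error_inject_error
  # later in the run), exposed through a passthru as BaseBdev1.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # Once all three legs exist: raid0, 64 KiB strip (-z 64), superblock (-s).
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
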
00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.869 21:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.126 21:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.126 "name": "raid_bdev1", 00:10:12.126 "uuid": "b0086a2e-42f3-11ef-9f7f-e9a656123a8b", 00:10:12.126 "strip_size_kb": 64, 00:10:12.126 "state": "online", 00:10:12.126 "raid_level": "raid0", 00:10:12.126 "superblock": true, 00:10:12.126 "num_base_bdevs": 3, 00:10:12.126 "num_base_bdevs_discovered": 3, 00:10:12.126 "num_base_bdevs_operational": 3, 00:10:12.126 "base_bdevs_list": [ 00:10:12.127 { 00:10:12.127 "name": "BaseBdev1", 00:10:12.127 "uuid": "529108d2-9e1d-665e-8056-a75c837e28cc", 00:10:12.127 "is_configured": true, 00:10:12.127 "data_offset": 2048, 00:10:12.127 "data_size": 63488 00:10:12.127 }, 00:10:12.127 { 00:10:12.127 "name": "BaseBdev2", 00:10:12.127 "uuid": "82ed1726-6848-b35f-87ec-909b51f9075a", 00:10:12.127 "is_configured": true, 00:10:12.127 "data_offset": 2048, 00:10:12.127 "data_size": 63488 00:10:12.127 }, 00:10:12.127 { 00:10:12.127 "name": "BaseBdev3", 00:10:12.127 "uuid": "463bf826-f3ef-495a-8837-79d578ab3380", 00:10:12.127 "is_configured": true, 00:10:12.127 "data_offset": 2048, 00:10:12.127 "data_size": 63488 00:10:12.127 } 00:10:12.127 ] 00:10:12.127 }' 00:10:12.127 21:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.127 21:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.690 21:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:12.690 21:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:12.690 [2024-07-15 21:46:27.856431] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf20cbaa0ec0 00:10:13.622 21:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.202 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.460 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.460 "name": "raid_bdev1", 00:10:14.460 "uuid": "b0086a2e-42f3-11ef-9f7f-e9a656123a8b", 00:10:14.460 "strip_size_kb": 64, 00:10:14.460 "state": "online", 00:10:14.460 "raid_level": "raid0", 00:10:14.460 "superblock": true, 00:10:14.460 "num_base_bdevs": 3, 00:10:14.460 "num_base_bdevs_discovered": 3, 00:10:14.460 "num_base_bdevs_operational": 3, 00:10:14.460 "base_bdevs_list": [ 00:10:14.460 { 00:10:14.460 "name": "BaseBdev1", 00:10:14.460 "uuid": "529108d2-9e1d-665e-8056-a75c837e28cc", 00:10:14.460 "is_configured": true, 00:10:14.460 "data_offset": 2048, 00:10:14.460 "data_size": 63488 00:10:14.460 }, 00:10:14.460 { 00:10:14.460 "name": "BaseBdev2", 00:10:14.460 "uuid": "82ed1726-6848-b35f-87ec-909b51f9075a", 00:10:14.460 "is_configured": true, 00:10:14.460 "data_offset": 2048, 00:10:14.460 "data_size": 63488 00:10:14.460 }, 00:10:14.460 { 00:10:14.460 "name": "BaseBdev3", 00:10:14.460 "uuid": "463bf826-f3ef-495a-8837-79d578ab3380", 00:10:14.460 "is_configured": true, 00:10:14.460 "data_offset": 2048, 00:10:14.461 "data_size": 63488 00:10:14.461 } 00:10:14.461 ] 00:10:14.461 }' 00:10:14.461 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.461 21:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.719 21:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:14.977 [2024-07-15 21:46:30.160743] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.977 [2024-07-15 21:46:30.160782] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.977 [2024-07-15 21:46:30.161244] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.977 [2024-07-15 21:46:30.161271] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.977 [2024-07-15 21:46:30.161284] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.977 [2024-07-15 21:46:30.161291] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf20cba35400 name raid_bdev1, state offline 00:10:15.234 0 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53911 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 53911 ']' 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 53911 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 53911 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:10:15.234 killing process with pid 53911 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 53911' 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 53911 00:10:15.234 [2024-07-15 21:46:30.185636] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 53911 00:10:15.234 [2024-07-15 21:46:30.203088] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ecbYxPjgW5 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:10:15.234 00:10:15.234 real 0m7.468s 00:10:15.234 user 0m12.059s 00:10:15.234 sys 0m1.130s 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:15.234 21:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.234 ************************************ 00:10:15.234 END TEST raid_write_error_test 00:10:15.234 ************************************ 00:10:15.492 21:46:30 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:10:15.492 21:46:30 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:15.492 21:46:30 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:15.492 21:46:30 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:10:15.492 21:46:30 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:15.492 21:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.492 ************************************ 00:10:15.493 START TEST raid_state_function_test 00:10:15.493 ************************************ 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # 
raid_state_function_test concat 3 false 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54048 00:10:15.493 Process raid pid: 54048 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54048' 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54048 /var/tmp/spdk-raid.sock 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 54048 ']' 
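
Unlike the two I/O error tests, which need bdevperf, this test only drives RPC state transitions, so it runs against the bare bdev_svc app; the launch seen above amounts to the sketch below (raid_pid assignment assumed, matching the waitforlisten call that follows).

  # bdev_svc: minimal SPDK app that just hosts bdevs and serves RPCs.
  # Path, socket, shm id (-i 0), and debug log flag are as traced above.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
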
00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:15.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:15.493 21:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.493 [2024-07-15 21:46:30.439832] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:15.493 [2024-07-15 21:46:30.439974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:16.084 EAL: TSC is not safe to use in SMP mode 00:10:16.084 EAL: TSC is not invariant 00:10:16.084 [2024-07-15 21:46:31.021899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.084 [2024-07-15 21:46:31.122980] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:16.084 [2024-07-15 21:46:31.125426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.084 [2024-07-15 21:46:31.126464] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.084 [2024-07-15 21:46:31.126482] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:16.650 [2024-07-15 21:46:31.793153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.650 [2024-07-15 21:46:31.793222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.650 [2024-07-15 21:46:31.793227] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.650 [2024-07-15 21:46:31.793251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.650 [2024-07-15 21:46:31.793254] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.650 [2024-07-15 21:46:31.793261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.650 21:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.908 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.908 "name": "Existed_Raid", 00:10:16.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.908 "strip_size_kb": 64, 00:10:16.908 "state": "configuring", 00:10:16.908 "raid_level": "concat", 00:10:16.908 "superblock": false, 00:10:16.908 "num_base_bdevs": 3, 00:10:16.908 "num_base_bdevs_discovered": 0, 00:10:16.908 "num_base_bdevs_operational": 3, 00:10:16.908 "base_bdevs_list": [ 00:10:16.908 { 00:10:16.908 "name": "BaseBdev1", 00:10:16.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.908 "is_configured": false, 00:10:16.908 "data_offset": 0, 00:10:16.908 "data_size": 0 00:10:16.908 }, 00:10:16.908 { 00:10:16.908 "name": "BaseBdev2", 00:10:16.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.908 "is_configured": false, 00:10:16.908 "data_offset": 0, 00:10:16.908 "data_size": 0 00:10:16.908 }, 00:10:16.908 { 00:10:16.908 "name": "BaseBdev3", 00:10:16.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.908 "is_configured": false, 00:10:16.908 "data_offset": 0, 00:10:16.908 "data_size": 0 00:10:16.908 } 00:10:16.908 ] 00:10:16.908 }' 00:10:16.908 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.908 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.167 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:17.425 [2024-07-15 21:46:32.517159] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.425 [2024-07-15 21:46:32.517183] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1e71234500 name Existed_Raid, state configuring 00:10:17.425 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:17.683 [2024-07-15 21:46:32.737172] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.684 [2024-07-15 21:46:32.737233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.684 [2024-07-15 21:46:32.737237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.684 [2024-07-15 21:46:32.737261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.684 [2024-07-15 
21:46:32.737265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.684 [2024-07-15 21:46:32.737272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.684 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.943 [2024-07-15 21:46:32.946197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.943 BaseBdev1 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:17.943 21:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:18.202 21:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.459 [ 00:10:18.459 { 00:10:18.459 "name": "BaseBdev1", 00:10:18.459 "aliases": [ 00:10:18.459 "b399a526-42f3-11ef-9f7f-e9a656123a8b" 00:10:18.459 ], 00:10:18.459 "product_name": "Malloc disk", 00:10:18.459 "block_size": 512, 00:10:18.460 "num_blocks": 65536, 00:10:18.460 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:18.460 "assigned_rate_limits": { 00:10:18.460 "rw_ios_per_sec": 0, 00:10:18.460 "rw_mbytes_per_sec": 0, 00:10:18.460 "r_mbytes_per_sec": 0, 00:10:18.460 "w_mbytes_per_sec": 0 00:10:18.460 }, 00:10:18.460 "claimed": true, 00:10:18.460 "claim_type": "exclusive_write", 00:10:18.460 "zoned": false, 00:10:18.460 "supported_io_types": { 00:10:18.460 "read": true, 00:10:18.460 "write": true, 00:10:18.460 "unmap": true, 00:10:18.460 "flush": true, 00:10:18.460 "reset": true, 00:10:18.460 "nvme_admin": false, 00:10:18.460 "nvme_io": false, 00:10:18.460 "nvme_io_md": false, 00:10:18.460 "write_zeroes": true, 00:10:18.460 "zcopy": true, 00:10:18.460 "get_zone_info": false, 00:10:18.460 "zone_management": false, 00:10:18.460 "zone_append": false, 00:10:18.460 "compare": false, 00:10:18.460 "compare_and_write": false, 00:10:18.460 "abort": true, 00:10:18.460 "seek_hole": false, 00:10:18.460 "seek_data": false, 00:10:18.460 "copy": true, 00:10:18.460 "nvme_iov_md": false 00:10:18.460 }, 00:10:18.460 "memory_domains": [ 00:10:18.460 { 00:10:18.460 "dma_device_id": "system", 00:10:18.460 "dma_device_type": 1 00:10:18.460 }, 00:10:18.460 { 00:10:18.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.460 "dma_device_type": 2 00:10:18.460 } 00:10:18.460 ], 00:10:18.460 "driver_specific": {} 00:10:18.460 } 00:10:18.460 ] 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.460 "name": "Existed_Raid", 00:10:18.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.460 "strip_size_kb": 64, 00:10:18.460 "state": "configuring", 00:10:18.460 "raid_level": "concat", 00:10:18.460 "superblock": false, 00:10:18.460 "num_base_bdevs": 3, 00:10:18.460 "num_base_bdevs_discovered": 1, 00:10:18.460 "num_base_bdevs_operational": 3, 00:10:18.460 "base_bdevs_list": [ 00:10:18.460 { 00:10:18.460 "name": "BaseBdev1", 00:10:18.460 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:18.460 "is_configured": true, 00:10:18.460 "data_offset": 0, 00:10:18.460 "data_size": 65536 00:10:18.460 }, 00:10:18.460 { 00:10:18.460 "name": "BaseBdev2", 00:10:18.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.460 "is_configured": false, 00:10:18.460 "data_offset": 0, 00:10:18.460 "data_size": 0 00:10:18.460 }, 00:10:18.460 { 00:10:18.460 "name": "BaseBdev3", 00:10:18.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.460 "is_configured": false, 00:10:18.460 "data_offset": 0, 00:10:18.460 "data_size": 0 00:10:18.460 } 00:10:18.460 ] 00:10:18.460 }' 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.460 21:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.028 21:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:19.028 [2024-07-15 21:46:34.153196] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.028 [2024-07-15 21:46:34.153225] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1e71234500 name Existed_Raid, state configuring 00:10:19.028 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:19.286 [2024-07-15 21:46:34.373213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
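
The verify_raid_bdev_state helper invoked here boils down to one RPC plus the jq filter visible in the trace; in the sketch below the individual field checks are assumptions that mirror the expected values passed in (Existed_Raid, configuring, concat, 64, 3).

  # Fetch all raid bdevs and keep the one under test.
  tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # Compare reported fields against the expectations.
  [[ $(jq -r .state <<< "$tmp") == configuring ]]
  [[ $(jq -r .raid_level <<< "$tmp") == concat ]]
  [[ $(jq -r .strip_size_kb <<< "$tmp") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 3 ]]
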
is claimed 00:10:19.286 [2024-07-15 21:46:34.374061] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.286 [2024-07-15 21:46:34.374096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.286 [2024-07-15 21:46:34.374102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.286 [2024-07-15 21:46:34.374110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.286 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.544 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.544 "name": "Existed_Raid", 00:10:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.544 "strip_size_kb": 64, 00:10:19.544 "state": "configuring", 00:10:19.544 "raid_level": "concat", 00:10:19.544 "superblock": false, 00:10:19.544 "num_base_bdevs": 3, 00:10:19.544 "num_base_bdevs_discovered": 1, 00:10:19.544 "num_base_bdevs_operational": 3, 00:10:19.544 "base_bdevs_list": [ 00:10:19.544 { 00:10:19.544 "name": "BaseBdev1", 00:10:19.544 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:19.544 "is_configured": true, 00:10:19.544 "data_offset": 0, 00:10:19.544 "data_size": 65536 00:10:19.544 }, 00:10:19.544 { 00:10:19.544 "name": "BaseBdev2", 00:10:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.544 "is_configured": false, 00:10:19.544 "data_offset": 0, 00:10:19.544 "data_size": 0 00:10:19.544 }, 00:10:19.544 { 00:10:19.544 "name": "BaseBdev3", 00:10:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.544 "is_configured": false, 00:10:19.544 "data_offset": 0, 00:10:19.544 "data_size": 0 00:10:19.544 } 00:10:19.544 ] 00:10:19.544 }' 00:10:19.544 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.544 21:46:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.803 21:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.061 [2024-07-15 21:46:35.209403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.061 BaseBdev2 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:20.061 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.626 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.884 [ 00:10:20.884 { 00:10:20.884 "name": "BaseBdev2", 00:10:20.884 "aliases": [ 00:10:20.884 "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b" 00:10:20.884 ], 00:10:20.884 "product_name": "Malloc disk", 00:10:20.884 "block_size": 512, 00:10:20.884 "num_blocks": 65536, 00:10:20.884 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:20.884 "assigned_rate_limits": { 00:10:20.884 "rw_ios_per_sec": 0, 00:10:20.884 "rw_mbytes_per_sec": 0, 00:10:20.884 "r_mbytes_per_sec": 0, 00:10:20.884 "w_mbytes_per_sec": 0 00:10:20.884 }, 00:10:20.884 "claimed": true, 00:10:20.884 "claim_type": "exclusive_write", 00:10:20.884 "zoned": false, 00:10:20.884 "supported_io_types": { 00:10:20.884 "read": true, 00:10:20.884 "write": true, 00:10:20.884 "unmap": true, 00:10:20.884 "flush": true, 00:10:20.884 "reset": true, 00:10:20.884 "nvme_admin": false, 00:10:20.884 "nvme_io": false, 00:10:20.884 "nvme_io_md": false, 00:10:20.884 "write_zeroes": true, 00:10:20.884 "zcopy": true, 00:10:20.884 "get_zone_info": false, 00:10:20.884 "zone_management": false, 00:10:20.884 "zone_append": false, 00:10:20.884 "compare": false, 00:10:20.884 "compare_and_write": false, 00:10:20.884 "abort": true, 00:10:20.884 "seek_hole": false, 00:10:20.884 "seek_data": false, 00:10:20.884 "copy": true, 00:10:20.884 "nvme_iov_md": false 00:10:20.884 }, 00:10:20.884 "memory_domains": [ 00:10:20.884 { 00:10:20.884 "dma_device_id": "system", 00:10:20.884 "dma_device_type": 1 00:10:20.884 }, 00:10:20.884 { 00:10:20.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.884 "dma_device_type": 2 00:10:20.884 } 00:10:20.884 ], 00:10:20.884 "driver_specific": {} 00:10:20.884 } 00:10:20.884 ] 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.884 21:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.142 21:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.142 "name": "Existed_Raid", 00:10:21.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.142 "strip_size_kb": 64, 00:10:21.142 "state": "configuring", 00:10:21.142 "raid_level": "concat", 00:10:21.142 "superblock": false, 00:10:21.142 "num_base_bdevs": 3, 00:10:21.142 "num_base_bdevs_discovered": 2, 00:10:21.142 "num_base_bdevs_operational": 3, 00:10:21.142 "base_bdevs_list": [ 00:10:21.142 { 00:10:21.142 "name": "BaseBdev1", 00:10:21.142 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:21.142 "is_configured": true, 00:10:21.142 "data_offset": 0, 00:10:21.142 "data_size": 65536 00:10:21.142 }, 00:10:21.142 { 00:10:21.142 "name": "BaseBdev2", 00:10:21.142 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:21.142 "is_configured": true, 00:10:21.142 "data_offset": 0, 00:10:21.142 "data_size": 65536 00:10:21.142 }, 00:10:21.142 { 00:10:21.142 "name": "BaseBdev3", 00:10:21.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.142 "is_configured": false, 00:10:21.142 "data_offset": 0, 00:10:21.142 "data_size": 0 00:10:21.142 } 00:10:21.142 ] 00:10:21.142 }' 00:10:21.142 21:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.142 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.399 21:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:21.657 [2024-07-15 21:46:36.769501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.657 [2024-07-15 21:46:36.769548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d1e71234a00 00:10:21.657 [2024-07-15 21:46:36.769553] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:21.657 [2024-07-15 21:46:36.769574] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d1e71297e20 00:10:21.657 [2024-07-15 21:46:36.769678] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d1e71234a00 00:10:21.657 [2024-07-15 21:46:36.769682] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d1e71234a00 00:10:21.657 [2024-07-15 21:46:36.769715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.657 BaseBdev3 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:21.657 21:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:21.916 21:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.175 [ 00:10:22.175 { 00:10:22.175 "name": "BaseBdev3", 00:10:22.175 "aliases": [ 00:10:22.175 "b5e12c03-42f3-11ef-9f7f-e9a656123a8b" 00:10:22.175 ], 00:10:22.175 "product_name": "Malloc disk", 00:10:22.175 "block_size": 512, 00:10:22.175 "num_blocks": 65536, 00:10:22.175 "uuid": "b5e12c03-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.175 "assigned_rate_limits": { 00:10:22.175 "rw_ios_per_sec": 0, 00:10:22.175 "rw_mbytes_per_sec": 0, 00:10:22.175 "r_mbytes_per_sec": 0, 00:10:22.175 "w_mbytes_per_sec": 0 00:10:22.175 }, 00:10:22.175 "claimed": true, 00:10:22.175 "claim_type": "exclusive_write", 00:10:22.175 "zoned": false, 00:10:22.175 "supported_io_types": { 00:10:22.175 "read": true, 00:10:22.175 "write": true, 00:10:22.175 "unmap": true, 00:10:22.175 "flush": true, 00:10:22.175 "reset": true, 00:10:22.175 "nvme_admin": false, 00:10:22.175 "nvme_io": false, 00:10:22.175 "nvme_io_md": false, 00:10:22.175 "write_zeroes": true, 00:10:22.175 "zcopy": true, 00:10:22.175 "get_zone_info": false, 00:10:22.175 "zone_management": false, 00:10:22.175 "zone_append": false, 00:10:22.175 "compare": false, 00:10:22.175 "compare_and_write": false, 00:10:22.175 "abort": true, 00:10:22.175 "seek_hole": false, 00:10:22.175 "seek_data": false, 00:10:22.175 "copy": true, 00:10:22.175 "nvme_iov_md": false 00:10:22.175 }, 00:10:22.175 "memory_domains": [ 00:10:22.175 { 00:10:22.175 "dma_device_id": "system", 00:10:22.175 "dma_device_type": 1 00:10:22.175 }, 00:10:22.175 { 00:10:22.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.175 "dma_device_type": 2 00:10:22.175 } 00:10:22.175 ], 00:10:22.175 "driver_specific": {} 00:10:22.175 } 00:10:22.175 ] 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.175 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.433 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.433 "name": "Existed_Raid", 00:10:22.433 "uuid": "b5e132dc-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.433 "strip_size_kb": 64, 00:10:22.433 "state": "online", 00:10:22.433 "raid_level": "concat", 00:10:22.433 "superblock": false, 00:10:22.433 "num_base_bdevs": 3, 00:10:22.433 "num_base_bdevs_discovered": 3, 00:10:22.433 "num_base_bdevs_operational": 3, 00:10:22.433 "base_bdevs_list": [ 00:10:22.433 { 00:10:22.433 "name": "BaseBdev1", 00:10:22.433 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.433 "is_configured": true, 00:10:22.433 "data_offset": 0, 00:10:22.433 "data_size": 65536 00:10:22.433 }, 00:10:22.433 { 00:10:22.433 "name": "BaseBdev2", 00:10:22.433 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.433 "is_configured": true, 00:10:22.433 "data_offset": 0, 00:10:22.433 "data_size": 65536 00:10:22.433 }, 00:10:22.433 { 00:10:22.433 "name": "BaseBdev3", 00:10:22.433 "uuid": "b5e12c03-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.433 "is_configured": true, 00:10:22.433 "data_offset": 0, 00:10:22.433 "data_size": 65536 00:10:22.433 } 00:10:22.433 ] 00:10:22.433 }' 00:10:22.433 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.433 21:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:22.692 21:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:22.950 [2024-07-15 21:46:38.041430] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.950 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:22.950 "name": "Existed_Raid", 00:10:22.950 "aliases": [ 00:10:22.950 "b5e132dc-42f3-11ef-9f7f-e9a656123a8b" 00:10:22.950 ], 00:10:22.950 "product_name": "Raid Volume", 00:10:22.950 "block_size": 512, 00:10:22.950 "num_blocks": 196608, 00:10:22.950 "uuid": "b5e132dc-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.950 "assigned_rate_limits": { 00:10:22.950 "rw_ios_per_sec": 0, 00:10:22.950 "rw_mbytes_per_sec": 0, 00:10:22.950 "r_mbytes_per_sec": 0, 00:10:22.950 "w_mbytes_per_sec": 0 00:10:22.950 }, 00:10:22.950 "claimed": false, 00:10:22.950 "zoned": false, 00:10:22.950 "supported_io_types": { 00:10:22.950 "read": true, 00:10:22.950 "write": true, 00:10:22.950 "unmap": true, 00:10:22.950 "flush": true, 00:10:22.950 "reset": true, 00:10:22.950 "nvme_admin": false, 00:10:22.950 "nvme_io": false, 00:10:22.950 "nvme_io_md": false, 00:10:22.950 "write_zeroes": true, 00:10:22.950 "zcopy": false, 00:10:22.950 "get_zone_info": false, 00:10:22.950 "zone_management": false, 00:10:22.950 "zone_append": false, 00:10:22.950 "compare": false, 00:10:22.950 "compare_and_write": false, 00:10:22.950 "abort": false, 00:10:22.950 "seek_hole": false, 00:10:22.950 "seek_data": false, 00:10:22.950 "copy": false, 00:10:22.950 "nvme_iov_md": false 00:10:22.950 }, 00:10:22.950 "memory_domains": [ 00:10:22.950 { 00:10:22.950 "dma_device_id": "system", 00:10:22.950 "dma_device_type": 1 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.950 "dma_device_type": 2 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "dma_device_id": "system", 00:10:22.950 "dma_device_type": 1 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.950 "dma_device_type": 2 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "dma_device_id": "system", 00:10:22.950 "dma_device_type": 1 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.950 "dma_device_type": 2 00:10:22.950 } 00:10:22.950 ], 00:10:22.950 "driver_specific": { 00:10:22.950 "raid": { 00:10:22.950 "uuid": "b5e132dc-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.950 "strip_size_kb": 64, 00:10:22.951 "state": "online", 00:10:22.951 "raid_level": "concat", 00:10:22.951 "superblock": false, 00:10:22.951 "num_base_bdevs": 3, 00:10:22.951 "num_base_bdevs_discovered": 3, 00:10:22.951 "num_base_bdevs_operational": 3, 00:10:22.951 "base_bdevs_list": [ 00:10:22.951 { 00:10:22.951 "name": "BaseBdev1", 00:10:22.951 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.951 "is_configured": true, 00:10:22.951 "data_offset": 0, 00:10:22.951 "data_size": 65536 00:10:22.951 }, 00:10:22.951 { 00:10:22.951 "name": "BaseBdev2", 00:10:22.951 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.951 "is_configured": true, 00:10:22.951 "data_offset": 0, 00:10:22.951 "data_size": 65536 00:10:22.951 }, 00:10:22.951 { 00:10:22.951 "name": "BaseBdev3", 00:10:22.951 "uuid": "b5e12c03-42f3-11ef-9f7f-e9a656123a8b", 00:10:22.951 "is_configured": true, 00:10:22.951 "data_offset": 0, 00:10:22.951 "data_size": 65536 00:10:22.951 } 00:10:22.951 ] 00:10:22.951 } 00:10:22.951 } 00:10:22.951 }' 00:10:22.951 21:46:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.951 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:22.951 BaseBdev2 00:10:22.951 BaseBdev3' 00:10:22.951 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.951 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:22.951 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.209 "name": "BaseBdev1", 00:10:23.209 "aliases": [ 00:10:23.209 "b399a526-42f3-11ef-9f7f-e9a656123a8b" 00:10:23.209 ], 00:10:23.209 "product_name": "Malloc disk", 00:10:23.209 "block_size": 512, 00:10:23.209 "num_blocks": 65536, 00:10:23.209 "uuid": "b399a526-42f3-11ef-9f7f-e9a656123a8b", 00:10:23.209 "assigned_rate_limits": { 00:10:23.209 "rw_ios_per_sec": 0, 00:10:23.209 "rw_mbytes_per_sec": 0, 00:10:23.209 "r_mbytes_per_sec": 0, 00:10:23.209 "w_mbytes_per_sec": 0 00:10:23.209 }, 00:10:23.209 "claimed": true, 00:10:23.209 "claim_type": "exclusive_write", 00:10:23.209 "zoned": false, 00:10:23.209 "supported_io_types": { 00:10:23.209 "read": true, 00:10:23.209 "write": true, 00:10:23.209 "unmap": true, 00:10:23.209 "flush": true, 00:10:23.209 "reset": true, 00:10:23.209 "nvme_admin": false, 00:10:23.209 "nvme_io": false, 00:10:23.209 "nvme_io_md": false, 00:10:23.209 "write_zeroes": true, 00:10:23.209 "zcopy": true, 00:10:23.209 "get_zone_info": false, 00:10:23.209 "zone_management": false, 00:10:23.209 "zone_append": false, 00:10:23.209 "compare": false, 00:10:23.209 "compare_and_write": false, 00:10:23.209 "abort": true, 00:10:23.209 "seek_hole": false, 00:10:23.209 "seek_data": false, 00:10:23.209 "copy": true, 00:10:23.209 "nvme_iov_md": false 00:10:23.209 }, 00:10:23.209 "memory_domains": [ 00:10:23.209 { 00:10:23.209 "dma_device_id": "system", 00:10:23.209 "dma_device_type": 1 00:10:23.209 }, 00:10:23.209 { 00:10:23.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.209 "dma_device_type": 2 00:10:23.209 } 00:10:23.209 ], 00:10:23.209 "driver_specific": {} 00:10:23.209 }' 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.209 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.468 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.468 "name": "BaseBdev2", 00:10:23.468 "aliases": [ 00:10:23.468 "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b" 00:10:23.468 ], 00:10:23.468 "product_name": "Malloc disk", 00:10:23.468 "block_size": 512, 00:10:23.468 "num_blocks": 65536, 00:10:23.468 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:23.468 "assigned_rate_limits": { 00:10:23.468 "rw_ios_per_sec": 0, 00:10:23.468 "rw_mbytes_per_sec": 0, 00:10:23.468 "r_mbytes_per_sec": 0, 00:10:23.468 "w_mbytes_per_sec": 0 00:10:23.468 }, 00:10:23.468 "claimed": true, 00:10:23.468 "claim_type": "exclusive_write", 00:10:23.468 "zoned": false, 00:10:23.468 "supported_io_types": { 00:10:23.468 "read": true, 00:10:23.468 "write": true, 00:10:23.468 "unmap": true, 00:10:23.468 "flush": true, 00:10:23.468 "reset": true, 00:10:23.468 "nvme_admin": false, 00:10:23.468 "nvme_io": false, 00:10:23.468 "nvme_io_md": false, 00:10:23.468 "write_zeroes": true, 00:10:23.468 "zcopy": true, 00:10:23.468 "get_zone_info": false, 00:10:23.468 "zone_management": false, 00:10:23.468 "zone_append": false, 00:10:23.469 "compare": false, 00:10:23.469 "compare_and_write": false, 00:10:23.469 "abort": true, 00:10:23.469 "seek_hole": false, 00:10:23.469 "seek_data": false, 00:10:23.469 "copy": true, 00:10:23.469 "nvme_iov_md": false 00:10:23.469 }, 00:10:23.469 "memory_domains": [ 00:10:23.469 { 00:10:23.469 "dma_device_id": "system", 00:10:23.469 "dma_device_type": 1 00:10:23.469 }, 00:10:23.469 { 00:10:23.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.469 "dma_device_type": 2 00:10:23.469 } 00:10:23.469 ], 00:10:23.469 "driver_specific": {} 00:10:23.469 }' 00:10:23.469 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.469 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.469 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.469 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:23.728 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.986 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.986 "name": "BaseBdev3", 00:10:23.986 "aliases": [ 00:10:23.986 "b5e12c03-42f3-11ef-9f7f-e9a656123a8b" 00:10:23.986 ], 00:10:23.986 "product_name": "Malloc disk", 00:10:23.986 "block_size": 512, 00:10:23.986 "num_blocks": 65536, 00:10:23.986 "uuid": "b5e12c03-42f3-11ef-9f7f-e9a656123a8b", 00:10:23.986 "assigned_rate_limits": { 00:10:23.986 "rw_ios_per_sec": 0, 00:10:23.986 "rw_mbytes_per_sec": 0, 00:10:23.986 "r_mbytes_per_sec": 0, 00:10:23.986 "w_mbytes_per_sec": 0 00:10:23.986 }, 00:10:23.986 "claimed": true, 00:10:23.986 "claim_type": "exclusive_write", 00:10:23.986 "zoned": false, 00:10:23.986 "supported_io_types": { 00:10:23.986 "read": true, 00:10:23.986 "write": true, 00:10:23.986 "unmap": true, 00:10:23.986 "flush": true, 00:10:23.986 "reset": true, 00:10:23.986 "nvme_admin": false, 00:10:23.986 "nvme_io": false, 00:10:23.986 "nvme_io_md": false, 00:10:23.986 "write_zeroes": true, 00:10:23.986 "zcopy": true, 00:10:23.986 "get_zone_info": false, 00:10:23.986 "zone_management": false, 00:10:23.986 "zone_append": false, 00:10:23.986 "compare": false, 00:10:23.986 "compare_and_write": false, 00:10:23.986 "abort": true, 00:10:23.986 "seek_hole": false, 00:10:23.986 "seek_data": false, 00:10:23.986 "copy": true, 00:10:23.986 "nvme_iov_md": false 00:10:23.987 }, 00:10:23.987 "memory_domains": [ 00:10:23.987 { 00:10:23.987 "dma_device_id": "system", 00:10:23.987 "dma_device_type": 1 00:10:23.987 }, 00:10:23.987 { 00:10:23.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.987 "dma_device_type": 2 00:10:23.987 } 00:10:23.987 ], 00:10:23.987 "driver_specific": {} 00:10:23.987 }' 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.987 21:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:24.245 [2024-07-15 21:46:39.209461] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:24.245 [2024-07-15 21:46:39.209490] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.245 [2024-07-15 21:46:39.209527] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.245 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.504 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.504 "name": "Existed_Raid", 00:10:24.504 "uuid": "b5e132dc-42f3-11ef-9f7f-e9a656123a8b", 00:10:24.504 "strip_size_kb": 64, 00:10:24.504 "state": "offline", 00:10:24.504 "raid_level": "concat", 00:10:24.504 "superblock": false, 00:10:24.504 "num_base_bdevs": 3, 00:10:24.504 "num_base_bdevs_discovered": 2, 00:10:24.504 "num_base_bdevs_operational": 2, 00:10:24.504 "base_bdevs_list": [ 00:10:24.504 { 00:10:24.504 "name": null, 00:10:24.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.504 "is_configured": false, 00:10:24.504 "data_offset": 0, 00:10:24.504 "data_size": 65536 00:10:24.504 }, 00:10:24.504 { 00:10:24.504 "name": "BaseBdev2", 00:10:24.504 "uuid": "b4f31dbc-42f3-11ef-9f7f-e9a656123a8b", 00:10:24.504 "is_configured": true, 00:10:24.504 "data_offset": 0, 00:10:24.504 "data_size": 65536 00:10:24.504 }, 00:10:24.504 { 00:10:24.504 "name": "BaseBdev3", 00:10:24.504 "uuid": "b5e12c03-42f3-11ef-9f7f-e9a656123a8b", 00:10:24.504 "is_configured": true, 00:10:24.504 "data_offset": 0, 00:10:24.504 "data_size": 65536 00:10:24.504 } 00:10:24.504 ] 00:10:24.504 }' 00:10:24.504 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
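# Aside (not from the captured run): the "offline" state verified above follows from
# concat carrying no redundancy -- the trace shows has_redundancy returning 1 for
# concat, so deleting BaseBdev1 out from under the online array deconfigures it.
# A minimal sketch of the equivalent manual state check, assuming the rpc.py path,
# socket, and raid name used throughout this log:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == offline ]]                # expect: offline
(( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ))  # expect: 2 of 3 remaining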
00:10:24.504 21:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.763 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:24.763 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:24.763 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.763 21:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:25.021 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:25.021 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.021 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:25.312 [2024-07-15 21:46:40.331762] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.312 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:25.312 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:25.312 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.312 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:25.570 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:25.570 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.570 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:25.828 [2024-07-15 21:46:40.833674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.828 [2024-07-15 21:46:40.833734] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1e71234a00 name Existed_Raid, state offline 00:10:25.828 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:25.828 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:25.828 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.828 21:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:26.087 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.345 
BaseBdev2 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:26.345 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:26.604 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.862 [ 00:10:26.862 { 00:10:26.862 "name": "BaseBdev2", 00:10:26.862 "aliases": [ 00:10:26.862 "b895b0b5-42f3-11ef-9f7f-e9a656123a8b" 00:10:26.862 ], 00:10:26.862 "product_name": "Malloc disk", 00:10:26.862 "block_size": 512, 00:10:26.862 "num_blocks": 65536, 00:10:26.862 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:26.862 "assigned_rate_limits": { 00:10:26.862 "rw_ios_per_sec": 0, 00:10:26.862 "rw_mbytes_per_sec": 0, 00:10:26.862 "r_mbytes_per_sec": 0, 00:10:26.862 "w_mbytes_per_sec": 0 00:10:26.862 }, 00:10:26.862 "claimed": false, 00:10:26.862 "zoned": false, 00:10:26.862 "supported_io_types": { 00:10:26.862 "read": true, 00:10:26.862 "write": true, 00:10:26.862 "unmap": true, 00:10:26.862 "flush": true, 00:10:26.862 "reset": true, 00:10:26.862 "nvme_admin": false, 00:10:26.862 "nvme_io": false, 00:10:26.862 "nvme_io_md": false, 00:10:26.862 "write_zeroes": true, 00:10:26.862 "zcopy": true, 00:10:26.862 "get_zone_info": false, 00:10:26.862 "zone_management": false, 00:10:26.862 "zone_append": false, 00:10:26.862 "compare": false, 00:10:26.862 "compare_and_write": false, 00:10:26.862 "abort": true, 00:10:26.862 "seek_hole": false, 00:10:26.862 "seek_data": false, 00:10:26.862 "copy": true, 00:10:26.862 "nvme_iov_md": false 00:10:26.862 }, 00:10:26.862 "memory_domains": [ 00:10:26.862 { 00:10:26.862 "dma_device_id": "system", 00:10:26.862 "dma_device_type": 1 00:10:26.862 }, 00:10:26.862 { 00:10:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.862 "dma_device_type": 2 00:10:26.862 } 00:10:26.862 ], 00:10:26.862 "driver_specific": {} 00:10:26.862 } 00:10:26.862 ] 00:10:26.862 21:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:26.862 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:26.862 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:26.862 21:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.121 BaseBdev3 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local 
bdev_timeout= 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:27.121 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.379 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.638 [ 00:10:27.638 { 00:10:27.638 "name": "BaseBdev3", 00:10:27.638 "aliases": [ 00:10:27.638 "b9105f27-42f3-11ef-9f7f-e9a656123a8b" 00:10:27.638 ], 00:10:27.638 "product_name": "Malloc disk", 00:10:27.638 "block_size": 512, 00:10:27.638 "num_blocks": 65536, 00:10:27.638 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:27.638 "assigned_rate_limits": { 00:10:27.638 "rw_ios_per_sec": 0, 00:10:27.638 "rw_mbytes_per_sec": 0, 00:10:27.638 "r_mbytes_per_sec": 0, 00:10:27.638 "w_mbytes_per_sec": 0 00:10:27.638 }, 00:10:27.638 "claimed": false, 00:10:27.638 "zoned": false, 00:10:27.638 "supported_io_types": { 00:10:27.638 "read": true, 00:10:27.638 "write": true, 00:10:27.638 "unmap": true, 00:10:27.638 "flush": true, 00:10:27.638 "reset": true, 00:10:27.638 "nvme_admin": false, 00:10:27.638 "nvme_io": false, 00:10:27.638 "nvme_io_md": false, 00:10:27.638 "write_zeroes": true, 00:10:27.638 "zcopy": true, 00:10:27.638 "get_zone_info": false, 00:10:27.638 "zone_management": false, 00:10:27.638 "zone_append": false, 00:10:27.638 "compare": false, 00:10:27.638 "compare_and_write": false, 00:10:27.638 "abort": true, 00:10:27.638 "seek_hole": false, 00:10:27.638 "seek_data": false, 00:10:27.638 "copy": true, 00:10:27.638 "nvme_iov_md": false 00:10:27.638 }, 00:10:27.638 "memory_domains": [ 00:10:27.638 { 00:10:27.638 "dma_device_id": "system", 00:10:27.638 "dma_device_type": 1 00:10:27.638 }, 00:10:27.638 { 00:10:27.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.638 "dma_device_type": 2 00:10:27.638 } 00:10:27.638 ], 00:10:27.638 "driver_specific": {} 00:10:27.638 } 00:10:27.638 ] 00:10:27.638 21:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:27.638 21:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:27.638 21:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:27.638 21:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:27.897 [2024-07-15 21:46:43.055904] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.897 [2024-07-15 21:46:43.055965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.897 [2024-07-15 21:46:43.055975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.897 [2024-07-15 21:46:43.056577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.897 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.464 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.464 "name": "Existed_Raid", 00:10:28.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.464 "strip_size_kb": 64, 00:10:28.464 "state": "configuring", 00:10:28.464 "raid_level": "concat", 00:10:28.464 "superblock": false, 00:10:28.464 "num_base_bdevs": 3, 00:10:28.464 "num_base_bdevs_discovered": 2, 00:10:28.464 "num_base_bdevs_operational": 3, 00:10:28.464 "base_bdevs_list": [ 00:10:28.464 { 00:10:28.464 "name": "BaseBdev1", 00:10:28.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.464 "is_configured": false, 00:10:28.464 "data_offset": 0, 00:10:28.464 "data_size": 0 00:10:28.464 }, 00:10:28.464 { 00:10:28.464 "name": "BaseBdev2", 00:10:28.464 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:28.464 "is_configured": true, 00:10:28.464 "data_offset": 0, 00:10:28.464 "data_size": 65536 00:10:28.464 }, 00:10:28.464 { 00:10:28.464 "name": "BaseBdev3", 00:10:28.464 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:28.464 "is_configured": true, 00:10:28.464 "data_offset": 0, 00:10:28.464 "data_size": 65536 00:10:28.464 } 00:10:28.464 ] 00:10:28.464 }' 00:10:28.464 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.464 21:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.721 21:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:28.979 [2024-07-15 21:46:44.111920] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:28.979 
21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.979 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.545 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.545 "name": "Existed_Raid", 00:10:29.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.545 "strip_size_kb": 64, 00:10:29.545 "state": "configuring", 00:10:29.545 "raid_level": "concat", 00:10:29.545 "superblock": false, 00:10:29.545 "num_base_bdevs": 3, 00:10:29.545 "num_base_bdevs_discovered": 1, 00:10:29.545 "num_base_bdevs_operational": 3, 00:10:29.545 "base_bdevs_list": [ 00:10:29.545 { 00:10:29.545 "name": "BaseBdev1", 00:10:29.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.545 "is_configured": false, 00:10:29.545 "data_offset": 0, 00:10:29.545 "data_size": 0 00:10:29.545 }, 00:10:29.545 { 00:10:29.545 "name": null, 00:10:29.545 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:29.545 "is_configured": false, 00:10:29.545 "data_offset": 0, 00:10:29.545 "data_size": 65536 00:10:29.545 }, 00:10:29.545 { 00:10:29.545 "name": "BaseBdev3", 00:10:29.545 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:29.545 "is_configured": true, 00:10:29.545 "data_offset": 0, 00:10:29.545 "data_size": 65536 00:10:29.545 } 00:10:29.545 ] 00:10:29.545 }' 00:10:29.545 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.545 21:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.802 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.802 21:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.061 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:30.061 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.320 [2024-07-15 21:46:45.364144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.320 BaseBdev1 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@893 -- # local i 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:30.320 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.579 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.836 [ 00:10:30.836 { 00:10:30.836 "name": "BaseBdev1", 00:10:30.836 "aliases": [ 00:10:30.836 "bb009c31-42f3-11ef-9f7f-e9a656123a8b" 00:10:30.836 ], 00:10:30.836 "product_name": "Malloc disk", 00:10:30.836 "block_size": 512, 00:10:30.836 "num_blocks": 65536, 00:10:30.836 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:30.836 "assigned_rate_limits": { 00:10:30.836 "rw_ios_per_sec": 0, 00:10:30.836 "rw_mbytes_per_sec": 0, 00:10:30.836 "r_mbytes_per_sec": 0, 00:10:30.836 "w_mbytes_per_sec": 0 00:10:30.836 }, 00:10:30.836 "claimed": true, 00:10:30.836 "claim_type": "exclusive_write", 00:10:30.836 "zoned": false, 00:10:30.836 "supported_io_types": { 00:10:30.836 "read": true, 00:10:30.836 "write": true, 00:10:30.836 "unmap": true, 00:10:30.836 "flush": true, 00:10:30.836 "reset": true, 00:10:30.836 "nvme_admin": false, 00:10:30.836 "nvme_io": false, 00:10:30.836 "nvme_io_md": false, 00:10:30.836 "write_zeroes": true, 00:10:30.836 "zcopy": true, 00:10:30.836 "get_zone_info": false, 00:10:30.836 "zone_management": false, 00:10:30.836 "zone_append": false, 00:10:30.836 "compare": false, 00:10:30.836 "compare_and_write": false, 00:10:30.836 "abort": true, 00:10:30.836 "seek_hole": false, 00:10:30.836 "seek_data": false, 00:10:30.836 "copy": true, 00:10:30.836 "nvme_iov_md": false 00:10:30.836 }, 00:10:30.836 "memory_domains": [ 00:10:30.836 { 00:10:30.836 "dma_device_id": "system", 00:10:30.836 "dma_device_type": 1 00:10:30.836 }, 00:10:30.836 { 00:10:30.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.836 "dma_device_type": 2 00:10:30.836 } 00:10:30.836 ], 00:10:30.836 "driver_specific": {} 00:10:30.836 } 00:10:30.836 ] 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.836 21:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.094 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.094 "name": "Existed_Raid", 00:10:31.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.094 "strip_size_kb": 64, 00:10:31.094 "state": "configuring", 00:10:31.094 "raid_level": "concat", 00:10:31.094 "superblock": false, 00:10:31.094 "num_base_bdevs": 3, 00:10:31.094 "num_base_bdevs_discovered": 2, 00:10:31.094 "num_base_bdevs_operational": 3, 00:10:31.094 "base_bdevs_list": [ 00:10:31.094 { 00:10:31.094 "name": "BaseBdev1", 00:10:31.094 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:31.094 "is_configured": true, 00:10:31.094 "data_offset": 0, 00:10:31.094 "data_size": 65536 00:10:31.094 }, 00:10:31.094 { 00:10:31.094 "name": null, 00:10:31.094 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:31.094 "is_configured": false, 00:10:31.094 "data_offset": 0, 00:10:31.094 "data_size": 65536 00:10:31.094 }, 00:10:31.094 { 00:10:31.094 "name": "BaseBdev3", 00:10:31.094 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:31.094 "is_configured": true, 00:10:31.094 "data_offset": 0, 00:10:31.094 "data_size": 65536 00:10:31.094 } 00:10:31.094 ] 00:10:31.094 }' 00:10:31.094 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.094 21:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.659 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.659 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.659 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:31.659 21:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:31.917 [2024-07-15 21:46:47.064090] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
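# Aside (not from the captured run): while the array is still "configuring", base
# bdevs can be detached and re-attached in place without destroying the raid. A
# hedged sketch of the cycle the trace performs around this point, using only the
# RPCs exercised in this log (same socket, raid name, and bdev names assumed):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev BaseBdev3       # slot 2 becomes name=null, is_configured=false
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'  # expect: false
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3  # bdev is claimed again and the slot re-fills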
00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.917 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.175 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.175 "name": "Existed_Raid", 00:10:32.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.175 "strip_size_kb": 64, 00:10:32.175 "state": "configuring", 00:10:32.175 "raid_level": "concat", 00:10:32.175 "superblock": false, 00:10:32.175 "num_base_bdevs": 3, 00:10:32.175 "num_base_bdevs_discovered": 1, 00:10:32.175 "num_base_bdevs_operational": 3, 00:10:32.175 "base_bdevs_list": [ 00:10:32.175 { 00:10:32.175 "name": "BaseBdev1", 00:10:32.175 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:32.175 "is_configured": true, 00:10:32.175 "data_offset": 0, 00:10:32.175 "data_size": 65536 00:10:32.175 }, 00:10:32.175 { 00:10:32.175 "name": null, 00:10:32.175 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:32.175 "is_configured": false, 00:10:32.175 "data_offset": 0, 00:10:32.175 "data_size": 65536 00:10:32.175 }, 00:10:32.175 { 00:10:32.175 "name": null, 00:10:32.175 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:32.175 "is_configured": false, 00:10:32.175 "data_offset": 0, 00:10:32.175 "data_size": 65536 00:10:32.175 } 00:10:32.175 ] 00:10:32.175 }' 00:10:32.175 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.175 21:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.432 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.432 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.997 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:32.997 21:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.997 [2024-07-15 21:46:48.136166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.997 21:46:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.997 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.255 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.255 "name": "Existed_Raid", 00:10:33.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.255 "strip_size_kb": 64, 00:10:33.255 "state": "configuring", 00:10:33.255 "raid_level": "concat", 00:10:33.255 "superblock": false, 00:10:33.255 "num_base_bdevs": 3, 00:10:33.255 "num_base_bdevs_discovered": 2, 00:10:33.255 "num_base_bdevs_operational": 3, 00:10:33.255 "base_bdevs_list": [ 00:10:33.255 { 00:10:33.255 "name": "BaseBdev1", 00:10:33.255 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:33.255 "is_configured": true, 00:10:33.255 "data_offset": 0, 00:10:33.255 "data_size": 65536 00:10:33.255 }, 00:10:33.255 { 00:10:33.255 "name": null, 00:10:33.255 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:33.255 "is_configured": false, 00:10:33.255 "data_offset": 0, 00:10:33.255 "data_size": 65536 00:10:33.255 }, 00:10:33.255 { 00:10:33.255 "name": "BaseBdev3", 00:10:33.255 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:33.255 "is_configured": true, 00:10:33.255 "data_offset": 0, 00:10:33.255 "data_size": 65536 00:10:33.255 } 00:10:33.255 ] 00:10:33.255 }' 00:10:33.514 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.514 21:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.772 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.772 21:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:34.037 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:34.037 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:34.295 [2024-07-15 21:46:49.328467] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.295 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.552 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.552 "name": "Existed_Raid", 00:10:34.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.552 "strip_size_kb": 64, 00:10:34.552 "state": "configuring", 00:10:34.552 "raid_level": "concat", 00:10:34.552 "superblock": false, 00:10:34.552 "num_base_bdevs": 3, 00:10:34.552 "num_base_bdevs_discovered": 1, 00:10:34.552 "num_base_bdevs_operational": 3, 00:10:34.552 "base_bdevs_list": [ 00:10:34.552 { 00:10:34.552 "name": null, 00:10:34.552 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:34.552 "is_configured": false, 00:10:34.552 "data_offset": 0, 00:10:34.552 "data_size": 65536 00:10:34.552 }, 00:10:34.552 { 00:10:34.552 "name": null, 00:10:34.552 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:34.552 "is_configured": false, 00:10:34.552 "data_offset": 0, 00:10:34.552 "data_size": 65536 00:10:34.552 }, 00:10:34.552 { 00:10:34.552 "name": "BaseBdev3", 00:10:34.552 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:34.552 "is_configured": true, 00:10:34.552 "data_offset": 0, 00:10:34.552 "data_size": 65536 00:10:34.552 } 00:10:34.552 ] 00:10:34.552 }' 00:10:34.552 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.552 21:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.808 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.808 21:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:35.065 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:35.065 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:35.322 [2024-07-15 21:46:50.494692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:35.579 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
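(Editor's note on the remove/add cycle above: pulling a member out of a configuring array leaves its slot unconfigured (name null in base_bdevs_list), re-adding the same bdev re-claims it, and deleting a member's backing malloc bdev empties its slot until a bdev with the matching UUID reappears, which is what the trace shows next with NewBaseBdev. A condensed replay of those steps, using only RPC calls and arguments that appear in this log, and assuming Existed_Raid is already in the configuring state shown above:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Cycle slot 2 (BaseBdev3) out of and back into the array:
  rpc bdev_raid_remove_base_bdev BaseBdev3             # base_bdevs_list[2] goes unconfigured
  rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # the bdev is claimed back into its slot
  # Empty slot 0 by deleting its backing device outright:
  rpc bdev_malloc_delete BaseBdev1
  # Recreate the backing device under a new name but the old member UUID; per
  # the trace, NewBaseBdev is then claimed into base_bdevs_list[0] and the
  # array configures to online:
  rpc bdev_malloc_create 32 512 -b NewBaseBdev -u bb009c31-42f3-11ef-9f7f-e9a656123a8b

The trace resumes below.)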
00:10:35.580 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.580 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.580 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.580 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.580 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.837 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.837 "name": "Existed_Raid", 00:10:35.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.837 "strip_size_kb": 64, 00:10:35.837 "state": "configuring", 00:10:35.837 "raid_level": "concat", 00:10:35.837 "superblock": false, 00:10:35.837 "num_base_bdevs": 3, 00:10:35.837 "num_base_bdevs_discovered": 2, 00:10:35.837 "num_base_bdevs_operational": 3, 00:10:35.837 "base_bdevs_list": [ 00:10:35.837 { 00:10:35.837 "name": null, 00:10:35.837 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:35.837 "is_configured": false, 00:10:35.837 "data_offset": 0, 00:10:35.837 "data_size": 65536 00:10:35.837 }, 00:10:35.837 { 00:10:35.837 "name": "BaseBdev2", 00:10:35.837 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:35.837 "is_configured": true, 00:10:35.837 "data_offset": 0, 00:10:35.837 "data_size": 65536 00:10:35.837 }, 00:10:35.837 { 00:10:35.837 "name": "BaseBdev3", 00:10:35.837 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:35.837 "is_configured": true, 00:10:35.837 "data_offset": 0, 00:10:35.837 "data_size": 65536 00:10:35.837 } 00:10:35.837 ] 00:10:35.837 }' 00:10:35.837 21:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.837 21:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.094 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.094 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.351 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:36.351 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.351 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:36.915 21:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bb009c31-42f3-11ef-9f7f-e9a656123a8b 00:10:37.171 [2024-07-15 21:46:52.150904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:37.171 [2024-07-15 21:46:52.150948] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d1e71234a00 00:10:37.171 [2024-07-15 21:46:52.150955] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:37.172 [2024-07-15 21:46:52.150987] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d1e71297e20 00:10:37.172 [2024-07-15 
21:46:52.151069] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d1e71234a00 00:10:37.172 [2024-07-15 21:46:52.151076] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d1e71234a00 00:10:37.172 [2024-07-15 21:46:52.151122] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.172 NewBaseBdev 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:37.172 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:37.428 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:37.686 [ 00:10:37.686 { 00:10:37.686 "name": "NewBaseBdev", 00:10:37.686 "aliases": [ 00:10:37.686 "bb009c31-42f3-11ef-9f7f-e9a656123a8b" 00:10:37.686 ], 00:10:37.686 "product_name": "Malloc disk", 00:10:37.686 "block_size": 512, 00:10:37.686 "num_blocks": 65536, 00:10:37.686 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:37.686 "assigned_rate_limits": { 00:10:37.686 "rw_ios_per_sec": 0, 00:10:37.686 "rw_mbytes_per_sec": 0, 00:10:37.686 "r_mbytes_per_sec": 0, 00:10:37.686 "w_mbytes_per_sec": 0 00:10:37.686 }, 00:10:37.686 "claimed": true, 00:10:37.686 "claim_type": "exclusive_write", 00:10:37.686 "zoned": false, 00:10:37.686 "supported_io_types": { 00:10:37.686 "read": true, 00:10:37.686 "write": true, 00:10:37.686 "unmap": true, 00:10:37.686 "flush": true, 00:10:37.686 "reset": true, 00:10:37.686 "nvme_admin": false, 00:10:37.686 "nvme_io": false, 00:10:37.686 "nvme_io_md": false, 00:10:37.686 "write_zeroes": true, 00:10:37.686 "zcopy": true, 00:10:37.686 "get_zone_info": false, 00:10:37.686 "zone_management": false, 00:10:37.686 "zone_append": false, 00:10:37.686 "compare": false, 00:10:37.686 "compare_and_write": false, 00:10:37.686 "abort": true, 00:10:37.686 "seek_hole": false, 00:10:37.686 "seek_data": false, 00:10:37.686 "copy": true, 00:10:37.686 "nvme_iov_md": false 00:10:37.686 }, 00:10:37.686 "memory_domains": [ 00:10:37.686 { 00:10:37.686 "dma_device_id": "system", 00:10:37.686 "dma_device_type": 1 00:10:37.686 }, 00:10:37.686 { 00:10:37.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.686 "dma_device_type": 2 00:10:37.686 } 00:10:37.686 ], 00:10:37.686 "driver_specific": {} 00:10:37.686 } 00:10:37.686 ] 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.686 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.687 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.687 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.687 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.687 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.687 21:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.943 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.943 "name": "Existed_Raid", 00:10:37.943 "uuid": "bf0c370b-42f3-11ef-9f7f-e9a656123a8b", 00:10:37.943 "strip_size_kb": 64, 00:10:37.943 "state": "online", 00:10:37.943 "raid_level": "concat", 00:10:37.943 "superblock": false, 00:10:37.943 "num_base_bdevs": 3, 00:10:37.943 "num_base_bdevs_discovered": 3, 00:10:37.943 "num_base_bdevs_operational": 3, 00:10:37.943 "base_bdevs_list": [ 00:10:37.943 { 00:10:37.943 "name": "NewBaseBdev", 00:10:37.943 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:37.943 "is_configured": true, 00:10:37.943 "data_offset": 0, 00:10:37.943 "data_size": 65536 00:10:37.943 }, 00:10:37.943 { 00:10:37.943 "name": "BaseBdev2", 00:10:37.943 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:37.943 "is_configured": true, 00:10:37.943 "data_offset": 0, 00:10:37.943 "data_size": 65536 00:10:37.943 }, 00:10:37.943 { 00:10:37.943 "name": "BaseBdev3", 00:10:37.943 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:37.943 "is_configured": true, 00:10:37.943 "data_offset": 0, 00:10:37.943 "data_size": 65536 00:10:37.943 } 00:10:37.943 ] 00:10:37.943 }' 00:10:37.943 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.944 21:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:38.207 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:38.489 [2024-07-15 21:46:53.566806] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:38.489 "name": "Existed_Raid", 00:10:38.489 "aliases": [ 00:10:38.489 "bf0c370b-42f3-11ef-9f7f-e9a656123a8b" 00:10:38.489 ], 00:10:38.489 "product_name": "Raid Volume", 00:10:38.489 "block_size": 512, 00:10:38.489 "num_blocks": 196608, 00:10:38.489 "uuid": "bf0c370b-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.489 "assigned_rate_limits": { 00:10:38.489 "rw_ios_per_sec": 0, 00:10:38.489 "rw_mbytes_per_sec": 0, 00:10:38.489 "r_mbytes_per_sec": 0, 00:10:38.489 "w_mbytes_per_sec": 0 00:10:38.489 }, 00:10:38.489 "claimed": false, 00:10:38.489 "zoned": false, 00:10:38.489 "supported_io_types": { 00:10:38.489 "read": true, 00:10:38.489 "write": true, 00:10:38.489 "unmap": true, 00:10:38.489 "flush": true, 00:10:38.489 "reset": true, 00:10:38.489 "nvme_admin": false, 00:10:38.489 "nvme_io": false, 00:10:38.489 "nvme_io_md": false, 00:10:38.489 "write_zeroes": true, 00:10:38.489 "zcopy": false, 00:10:38.489 "get_zone_info": false, 00:10:38.489 "zone_management": false, 00:10:38.489 "zone_append": false, 00:10:38.489 "compare": false, 00:10:38.489 "compare_and_write": false, 00:10:38.489 "abort": false, 00:10:38.489 "seek_hole": false, 00:10:38.489 "seek_data": false, 00:10:38.489 "copy": false, 00:10:38.489 "nvme_iov_md": false 00:10:38.489 }, 00:10:38.489 "memory_domains": [ 00:10:38.489 { 00:10:38.489 "dma_device_id": "system", 00:10:38.489 "dma_device_type": 1 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.489 "dma_device_type": 2 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "dma_device_id": "system", 00:10:38.489 "dma_device_type": 1 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.489 "dma_device_type": 2 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "dma_device_id": "system", 00:10:38.489 "dma_device_type": 1 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.489 "dma_device_type": 2 00:10:38.489 } 00:10:38.489 ], 00:10:38.489 "driver_specific": { 00:10:38.489 "raid": { 00:10:38.489 "uuid": "bf0c370b-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.489 "strip_size_kb": 64, 00:10:38.489 "state": "online", 00:10:38.489 "raid_level": "concat", 00:10:38.489 "superblock": false, 00:10:38.489 "num_base_bdevs": 3, 00:10:38.489 "num_base_bdevs_discovered": 3, 00:10:38.489 "num_base_bdevs_operational": 3, 00:10:38.489 "base_bdevs_list": [ 00:10:38.489 { 00:10:38.489 "name": "NewBaseBdev", 00:10:38.489 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.489 "is_configured": true, 00:10:38.489 "data_offset": 0, 00:10:38.489 "data_size": 65536 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "name": "BaseBdev2", 00:10:38.489 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.489 "is_configured": true, 00:10:38.489 "data_offset": 0, 00:10:38.489 "data_size": 65536 00:10:38.489 }, 00:10:38.489 { 00:10:38.489 "name": "BaseBdev3", 00:10:38.489 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.489 "is_configured": true, 00:10:38.489 "data_offset": 0, 00:10:38.489 "data_size": 65536 00:10:38.489 } 00:10:38.489 ] 00:10:38.489 } 00:10:38.489 } 00:10:38.489 }' 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:38.489 BaseBdev2 00:10:38.489 BaseBdev3' 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:38.489 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:38.763 "name": "NewBaseBdev", 00:10:38.763 "aliases": [ 00:10:38.763 "bb009c31-42f3-11ef-9f7f-e9a656123a8b" 00:10:38.763 ], 00:10:38.763 "product_name": "Malloc disk", 00:10:38.763 "block_size": 512, 00:10:38.763 "num_blocks": 65536, 00:10:38.763 "uuid": "bb009c31-42f3-11ef-9f7f-e9a656123a8b", 00:10:38.763 "assigned_rate_limits": { 00:10:38.763 "rw_ios_per_sec": 0, 00:10:38.763 "rw_mbytes_per_sec": 0, 00:10:38.763 "r_mbytes_per_sec": 0, 00:10:38.763 "w_mbytes_per_sec": 0 00:10:38.763 }, 00:10:38.763 "claimed": true, 00:10:38.763 "claim_type": "exclusive_write", 00:10:38.763 "zoned": false, 00:10:38.763 "supported_io_types": { 00:10:38.763 "read": true, 00:10:38.763 "write": true, 00:10:38.763 "unmap": true, 00:10:38.763 "flush": true, 00:10:38.763 "reset": true, 00:10:38.763 "nvme_admin": false, 00:10:38.763 "nvme_io": false, 00:10:38.763 "nvme_io_md": false, 00:10:38.763 "write_zeroes": true, 00:10:38.763 "zcopy": true, 00:10:38.763 "get_zone_info": false, 00:10:38.763 "zone_management": false, 00:10:38.763 "zone_append": false, 00:10:38.763 "compare": false, 00:10:38.763 "compare_and_write": false, 00:10:38.763 "abort": true, 00:10:38.763 "seek_hole": false, 00:10:38.763 "seek_data": false, 00:10:38.763 "copy": true, 00:10:38.763 "nvme_iov_md": false 00:10:38.763 }, 00:10:38.763 "memory_domains": [ 00:10:38.763 { 00:10:38.763 "dma_device_id": "system", 00:10:38.763 "dma_device_type": 1 00:10:38.763 }, 00:10:38.763 { 00:10:38.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.763 "dma_device_type": 2 00:10:38.763 } 00:10:38.763 ], 00:10:38.763 "driver_specific": {} 00:10:38.763 }' 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:38.763 21:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:39.026 "name": "BaseBdev2", 00:10:39.026 "aliases": [ 00:10:39.026 "b895b0b5-42f3-11ef-9f7f-e9a656123a8b" 00:10:39.026 ], 00:10:39.026 "product_name": "Malloc disk", 00:10:39.026 "block_size": 512, 00:10:39.026 "num_blocks": 65536, 00:10:39.026 "uuid": "b895b0b5-42f3-11ef-9f7f-e9a656123a8b", 00:10:39.026 "assigned_rate_limits": { 00:10:39.026 "rw_ios_per_sec": 0, 00:10:39.026 "rw_mbytes_per_sec": 0, 00:10:39.026 "r_mbytes_per_sec": 0, 00:10:39.026 "w_mbytes_per_sec": 0 00:10:39.026 }, 00:10:39.026 "claimed": true, 00:10:39.026 "claim_type": "exclusive_write", 00:10:39.026 "zoned": false, 00:10:39.026 "supported_io_types": { 00:10:39.026 "read": true, 00:10:39.026 "write": true, 00:10:39.026 "unmap": true, 00:10:39.026 "flush": true, 00:10:39.026 "reset": true, 00:10:39.026 "nvme_admin": false, 00:10:39.026 "nvme_io": false, 00:10:39.026 "nvme_io_md": false, 00:10:39.026 "write_zeroes": true, 00:10:39.026 "zcopy": true, 00:10:39.026 "get_zone_info": false, 00:10:39.026 "zone_management": false, 00:10:39.026 "zone_append": false, 00:10:39.026 "compare": false, 00:10:39.026 "compare_and_write": false, 00:10:39.026 "abort": true, 00:10:39.026 "seek_hole": false, 00:10:39.026 "seek_data": false, 00:10:39.026 "copy": true, 00:10:39.026 "nvme_iov_md": false 00:10:39.026 }, 00:10:39.026 "memory_domains": [ 00:10:39.026 { 00:10:39.026 "dma_device_id": "system", 00:10:39.026 "dma_device_type": 1 00:10:39.026 }, 00:10:39.026 { 00:10:39.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.026 "dma_device_type": 2 00:10:39.026 } 00:10:39.026 ], 00:10:39.026 "driver_specific": {} 00:10:39.026 }' 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:39.026 21:46:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:39.284 "name": "BaseBdev3", 00:10:39.284 "aliases": [ 00:10:39.284 "b9105f27-42f3-11ef-9f7f-e9a656123a8b" 00:10:39.284 ], 00:10:39.284 "product_name": "Malloc disk", 00:10:39.284 "block_size": 512, 00:10:39.284 "num_blocks": 65536, 00:10:39.284 "uuid": "b9105f27-42f3-11ef-9f7f-e9a656123a8b", 00:10:39.284 "assigned_rate_limits": { 00:10:39.284 "rw_ios_per_sec": 0, 00:10:39.284 "rw_mbytes_per_sec": 0, 00:10:39.284 "r_mbytes_per_sec": 0, 00:10:39.284 "w_mbytes_per_sec": 0 00:10:39.284 }, 00:10:39.284 "claimed": true, 00:10:39.284 "claim_type": "exclusive_write", 00:10:39.284 "zoned": false, 00:10:39.284 "supported_io_types": { 00:10:39.284 "read": true, 00:10:39.284 "write": true, 00:10:39.284 "unmap": true, 00:10:39.284 "flush": true, 00:10:39.284 "reset": true, 00:10:39.284 "nvme_admin": false, 00:10:39.284 "nvme_io": false, 00:10:39.284 "nvme_io_md": false, 00:10:39.284 "write_zeroes": true, 00:10:39.284 "zcopy": true, 00:10:39.284 "get_zone_info": false, 00:10:39.284 "zone_management": false, 00:10:39.284 "zone_append": false, 00:10:39.284 "compare": false, 00:10:39.284 "compare_and_write": false, 00:10:39.284 "abort": true, 00:10:39.284 "seek_hole": false, 00:10:39.284 "seek_data": false, 00:10:39.284 "copy": true, 00:10:39.284 "nvme_iov_md": false 00:10:39.284 }, 00:10:39.284 "memory_domains": [ 00:10:39.284 { 00:10:39.284 "dma_device_id": "system", 00:10:39.284 "dma_device_type": 1 00:10:39.284 }, 00:10:39.284 { 00:10:39.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.284 "dma_device_type": 2 00:10:39.284 } 00:10:39.284 ], 00:10:39.284 "driver_specific": {} 00:10:39.284 }' 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:39.284 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:39.543 [2024-07-15 21:46:54.662858] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.543 [2024-07-15 21:46:54.662884] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.543 [2024-07-15 
21:46:54.662921] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.543 [2024-07-15 21:46:54.662935] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.543 [2024-07-15 21:46:54.662940] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d1e71234a00 name Existed_Raid, state offline 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54048 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 54048 ']' 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 54048 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 54048 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:10:39.543 killing process with pid 54048 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 54048' 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 54048 00:10:39.543 [2024-07-15 21:46:54.692531] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.543 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 54048 00:10:39.543 [2024-07-15 21:46:54.709471] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:39.802 00:10:39.802 real 0m24.454s 00:10:39.802 user 0m44.718s 00:10:39.802 sys 0m3.323s 00:10:39.802 ************************************ 00:10:39.802 END TEST raid_state_function_test 00:10:39.802 ************************************ 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.802 21:46:54 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:10:39.802 21:46:54 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:39.802 21:46:54 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:10:39.802 21:46:54 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:39.802 21:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.802 ************************************ 00:10:39.802 START TEST raid_state_function_test_sb 00:10:39.802 ************************************ 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test concat 3 true 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:39.802 21:46:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54777 00:10:39.802 Process raid pid: 54777 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54777' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54777 /var/tmp/spdk-raid.sock 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 54777 ']' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@828 -- # local max_retries=100 00:10:39.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:39.802 21:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.802 [2024-07-15 21:46:54.949366] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:39.802 [2024-07-15 21:46:54.949550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:40.738 EAL: TSC is not safe to use in SMP mode 00:10:40.738 EAL: TSC is not invariant 00:10:40.738 [2024-07-15 21:46:55.655370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.738 [2024-07-15 21:46:55.739011] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:40.738 [2024-07-15 21:46:55.741066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.738 [2024-07-15 21:46:55.741833] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.738 [2024-07-15 21:46:55.741847] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.996 21:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:40.996 21:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:10:40.997 21:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:41.255 [2024-07-15 21:46:56.248877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.255 [2024-07-15 21:46:56.248946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.255 [2024-07-15 21:46:56.248951] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.255 [2024-07-15 21:46:56.248959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.255 [2024-07-15 21:46:56.248963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.255 [2024-07-15 21:46:56.248971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.255 
21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.255 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.256 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.256 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.256 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.514 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.514 "name": "Existed_Raid", 00:10:41.514 "uuid": "c17d8178-42f3-11ef-9f7f-e9a656123a8b", 00:10:41.514 "strip_size_kb": 64, 00:10:41.514 "state": "configuring", 00:10:41.514 "raid_level": "concat", 00:10:41.514 "superblock": true, 00:10:41.514 "num_base_bdevs": 3, 00:10:41.514 "num_base_bdevs_discovered": 0, 00:10:41.514 "num_base_bdevs_operational": 3, 00:10:41.514 "base_bdevs_list": [ 00:10:41.514 { 00:10:41.514 "name": "BaseBdev1", 00:10:41.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.514 "is_configured": false, 00:10:41.514 "data_offset": 0, 00:10:41.514 "data_size": 0 00:10:41.514 }, 00:10:41.514 { 00:10:41.514 "name": "BaseBdev2", 00:10:41.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.514 "is_configured": false, 00:10:41.514 "data_offset": 0, 00:10:41.514 "data_size": 0 00:10:41.514 }, 00:10:41.514 { 00:10:41.514 "name": "BaseBdev3", 00:10:41.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.514 "is_configured": false, 00:10:41.514 "data_offset": 0, 00:10:41.514 "data_size": 0 00:10:41.514 } 00:10:41.514 ] 00:10:41.514 }' 00:10:41.514 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.514 21:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.773 21:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:42.031 [2024-07-15 21:46:57.092850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.031 [2024-07-15 21:46:57.092877] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2873bfe34500 name Existed_Raid, state configuring 00:10:42.031 21:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:42.289 [2024-07-15 21:46:57.324871] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.289 [2024-07-15 21:46:57.324934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.289 [2024-07-15 21:46:57.324939] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.289 [2024-07-15 21:46:57.324947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.289 [2024-07-15 21:46:57.324950] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.289 
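(Editor's note on the superblock variant being set up here: the bdev_raid_create calls in this stretch succeed even though none of the base bdevs exist yet; each missing member is just recorded ("doesn't exist now") and the array waits in configuring until all three appear. A compact sketch of that declare-first flow, with RPC names and sizes taken from the trace and the loop as our condensation of the three separate create steps the log performs:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Declare the array first; -s reserves a superblock region on each member.
  rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Now create the members; each is claimed as it appears.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      rpc bdev_malloc_create 32 512 -b "$b"   # 32 MiB at 512 B/block = 65536 blocks
  done
  # With the superblock, each member gives up 2048 blocks (data_offset 2048,
  # data_size 63488 in the JSON below), so the concat volume assembles at
  # 3 * 63488 = 190464 blocks, matching the blockcnt logged later.
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The trace resumes below.)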
[2024-07-15 21:46:57.324958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.289 21:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.547 [2024-07-15 21:46:57.557812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.547 BaseBdev1 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:10:42.547 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:42.806 21:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.065 [ 00:10:43.065 { 00:10:43.065 "name": "BaseBdev1", 00:10:43.065 "aliases": [ 00:10:43.065 "c2451777-42f3-11ef-9f7f-e9a656123a8b" 00:10:43.065 ], 00:10:43.065 "product_name": "Malloc disk", 00:10:43.065 "block_size": 512, 00:10:43.065 "num_blocks": 65536, 00:10:43.065 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b", 00:10:43.065 "assigned_rate_limits": { 00:10:43.065 "rw_ios_per_sec": 0, 00:10:43.065 "rw_mbytes_per_sec": 0, 00:10:43.065 "r_mbytes_per_sec": 0, 00:10:43.065 "w_mbytes_per_sec": 0 00:10:43.065 }, 00:10:43.065 "claimed": true, 00:10:43.065 "claim_type": "exclusive_write", 00:10:43.065 "zoned": false, 00:10:43.065 "supported_io_types": { 00:10:43.065 "read": true, 00:10:43.065 "write": true, 00:10:43.065 "unmap": true, 00:10:43.065 "flush": true, 00:10:43.065 "reset": true, 00:10:43.065 "nvme_admin": false, 00:10:43.065 "nvme_io": false, 00:10:43.065 "nvme_io_md": false, 00:10:43.065 "write_zeroes": true, 00:10:43.065 "zcopy": true, 00:10:43.065 "get_zone_info": false, 00:10:43.065 "zone_management": false, 00:10:43.065 "zone_append": false, 00:10:43.065 "compare": false, 00:10:43.065 "compare_and_write": false, 00:10:43.065 "abort": true, 00:10:43.065 "seek_hole": false, 00:10:43.065 "seek_data": false, 00:10:43.065 "copy": true, 00:10:43.065 "nvme_iov_md": false 00:10:43.065 }, 00:10:43.065 "memory_domains": [ 00:10:43.065 { 00:10:43.065 "dma_device_id": "system", 00:10:43.065 "dma_device_type": 1 00:10:43.065 }, 00:10:43.065 { 00:10:43.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.065 "dma_device_type": 2 00:10:43.065 } 00:10:43.065 ], 00:10:43.065 "driver_specific": {} 00:10:43.065 } 00:10:43.065 ] 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.065 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.324 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.324 "name": "Existed_Raid", 00:10:43.324 "uuid": "c221b093-42f3-11ef-9f7f-e9a656123a8b", 00:10:43.324 "strip_size_kb": 64, 00:10:43.324 "state": "configuring", 00:10:43.324 "raid_level": "concat", 00:10:43.324 "superblock": true, 00:10:43.324 "num_base_bdevs": 3, 00:10:43.324 "num_base_bdevs_discovered": 1, 00:10:43.324 "num_base_bdevs_operational": 3, 00:10:43.324 "base_bdevs_list": [ 00:10:43.324 { 00:10:43.324 "name": "BaseBdev1", 00:10:43.324 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b", 00:10:43.324 "is_configured": true, 00:10:43.324 "data_offset": 2048, 00:10:43.324 "data_size": 63488 00:10:43.324 }, 00:10:43.324 { 00:10:43.324 "name": "BaseBdev2", 00:10:43.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.324 "is_configured": false, 00:10:43.324 "data_offset": 0, 00:10:43.324 "data_size": 0 00:10:43.324 }, 00:10:43.324 { 00:10:43.324 "name": "BaseBdev3", 00:10:43.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.324 "is_configured": false, 00:10:43.324 "data_offset": 0, 00:10:43.324 "data_size": 0 00:10:43.324 } 00:10:43.324 ] 00:10:43.324 }' 00:10:43.324 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.324 21:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.584 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:43.891 [2024-07-15 21:46:58.856938] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.891 [2024-07-15 21:46:58.856987] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2873bfe34500 name Existed_Raid, state configuring 00:10:43.891 21:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:44.149 [2024-07-15 21:46:59.124958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.149 [2024-07-15 
21:46:59.125835] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:44.149 [2024-07-15 21:46:59.125906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:44.149 [2024-07-15 21:46:59.125911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:44.149 [2024-07-15 21:46:59.125962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:44.149 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:44.407 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:44.407 "name": "Existed_Raid",
00:10:44.407 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:44.407 "strip_size_kb": 64,
00:10:44.407 "state": "configuring",
00:10:44.407 "raid_level": "concat",
00:10:44.407 "superblock": true,
00:10:44.407 "num_base_bdevs": 3,
00:10:44.407 "num_base_bdevs_discovered": 1,
00:10:44.407 "num_base_bdevs_operational": 3,
00:10:44.407 "base_bdevs_list": [
00:10:44.407 {
00:10:44.407 "name": "BaseBdev1",
00:10:44.407 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b",
00:10:44.407 "is_configured": true,
00:10:44.407 "data_offset": 2048,
00:10:44.407 "data_size": 63488
00:10:44.407 },
00:10:44.407 {
00:10:44.407 "name": "BaseBdev2",
00:10:44.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:44.407 "is_configured": false,
00:10:44.407 "data_offset": 0,
00:10:44.407 "data_size": 0
00:10:44.407 },
00:10:44.407 {
00:10:44.407 "name": "BaseBdev3",
00:10:44.407 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:44.407 "is_configured": false,
00:10:44.407 "data_offset": 0,
00:10:44.407 "data_size": 0
00:10:44.407 }
00:10:44.407 ]
00:10:44.407 }'
00:10:44.407 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:44.407 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:44.664 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:10:44.922 [2024-07-15 21:46:59.869097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:44.922 BaseBdev2
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:10:44.922 21:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:45.180 21:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:45.440 [
00:10:45.440 {
00:10:45.440 "name": "BaseBdev2",
00:10:45.440 "aliases": [
00:10:45.441 "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b"
00:10:45.441 ],
00:10:45.441 "product_name": "Malloc disk",
00:10:45.441 "block_size": 512,
00:10:45.441 "num_blocks": 65536,
00:10:45.441 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:45.441 "assigned_rate_limits": {
00:10:45.441 "rw_ios_per_sec": 0,
00:10:45.441 "rw_mbytes_per_sec": 0,
00:10:45.441 "r_mbytes_per_sec": 0,
00:10:45.441 "w_mbytes_per_sec": 0
00:10:45.441 },
00:10:45.441 "claimed": true,
00:10:45.441 "claim_type": "exclusive_write",
00:10:45.441 "zoned": false,
00:10:45.441 "supported_io_types": {
00:10:45.441 "read": true,
00:10:45.441 "write": true,
00:10:45.441 "unmap": true,
00:10:45.441 "flush": true,
00:10:45.441 "reset": true,
00:10:45.441 "nvme_admin": false,
00:10:45.441 "nvme_io": false,
00:10:45.441 "nvme_io_md": false,
00:10:45.441 "write_zeroes": true,
00:10:45.441 "zcopy": true,
00:10:45.441 "get_zone_info": false,
00:10:45.441 "zone_management": false,
00:10:45.441 "zone_append": false,
00:10:45.441 "compare": false,
00:10:45.441 "compare_and_write": false,
00:10:45.441 "abort": true,
00:10:45.441 "seek_hole": false,
00:10:45.441 "seek_data": false,
00:10:45.441 "copy": true,
00:10:45.441 "nvme_iov_md": false
00:10:45.441 },
00:10:45.441 "memory_domains": [
00:10:45.441 {
00:10:45.441 "dma_device_id": "system",
00:10:45.441 "dma_device_type": 1
00:10:45.441 },
00:10:45.441 {
00:10:45.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:45.441 "dma_device_type": 2
00:10:45.441 }
00:10:45.441 ],
00:10:45.441 "driver_specific": {}
00:10:45.441 }
00:10:45.441 ]
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:45.441 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:45.700 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:45.700 "name": "Existed_Raid",
00:10:45.700 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:45.700 "strip_size_kb": 64,
00:10:45.700 "state": "configuring",
00:10:45.700 "raid_level": "concat",
00:10:45.700 "superblock": true,
00:10:45.700 "num_base_bdevs": 3,
00:10:45.700 "num_base_bdevs_discovered": 2,
00:10:45.700 "num_base_bdevs_operational": 3,
00:10:45.700 "base_bdevs_list": [
00:10:45.700 {
00:10:45.700 "name": "BaseBdev1",
00:10:45.700 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b",
00:10:45.700 "is_configured": true,
00:10:45.700 "data_offset": 2048,
00:10:45.700 "data_size": 63488
00:10:45.700 },
00:10:45.700 {
00:10:45.700 "name": "BaseBdev2",
00:10:45.700 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:45.700 "is_configured": true,
00:10:45.700 "data_offset": 2048,
00:10:45.700 "data_size": 63488
00:10:45.700 },
00:10:45.700 {
00:10:45.700 "name": "BaseBdev3",
00:10:45.700 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.700 "is_configured": false,
00:10:45.700 "data_offset": 0,
00:10:45.700 "data_size": 0
00:10:45.700 }
00:10:45.700 ]
00:10:45.700 }'
00:10:45.700 21:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:45.700 21:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:45.958 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:10:46.217 [2024-07-15 21:47:01.325201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:46.217 [2024-07-15 21:47:01.325266] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2873bfe34a00
00:10:46.217 [2024-07-15 21:47:01.325273] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:46.217 [2024-07-15 21:47:01.325293] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2873bfe97e20
00:10:46.217 [2024-07-15 21:47:01.325344] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2873bfe34a00
00:10:46.217 [2024-07-15 21:47:01.325349] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2873bfe34a00
00:10:46.217 [2024-07-15 21:47:01.325369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:46.217 BaseBdev3
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:10:46.217 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:46.475 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:46.734 [
00:10:46.734 {
00:10:46.734 "name": "BaseBdev3",
00:10:46.734 "aliases": [
00:10:46.734 "c484131c-42f3-11ef-9f7f-e9a656123a8b"
00:10:46.734 ],
00:10:46.734 "product_name": "Malloc disk",
00:10:46.734 "block_size": 512,
00:10:46.734 "num_blocks": 65536,
00:10:46.734 "uuid": "c484131c-42f3-11ef-9f7f-e9a656123a8b",
00:10:46.734 "assigned_rate_limits": {
00:10:46.734 "rw_ios_per_sec": 0,
00:10:46.734 "rw_mbytes_per_sec": 0,
00:10:46.734 "r_mbytes_per_sec": 0,
00:10:46.734 "w_mbytes_per_sec": 0
00:10:46.734 },
00:10:46.734 "claimed": true,
00:10:46.734 "claim_type": "exclusive_write",
00:10:46.734 "zoned": false,
00:10:46.734 "supported_io_types": {
00:10:46.734 "read": true,
00:10:46.734 "write": true,
00:10:46.734 "unmap": true,
00:10:46.734 "flush": true,
00:10:46.734 "reset": true,
00:10:46.734 "nvme_admin": false,
00:10:46.734 "nvme_io": false,
00:10:46.734 "nvme_io_md": false,
00:10:46.734 "write_zeroes": true,
00:10:46.734 "zcopy": true,
00:10:46.734 "get_zone_info": false,
00:10:46.734 "zone_management": false,
00:10:46.734 "zone_append": false,
00:10:46.734 "compare": false,
00:10:46.734 "compare_and_write": false,
00:10:46.734 "abort": true,
00:10:46.734 "seek_hole": false,
00:10:46.734 "seek_data": false,
00:10:46.734 "copy": true,
00:10:46.734 "nvme_iov_md": false
00:10:46.734 },
00:10:46.734 "memory_domains": [
00:10:46.734 {
00:10:46.734 "dma_device_id": "system",
00:10:46.734 "dma_device_type": 1
00:10:46.734 },
00:10:46.734 {
00:10:46.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.734 "dma_device_type": 2
00:10:46.734 }
00:10:46.734 ],
00:10:46.734 "driver_specific": {}
00:10:46.734 }
00:10:46.734 ]
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:46.993 21:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:47.251 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:47.251 "name": "Existed_Raid",
00:10:47.251 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.251 "strip_size_kb": 64,
00:10:47.251 "state": "online",
00:10:47.251 "raid_level": "concat",
00:10:47.251 "superblock": true,
00:10:47.251 "num_base_bdevs": 3,
00:10:47.251 "num_base_bdevs_discovered": 3,
00:10:47.251 "num_base_bdevs_operational": 3,
00:10:47.251 "base_bdevs_list": [
00:10:47.251 {
00:10:47.251 "name": "BaseBdev1",
00:10:47.251 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.251 "is_configured": true,
00:10:47.251 "data_offset": 2048,
00:10:47.251 "data_size": 63488
00:10:47.251 },
00:10:47.251 {
00:10:47.251 "name": "BaseBdev2",
00:10:47.251 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.251 "is_configured": true,
00:10:47.251 "data_offset": 2048,
00:10:47.251 "data_size": 63488
00:10:47.251 },
00:10:47.251 {
00:10:47.251 "name": "BaseBdev3",
00:10:47.251 "uuid": "c484131c-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.251 "is_configured": true,
00:10:47.251 "data_offset": 2048,
00:10:47.251 "data_size": 63488
00:10:47.251 }
00:10:47.251 ]
00:10:47.251 }'
00:10:47.251 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:10:47.509 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:10:47.768 [2024-07-15 21:47:02.785183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:10:47.768 "name": "Existed_Raid",
00:10:47.768 "aliases": [
00:10:47.768 "c3345c33-42f3-11ef-9f7f-e9a656123a8b"
00:10:47.768 ],
00:10:47.768 "product_name": "Raid Volume",
00:10:47.768 "block_size": 512,
00:10:47.768 "num_blocks": 190464,
00:10:47.768 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.768 "assigned_rate_limits": {
00:10:47.768 "rw_ios_per_sec": 0,
00:10:47.768 "rw_mbytes_per_sec": 0,
00:10:47.768 "r_mbytes_per_sec": 0,
00:10:47.768 "w_mbytes_per_sec": 0
00:10:47.768 },
00:10:47.768 "claimed": false,
00:10:47.768 "zoned": false,
00:10:47.768 "supported_io_types": {
00:10:47.768 "read": true,
00:10:47.768 "write": true,
00:10:47.768 "unmap": true,
00:10:47.768 "flush": true,
00:10:47.768 "reset": true,
00:10:47.768 "nvme_admin": false,
00:10:47.768 "nvme_io": false,
00:10:47.768 "nvme_io_md": false,
00:10:47.768 "write_zeroes": true,
00:10:47.768 "zcopy": false,
00:10:47.768 "get_zone_info": false,
00:10:47.768 "zone_management": false,
00:10:47.768 "zone_append": false,
00:10:47.768 "compare": false,
00:10:47.768 "compare_and_write": false,
00:10:47.768 "abort": false,
00:10:47.768 "seek_hole": false,
00:10:47.768 "seek_data": false,
00:10:47.768 "copy": false,
00:10:47.768 "nvme_iov_md": false
00:10:47.768 },
00:10:47.768 "memory_domains": [
00:10:47.768 {
00:10:47.768 "dma_device_id": "system",
00:10:47.768 "dma_device_type": 1
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.768 "dma_device_type": 2
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "dma_device_id": "system",
00:10:47.768 "dma_device_type": 1
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.768 "dma_device_type": 2
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "dma_device_id": "system",
00:10:47.768 "dma_device_type": 1
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.768 "dma_device_type": 2
00:10:47.768 }
00:10:47.768 ],
00:10:47.768 "driver_specific": {
00:10:47.768 "raid": {
00:10:47.768 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.768 "strip_size_kb": 64,
00:10:47.768 "state": "online",
00:10:47.768 "raid_level": "concat",
00:10:47.768 "superblock": true,
00:10:47.768 "num_base_bdevs": 3,
00:10:47.768 "num_base_bdevs_discovered": 3,
00:10:47.768 "num_base_bdevs_operational": 3,
00:10:47.768 "base_bdevs_list": [
00:10:47.768 {
00:10:47.768 "name": "BaseBdev1",
00:10:47.768 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.768 "is_configured": true,
00:10:47.768 "data_offset": 2048,
00:10:47.768 "data_size": 63488
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "name": "BaseBdev2",
00:10:47.768 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.768 "is_configured": true,
00:10:47.768 "data_offset": 2048,
00:10:47.768 "data_size": 63488
00:10:47.768 },
00:10:47.768 {
00:10:47.768 "name": "BaseBdev3",
00:10:47.768 "uuid": "c484131c-42f3-11ef-9f7f-e9a656123a8b",
00:10:47.768 "is_configured": true,
00:10:47.768 "data_offset": 2048,
00:10:47.768 "data_size": 63488
00:10:47.768 }
00:10:47.768 ]
00:10:47.768 }
00:10:47.768 }
00:10:47.768 }'
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:10:47.768 BaseBdev2
00:10:47.768 BaseBdev3'
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:10:47.768 21:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:10:48.027 "name": "BaseBdev1",
00:10:48.027 "aliases": [
00:10:48.027 "c2451777-42f3-11ef-9f7f-e9a656123a8b"
00:10:48.027 ],
00:10:48.027 "product_name": "Malloc disk",
00:10:48.027 "block_size": 512,
00:10:48.027 "num_blocks": 65536,
00:10:48.027 "uuid": "c2451777-42f3-11ef-9f7f-e9a656123a8b",
00:10:48.027 "assigned_rate_limits": {
00:10:48.027 "rw_ios_per_sec": 0,
00:10:48.027 "rw_mbytes_per_sec": 0,
00:10:48.027 "r_mbytes_per_sec": 0,
00:10:48.027 "w_mbytes_per_sec": 0
00:10:48.027 },
00:10:48.027 "claimed": true,
00:10:48.027 "claim_type": "exclusive_write",
00:10:48.027 "zoned": false,
00:10:48.027 "supported_io_types": {
00:10:48.027 "read": true,
00:10:48.027 "write": true,
00:10:48.027 "unmap": true,
00:10:48.027 "flush": true,
00:10:48.027 "reset": true,
00:10:48.027 "nvme_admin": false,
00:10:48.027 "nvme_io": false,
00:10:48.027 "nvme_io_md": false,
00:10:48.027 "write_zeroes": true,
00:10:48.027 "zcopy": true,
00:10:48.027 "get_zone_info": false,
00:10:48.027 "zone_management": false,
00:10:48.027 "zone_append": false,
00:10:48.027 "compare": false,
00:10:48.027 "compare_and_write": false,
00:10:48.027 "abort": true,
00:10:48.027 "seek_hole": false,
00:10:48.027 "seek_data": false,
00:10:48.027 "copy": true,
00:10:48.027 "nvme_iov_md": false
00:10:48.027 },
00:10:48.027 "memory_domains": [
00:10:48.027 {
00:10:48.027 "dma_device_id": "system",
00:10:48.027 "dma_device_type": 1
00:10:48.027 },
00:10:48.027 {
00:10:48.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:48.027 "dma_device_type": 2
00:10:48.027 }
00:10:48.027 ],
00:10:48.027 "driver_specific": {}
00:10:48.027 }'
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:10:48.027 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:10:48.286 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:10:48.286 "name": "BaseBdev2",
00:10:48.286 "aliases": [
00:10:48.286 "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b"
00:10:48.286 ],
00:10:48.286 "product_name": "Malloc disk",
00:10:48.286 "block_size": 512,
00:10:48.286 "num_blocks": 65536,
00:10:48.286 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:48.286 "assigned_rate_limits": {
00:10:48.286 "rw_ios_per_sec": 0,
00:10:48.286 "rw_mbytes_per_sec": 0,
00:10:48.286 "r_mbytes_per_sec": 0,
00:10:48.286 "w_mbytes_per_sec": 0
00:10:48.286 },
00:10:48.286 "claimed": true,
00:10:48.286 "claim_type": "exclusive_write",
00:10:48.286 "zoned": false,
00:10:48.286 "supported_io_types": {
00:10:48.286 "read": true,
00:10:48.286 "write": true,
00:10:48.286 "unmap": true,
00:10:48.286 "flush": true,
00:10:48.286 "reset": true,
00:10:48.286 "nvme_admin": false,
00:10:48.286 "nvme_io": false,
00:10:48.286 "nvme_io_md": false,
00:10:48.286 "write_zeroes": true,
00:10:48.286 "zcopy": true,
00:10:48.286 "get_zone_info": false,
00:10:48.286 "zone_management": false,
00:10:48.286 "zone_append": false,
00:10:48.286 "compare": false,
00:10:48.286 "compare_and_write": false,
00:10:48.286 "abort": true,
00:10:48.286 "seek_hole": false,
00:10:48.286 "seek_data": false,
00:10:48.286 "copy": true,
00:10:48.286 "nvme_iov_md": false
00:10:48.286 },
00:10:48.286 "memory_domains": [
00:10:48.286 {
00:10:48.286 "dma_device_id": "system",
00:10:48.286 "dma_device_type": 1
00:10:48.286 },
00:10:48.286 {
00:10:48.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:48.286 "dma_device_type": 2
00:10:48.286 }
00:10:48.286 ],
00:10:48.286 "driver_specific": {}
00:10:48.286 }'
00:10:48.286 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.286 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.286 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:10:48.286 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:10:48.545 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:10:48.860 "name": "BaseBdev3",
00:10:48.860 "aliases": [
00:10:48.860 "c484131c-42f3-11ef-9f7f-e9a656123a8b"
00:10:48.860 ],
00:10:48.860 "product_name": "Malloc disk",
00:10:48.860 "block_size": 512,
00:10:48.860 "num_blocks": 65536,
00:10:48.860 "uuid": "c484131c-42f3-11ef-9f7f-e9a656123a8b",
00:10:48.860 "assigned_rate_limits": {
00:10:48.860 "rw_ios_per_sec": 0,
00:10:48.860 "rw_mbytes_per_sec": 0,
00:10:48.860 "r_mbytes_per_sec": 0,
00:10:48.860 "w_mbytes_per_sec": 0
00:10:48.860 },
00:10:48.860 "claimed": true,
00:10:48.860 "claim_type": "exclusive_write",
00:10:48.860 "zoned": false,
00:10:48.860 "supported_io_types": {
00:10:48.860 "read": true,
00:10:48.860 "write": true,
00:10:48.860 "unmap": true,
00:10:48.860 "flush": true,
00:10:48.860 "reset": true,
00:10:48.860 "nvme_admin": false,
00:10:48.860 "nvme_io": false,
00:10:48.860 "nvme_io_md": false,
00:10:48.860 "write_zeroes": true,
00:10:48.860 "zcopy": true,
00:10:48.860 "get_zone_info": false,
00:10:48.860 "zone_management": false,
00:10:48.860 "zone_append": false,
00:10:48.860 "compare": false,
00:10:48.860 "compare_and_write": false,
00:10:48.860 "abort": true,
00:10:48.860 "seek_hole": false,
00:10:48.860 "seek_data": false,
00:10:48.860 "copy": true,
00:10:48.860 "nvme_iov_md": false
00:10:48.860 },
00:10:48.860 "memory_domains": [
00:10:48.860 {
00:10:48.860 "dma_device_id": "system",
00:10:48.860 "dma_device_type": 1
00:10:48.860 },
00:10:48.860 {
00:10:48.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:48.860 "dma_device_type": 2
00:10:48.860 }
00:10:48.860 ],
00:10:48.860 "driver_specific": {}
00:10:48.860 }'
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:10:48.860 21:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:10:49.149 [2024-07-15 21:47:04.117312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:49.149 [2024-07-15 21:47:04.117340] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:49.149 [2024-07-15 21:47:04.117362] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:49.149 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.409 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:49.409 "name": "Existed_Raid",
00:10:49.409 "uuid": "c3345c33-42f3-11ef-9f7f-e9a656123a8b",
00:10:49.409 "strip_size_kb": 64,
00:10:49.409 "state": "offline",
00:10:49.409 "raid_level": "concat",
00:10:49.409 "superblock": true,
00:10:49.409 "num_base_bdevs": 3,
00:10:49.409 "num_base_bdevs_discovered": 2,
00:10:49.409 "num_base_bdevs_operational": 2,
00:10:49.409 "base_bdevs_list": [
00:10:49.409 {
00:10:49.409 "name": null,
00:10:49.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.409 "is_configured": false,
00:10:49.409 "data_offset": 2048,
00:10:49.409 "data_size": 63488
00:10:49.409 },
00:10:49.409 {
00:10:49.409 "name": "BaseBdev2",
00:10:49.409 "uuid": "c3a5e3ab-42f3-11ef-9f7f-e9a656123a8b",
00:10:49.409 "is_configured": true,
00:10:49.409 "data_offset": 2048,
00:10:49.409 "data_size": 63488
00:10:49.409 },
00:10:49.409 {
00:10:49.409 "name": "BaseBdev3",
00:10:49.409 "uuid": "c484131c-42f3-11ef-9f7f-e9a656123a8b",
00:10:49.409 "is_configured": true,
00:10:49.409 "data_offset": 2048,
00:10:49.409 "data_size": 63488
00:10:49.409 }
00:10:49.409 ]
00:10:49.409 }'
00:10:49.409 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:49.409 21:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.666 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 ))
00:10:49.666 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:10:49.666 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:49.666 21:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:10:49.924 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:10:49.924 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:49.924 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:10:50.182 [2024-07-15 21:47:05.211190] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:50.182 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:10:50.182 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:10:50.182 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:50.182 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]'
00:10:50.440 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid
00:10:50.440 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:50.440 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:10:50.698 [2024-07-15 21:47:05.652944] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:50.698 [2024-07-15 21:47:05.652993] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2873bfe34a00 name Existed_Raid, state offline
00:10:50.698 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ ))
00:10:50.698 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs ))
00:10:50.698 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:50.698 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)'
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev=
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']'
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']'
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 ))
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:10:50.955 21:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:10:51.213 BaseBdev2
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:10:51.213 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:51.471 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:51.471 [
00:10:51.471 {
00:10:51.471 "name": "BaseBdev2",
00:10:51.471 "aliases": [
00:10:51.471 "c76294e8-42f3-11ef-9f7f-e9a656123a8b"
00:10:51.471 ],
00:10:51.471 "product_name": "Malloc disk",
00:10:51.471 "block_size": 512,
00:10:51.471 "num_blocks": 65536,
00:10:51.471 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:51.471 "assigned_rate_limits": {
00:10:51.471 "rw_ios_per_sec": 0,
00:10:51.471 "rw_mbytes_per_sec": 0,
00:10:51.471 "r_mbytes_per_sec": 0,
00:10:51.471 "w_mbytes_per_sec": 0
00:10:51.471 },
00:10:51.471 "claimed": false,
00:10:51.471 "zoned": false,
00:10:51.471 "supported_io_types": {
00:10:51.471 "read": true,
00:10:51.471 "write": true,
00:10:51.471 "unmap": true,
00:10:51.471 "flush": true,
00:10:51.471 "reset": true,
00:10:51.471 "nvme_admin": false,
00:10:51.471 "nvme_io": false,
00:10:51.471 "nvme_io_md": false,
00:10:51.471 "write_zeroes": true,
00:10:51.471 "zcopy": true,
00:10:51.471 "get_zone_info": false,
00:10:51.471 "zone_management": false,
00:10:51.471 "zone_append": false,
00:10:51.471 "compare": false,
00:10:51.471 "compare_and_write": false,
00:10:51.471 "abort": true,
00:10:51.471 "seek_hole": false,
00:10:51.471 "seek_data": false,
00:10:51.471 "copy": true,
00:10:51.471 "nvme_iov_md": false
00:10:51.471 },
00:10:51.471 "memory_domains": [
00:10:51.471 {
00:10:51.471 "dma_device_id": "system",
00:10:51.471 "dma_device_type": 1
00:10:51.471 },
00:10:51.471 {
00:10:51.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.471 "dma_device_type": 2
00:10:51.471 }
00:10:51.471 ],
00:10:51.471 "driver_specific": {}
00:10:51.471 }
00:10:51.471 ]
00:10:51.471 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0
00:10:51.471 21:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:10:51.471 21:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:10:51.471 21:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:10:51.728 BaseBdev3
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:10:51.728 21:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:51.985 21:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:52.243 [
00:10:52.243 {
00:10:52.243 "name": "BaseBdev3",
00:10:52.243 "aliases": [
00:10:52.243 "c7d07388-42f3-11ef-9f7f-e9a656123a8b"
00:10:52.243 ],
00:10:52.243 "product_name": "Malloc disk",
00:10:52.243 "block_size": 512,
00:10:52.243 "num_blocks": 65536,
00:10:52.243 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:52.243 "assigned_rate_limits": {
00:10:52.243 "rw_ios_per_sec": 0,
00:10:52.243 "rw_mbytes_per_sec": 0,
00:10:52.243 "r_mbytes_per_sec": 0,
00:10:52.243 "w_mbytes_per_sec": 0
00:10:52.243 },
00:10:52.243 "claimed": false,
00:10:52.243 "zoned": false,
00:10:52.243 "supported_io_types": {
00:10:52.243 "read": true,
00:10:52.243 "write": true,
00:10:52.243 "unmap": true,
00:10:52.243 "flush": true,
00:10:52.243 "reset": true,
00:10:52.243 "nvme_admin": false,
00:10:52.243 "nvme_io": false,
00:10:52.243 "nvme_io_md": false,
00:10:52.243 "write_zeroes": true,
00:10:52.243 "zcopy": true,
00:10:52.243 "get_zone_info": false,
00:10:52.243 "zone_management": false,
00:10:52.243 "zone_append": false,
00:10:52.243 "compare": false,
00:10:52.243 "compare_and_write": false,
00:10:52.243 "abort": true,
00:10:52.243 "seek_hole": false,
00:10:52.243 "seek_data": false,
00:10:52.243 "copy": true,
00:10:52.243 "nvme_iov_md": false
00:10:52.243 },
00:10:52.243 "memory_domains": [
00:10:52.243 {
00:10:52.243 "dma_device_id": "system",
00:10:52.243 "dma_device_type": 1
00:10:52.243 },
00:10:52.243 {
00:10:52.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.243 "dma_device_type": 2
00:10:52.243 }
00:10:52.243 ],
00:10:52.243 "driver_specific": {}
00:10:52.243 }
00:10:52.243 ]
00:10:52.243 21:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0
00:10:52.243 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ ))
00:10:52.243 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs ))
00:10:52.243 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:10:52.501 [2024-07-15 21:47:07.570796] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:52.501 [2024-07-15 21:47:07.570862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:52.501 [2024-07-15 21:47:07.570886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:52.501 [2024-07-15 21:47:07.571447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:52.501 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:52.760 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:52.760 "name": "Existed_Raid",
00:10:52.760 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b",
00:10:52.760 "strip_size_kb": 64,
00:10:52.760 "state": "configuring",
00:10:52.760 "raid_level": "concat",
00:10:52.760 "superblock": true,
00:10:52.760 "num_base_bdevs": 3,
00:10:52.760 "num_base_bdevs_discovered": 2,
00:10:52.760 "num_base_bdevs_operational": 3,
00:10:52.760 "base_bdevs_list": [
00:10:52.760 {
00:10:52.760 "name": "BaseBdev1",
00:10:52.760 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:52.760 "is_configured": false,
00:10:52.760 "data_offset": 0,
00:10:52.760 "data_size": 0
00:10:52.760 },
00:10:52.760 {
00:10:52.760 "name": "BaseBdev2",
00:10:52.760 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:52.760 "is_configured": true,
00:10:52.760 "data_offset": 2048,
00:10:52.760 "data_size": 63488
00:10:52.760 },
00:10:52.760 {
00:10:52.760 "name": "BaseBdev3",
00:10:52.760 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:52.760 "is_configured": true,
00:10:52.760 "data_offset": 2048,
00:10:52.760 "data_size": 63488
00:10:52.760 }
00:10:52.760 ]
00:10:52.760 }'
00:10:52.760 21:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:52.760 21:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.018 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:10:53.280 [2024-07-15 21:47:08.354823] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:53.280 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:53.544 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:53.544 "name": "Existed_Raid",
00:10:53.544 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b",
00:10:53.544 "strip_size_kb": 64,
00:10:53.544 "state": "configuring",
00:10:53.544 "raid_level": "concat",
00:10:53.544 "superblock": true,
00:10:53.544 "num_base_bdevs": 3,
00:10:53.544 "num_base_bdevs_discovered": 1,
00:10:53.544 "num_base_bdevs_operational": 3,
00:10:53.544 "base_bdevs_list": [
00:10:53.544 {
00:10:53.544 "name": "BaseBdev1",
00:10:53.544 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:53.544 "is_configured": false,
00:10:53.544 "data_offset": 0,
00:10:53.544 "data_size": 0
00:10:53.544 },
00:10:53.544 {
00:10:53.544 "name": null,
00:10:53.544 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:53.544 "is_configured": false,
00:10:53.544 "data_offset": 2048,
00:10:53.544 "data_size": 63488
00:10:53.544 },
00:10:53.544 {
00:10:53.544 "name": "BaseBdev3",
00:10:53.544 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:53.544 "is_configured": true,
00:10:53.544 "data_offset": 2048,
00:10:53.544 "data_size": 63488
00:10:53.544 }
00:10:53.544 ]
00:10:53.544 }'
00:10:53.544 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:53.544 21:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.802 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:53.802 21:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:54.060 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:10:54.060 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:10:54.318 [2024-07-15 21:47:09.447028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:54.318 BaseBdev1
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout=
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]]
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000
00:10:54.318 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:54.577 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:54.836 [
00:10:54.836 {
00:10:54.836 "name": "BaseBdev1",
00:10:54.836 "aliases": [
00:10:54.836 "c95b5df2-42f3-11ef-9f7f-e9a656123a8b"
00:10:54.836 ],
00:10:54.836 "product_name": "Malloc disk",
00:10:54.836 "block_size": 512,
00:10:54.836 "num_blocks": 65536,
00:10:54.836 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b",
00:10:54.836 "assigned_rate_limits": {
00:10:54.836 "rw_ios_per_sec": 0,
00:10:54.836 "rw_mbytes_per_sec": 0,
00:10:54.836 "r_mbytes_per_sec": 0,
00:10:54.836 "w_mbytes_per_sec": 0
00:10:54.836 },
00:10:54.836 "claimed": true,
00:10:54.836 "claim_type": "exclusive_write",
00:10:54.836 "zoned": false,
00:10:54.836 "supported_io_types": {
00:10:54.836 "read": true,
00:10:54.836 "write": true,
00:10:54.836 "unmap": true,
00:10:54.836 "flush": true,
00:10:54.836 "reset": true,
00:10:54.836 "nvme_admin": false,
00:10:54.836 "nvme_io": false,
00:10:54.836 "nvme_io_md": false,
00:10:54.836 "write_zeroes": true,
00:10:54.836 "zcopy": true,
00:10:54.836 "get_zone_info": false,
00:10:54.836 "zone_management": false,
00:10:54.836 "zone_append": false,
00:10:54.836 "compare": false,
00:10:54.836 "compare_and_write": false,
00:10:54.836 "abort": true,
00:10:54.836 "seek_hole": false,
00:10:54.836 "seek_data": false,
00:10:54.836 "copy": true,
00:10:54.836 "nvme_iov_md": false
00:10:54.836 },
00:10:54.836 "memory_domains": [
00:10:54.836 {
00:10:54.836 "dma_device_id": "system",
00:10:54.836 "dma_device_type": 1
00:10:54.836 },
00:10:54.836 {
00:10:54.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:54.836 "dma_device_type": 2
00:10:54.836 }
00:10:54.836 ],
00:10:54.836 "driver_specific": {}
00:10:54.836 }
00:10:54.836 ]
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:54.836 21:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:55.095 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:55.095 "name": "Existed_Raid",
00:10:55.095 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b",
00:10:55.095 "strip_size_kb": 64,
00:10:55.095 "state": "configuring",
00:10:55.095 "raid_level": "concat",
00:10:55.095 "superblock": true,
00:10:55.095 "num_base_bdevs": 3,
00:10:55.095 "num_base_bdevs_discovered": 2,
00:10:55.095 "num_base_bdevs_operational": 3,
00:10:55.095 "base_bdevs_list": [
00:10:55.095 {
00:10:55.095 "name": "BaseBdev1",
00:10:55.095 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b",
00:10:55.095 "is_configured": true,
00:10:55.095 "data_offset": 2048,
00:10:55.095 "data_size": 63488
00:10:55.095 },
00:10:55.095 {
00:10:55.095 "name": null,
00:10:55.095 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:55.095 "is_configured": false,
00:10:55.095 "data_offset": 2048,
00:10:55.095 "data_size": 63488
00:10:55.095 },
00:10:55.095 {
00:10:55.095 "name": "BaseBdev3",
00:10:55.095 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:55.095 "is_configured": true,
00:10:55.095 "data_offset": 2048,
00:10:55.095 "data_size": 63488
00:10:55.095 }
00:10:55.095 ]
00:10:55.095 }'
00:10:55.095 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:55.095 21:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.354 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:55.354 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:55.613 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:10:55.613 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:10:55.872 [2024-07-15 21:47:10.815004] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:55.872 21:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:56.131 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:56.131 "name": "Existed_Raid",
00:10:56.131 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b",
00:10:56.131 "strip_size_kb": 64,
00:10:56.131 "state": "configuring",
00:10:56.131 "raid_level": "concat",
00:10:56.131 "superblock": true,
00:10:56.131 "num_base_bdevs": 3,
00:10:56.131 "num_base_bdevs_discovered": 1,
00:10:56.131 "num_base_bdevs_operational": 3,
00:10:56.131 "base_bdevs_list": [
00:10:56.131 {
00:10:56.131 "name": "BaseBdev1",
00:10:56.131 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b",
00:10:56.131 "is_configured": true,
00:10:56.131 "data_offset": 2048,
00:10:56.131 "data_size": 63488
00:10:56.131 },
00:10:56.131 {
00:10:56.131 "name": null,
00:10:56.131 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:56.131 "is_configured": false,
00:10:56.131 "data_offset": 2048,
00:10:56.131 "data_size": 63488
00:10:56.131 },
00:10:56.131 {
00:10:56.131 "name": null,
00:10:56.131 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:56.131 "is_configured": false,
00:10:56.131 "data_offset": 2048,
00:10:56.131 "data_size": 63488
00:10:56.131 }
00:10:56.131 ]
00:10:56.131 }'
00:10:56.131 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:56.131 21:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.390 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:56.390 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:56.648 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:10:56.648 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:56.907 [2024-07-15 21:47:11.851043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:56.907 21:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:57.166 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:10:57.166 "name": "Existed_Raid",
00:10:57.166 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b",
00:10:57.166 "strip_size_kb": 64,
00:10:57.166 "state": "configuring",
00:10:57.166 "raid_level": "concat",
00:10:57.166 "superblock": true,
00:10:57.166 "num_base_bdevs": 3,
00:10:57.166 "num_base_bdevs_discovered": 2,
00:10:57.166 "num_base_bdevs_operational": 3,
00:10:57.166 "base_bdevs_list": [
00:10:57.166 {
00:10:57.166 "name": "BaseBdev1",
00:10:57.166 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b",
00:10:57.166 "is_configured": true,
00:10:57.166 "data_offset": 2048,
00:10:57.166 "data_size": 63488
00:10:57.166 },
00:10:57.166 {
00:10:57.166 "name": null,
00:10:57.166 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b",
00:10:57.166 "is_configured": false,
00:10:57.166 "data_offset": 2048,
00:10:57.166 "data_size": 63488
00:10:57.166 },
00:10:57.166 {
00:10:57.166 "name": "BaseBdev3",
00:10:57.166 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b",
00:10:57.166 "is_configured": true,
00:10:57.166 "data_offset": 2048,
00:10:57.166 "data_size": 63488
00:10:57.166 }
00:10:57.166 ]
00:10:57.166 }'
00:10:57.166 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:10:57.166 21:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:57.452 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:57.452 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:57.718 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:10:57.718 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:10:57.977
[2024-07-15 21:47:12.955070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.977 21:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.247 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.248 "name": "Existed_Raid", 00:10:58.248 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b", 00:10:58.248 "strip_size_kb": 64, 00:10:58.248 "state": "configuring", 00:10:58.248 "raid_level": "concat", 00:10:58.248 "superblock": true, 00:10:58.248 "num_base_bdevs": 3, 00:10:58.248 "num_base_bdevs_discovered": 1, 00:10:58.248 "num_base_bdevs_operational": 3, 00:10:58.248 "base_bdevs_list": [ 00:10:58.248 { 00:10:58.248 "name": null, 00:10:58.248 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:10:58.248 "is_configured": false, 00:10:58.248 "data_offset": 2048, 00:10:58.248 "data_size": 63488 00:10:58.248 }, 00:10:58.248 { 00:10:58.248 "name": null, 00:10:58.248 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b", 00:10:58.248 "is_configured": false, 00:10:58.248 "data_offset": 2048, 00:10:58.248 "data_size": 63488 00:10:58.248 }, 00:10:58.248 { 00:10:58.248 "name": "BaseBdev3", 00:10:58.248 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b", 00:10:58.248 "is_configured": true, 00:10:58.248 "data_offset": 2048, 00:10:58.248 "data_size": 63488 00:10:58.248 } 00:10:58.248 ] 00:10:58.248 }' 00:10:58.248 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.248 21:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.511 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.511 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.511 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:58.511 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.770 [2024-07-15 21:47:13.892924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.770 21:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.028 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.028 "name": "Existed_Raid", 00:10:59.028 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b", 00:10:59.028 "strip_size_kb": 64, 00:10:59.028 "state": "configuring", 00:10:59.028 "raid_level": "concat", 00:10:59.028 "superblock": true, 00:10:59.028 "num_base_bdevs": 3, 00:10:59.028 "num_base_bdevs_discovered": 2, 00:10:59.028 "num_base_bdevs_operational": 3, 00:10:59.028 "base_bdevs_list": [ 00:10:59.028 { 00:10:59.028 "name": null, 00:10:59.028 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:10:59.028 "is_configured": false, 00:10:59.028 "data_offset": 2048, 00:10:59.028 "data_size": 63488 00:10:59.028 }, 00:10:59.028 { 00:10:59.028 "name": "BaseBdev2", 00:10:59.028 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b", 00:10:59.028 "is_configured": true, 00:10:59.028 "data_offset": 2048, 00:10:59.028 "data_size": 63488 00:10:59.028 }, 00:10:59.028 { 00:10:59.028 "name": "BaseBdev3", 00:10:59.028 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b", 00:10:59.028 "is_configured": true, 00:10:59.028 "data_offset": 2048, 00:10:59.028 "data_size": 63488 00:10:59.028 } 00:10:59.028 ] 00:10:59.028 }' 00:10:59.028 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.028 21:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.287 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.287 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.546 21:47:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:59.546 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.546 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:59.804 21:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c95b5df2-42f3-11ef-9f7f-e9a656123a8b 00:11:00.063 [2024-07-15 21:47:15.125083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.063 [2024-07-15 21:47:15.125143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2873bfe34a00 00:11:00.063 [2024-07-15 21:47:15.125148] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.063 [2024-07-15 21:47:15.125165] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2873bfe97e20 00:11:00.063 [2024-07-15 21:47:15.125225] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2873bfe34a00 00:11:00.063 [2024-07-15 21:47:15.125242] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2873bfe34a00 00:11:00.063 [2024-07-15 21:47:15.125262] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.063 NewBaseBdev 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:00.063 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.321 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.579 [ 00:11:00.579 { 00:11:00.579 "name": "NewBaseBdev", 00:11:00.579 "aliases": [ 00:11:00.579 "c95b5df2-42f3-11ef-9f7f-e9a656123a8b" 00:11:00.579 ], 00:11:00.579 "product_name": "Malloc disk", 00:11:00.579 "block_size": 512, 00:11:00.579 "num_blocks": 65536, 00:11:00.579 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:11:00.579 "assigned_rate_limits": { 00:11:00.579 "rw_ios_per_sec": 0, 00:11:00.579 "rw_mbytes_per_sec": 0, 00:11:00.579 "r_mbytes_per_sec": 0, 00:11:00.579 "w_mbytes_per_sec": 0 00:11:00.579 }, 00:11:00.579 "claimed": true, 00:11:00.579 "claim_type": "exclusive_write", 00:11:00.579 "zoned": false, 00:11:00.579 "supported_io_types": { 00:11:00.579 "read": true, 00:11:00.579 "write": true, 00:11:00.579 "unmap": true, 00:11:00.579 "flush": true, 00:11:00.579 "reset": true, 00:11:00.579 "nvme_admin": false, 00:11:00.579 "nvme_io": false, 00:11:00.579 "nvme_io_md": false, 00:11:00.579 
"write_zeroes": true, 00:11:00.579 "zcopy": true, 00:11:00.579 "get_zone_info": false, 00:11:00.579 "zone_management": false, 00:11:00.579 "zone_append": false, 00:11:00.579 "compare": false, 00:11:00.579 "compare_and_write": false, 00:11:00.579 "abort": true, 00:11:00.579 "seek_hole": false, 00:11:00.579 "seek_data": false, 00:11:00.579 "copy": true, 00:11:00.579 "nvme_iov_md": false 00:11:00.579 }, 00:11:00.579 "memory_domains": [ 00:11:00.579 { 00:11:00.579 "dma_device_id": "system", 00:11:00.579 "dma_device_type": 1 00:11:00.579 }, 00:11:00.579 { 00:11:00.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.579 "dma_device_type": 2 00:11:00.579 } 00:11:00.579 ], 00:11:00.579 "driver_specific": {} 00:11:00.579 } 00:11:00.579 ] 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.579 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.839 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.839 "name": "Existed_Raid", 00:11:00.839 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b", 00:11:00.839 "strip_size_kb": 64, 00:11:00.839 "state": "online", 00:11:00.839 "raid_level": "concat", 00:11:00.839 "superblock": true, 00:11:00.839 "num_base_bdevs": 3, 00:11:00.839 "num_base_bdevs_discovered": 3, 00:11:00.839 "num_base_bdevs_operational": 3, 00:11:00.839 "base_bdevs_list": [ 00:11:00.839 { 00:11:00.839 "name": "NewBaseBdev", 00:11:00.839 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:11:00.839 "is_configured": true, 00:11:00.839 "data_offset": 2048, 00:11:00.839 "data_size": 63488 00:11:00.839 }, 00:11:00.839 { 00:11:00.839 "name": "BaseBdev2", 00:11:00.839 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b", 00:11:00.839 "is_configured": true, 00:11:00.839 "data_offset": 2048, 00:11:00.839 "data_size": 63488 00:11:00.839 }, 00:11:00.839 { 00:11:00.839 "name": "BaseBdev3", 00:11:00.839 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b", 00:11:00.839 "is_configured": true, 00:11:00.839 "data_offset": 2048, 00:11:00.839 "data_size": 63488 00:11:00.839 } 00:11:00.839 ] 
00:11:00.839 }' 00:11:00.839 21:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.839 21:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:01.098 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:01.357 [2024-07-15 21:47:16.317037] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:01.357 "name": "Existed_Raid", 00:11:01.357 "aliases": [ 00:11:01.357 "c83d1807-42f3-11ef-9f7f-e9a656123a8b" 00:11:01.357 ], 00:11:01.357 "product_name": "Raid Volume", 00:11:01.357 "block_size": 512, 00:11:01.357 "num_blocks": 190464, 00:11:01.357 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.357 "assigned_rate_limits": { 00:11:01.357 "rw_ios_per_sec": 0, 00:11:01.357 "rw_mbytes_per_sec": 0, 00:11:01.357 "r_mbytes_per_sec": 0, 00:11:01.357 "w_mbytes_per_sec": 0 00:11:01.357 }, 00:11:01.357 "claimed": false, 00:11:01.357 "zoned": false, 00:11:01.357 "supported_io_types": { 00:11:01.357 "read": true, 00:11:01.357 "write": true, 00:11:01.357 "unmap": true, 00:11:01.357 "flush": true, 00:11:01.357 "reset": true, 00:11:01.357 "nvme_admin": false, 00:11:01.357 "nvme_io": false, 00:11:01.357 "nvme_io_md": false, 00:11:01.357 "write_zeroes": true, 00:11:01.357 "zcopy": false, 00:11:01.357 "get_zone_info": false, 00:11:01.357 "zone_management": false, 00:11:01.357 "zone_append": false, 00:11:01.357 "compare": false, 00:11:01.357 "compare_and_write": false, 00:11:01.357 "abort": false, 00:11:01.357 "seek_hole": false, 00:11:01.357 "seek_data": false, 00:11:01.357 "copy": false, 00:11:01.357 "nvme_iov_md": false 00:11:01.357 }, 00:11:01.357 "memory_domains": [ 00:11:01.357 { 00:11:01.357 "dma_device_id": "system", 00:11:01.357 "dma_device_type": 1 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.357 "dma_device_type": 2 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "dma_device_id": "system", 00:11:01.357 "dma_device_type": 1 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.357 "dma_device_type": 2 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "dma_device_id": "system", 00:11:01.357 "dma_device_type": 1 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.357 "dma_device_type": 2 00:11:01.357 } 00:11:01.357 ], 00:11:01.357 "driver_specific": { 00:11:01.357 "raid": { 00:11:01.357 "uuid": "c83d1807-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.357 
"strip_size_kb": 64, 00:11:01.357 "state": "online", 00:11:01.357 "raid_level": "concat", 00:11:01.357 "superblock": true, 00:11:01.357 "num_base_bdevs": 3, 00:11:01.357 "num_base_bdevs_discovered": 3, 00:11:01.357 "num_base_bdevs_operational": 3, 00:11:01.357 "base_bdevs_list": [ 00:11:01.357 { 00:11:01.357 "name": "NewBaseBdev", 00:11:01.357 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.357 "is_configured": true, 00:11:01.357 "data_offset": 2048, 00:11:01.357 "data_size": 63488 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "name": "BaseBdev2", 00:11:01.357 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.357 "is_configured": true, 00:11:01.357 "data_offset": 2048, 00:11:01.357 "data_size": 63488 00:11:01.357 }, 00:11:01.357 { 00:11:01.357 "name": "BaseBdev3", 00:11:01.357 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.357 "is_configured": true, 00:11:01.357 "data_offset": 2048, 00:11:01.357 "data_size": 63488 00:11:01.357 } 00:11:01.357 ] 00:11:01.357 } 00:11:01.357 } 00:11:01.357 }' 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:01.357 BaseBdev2 00:11:01.357 BaseBdev3' 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:01.357 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.616 "name": "NewBaseBdev", 00:11:01.616 "aliases": [ 00:11:01.616 "c95b5df2-42f3-11ef-9f7f-e9a656123a8b" 00:11:01.616 ], 00:11:01.616 "product_name": "Malloc disk", 00:11:01.616 "block_size": 512, 00:11:01.616 "num_blocks": 65536, 00:11:01.616 "uuid": "c95b5df2-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.616 "assigned_rate_limits": { 00:11:01.616 "rw_ios_per_sec": 0, 00:11:01.616 "rw_mbytes_per_sec": 0, 00:11:01.616 "r_mbytes_per_sec": 0, 00:11:01.616 "w_mbytes_per_sec": 0 00:11:01.616 }, 00:11:01.616 "claimed": true, 00:11:01.616 "claim_type": "exclusive_write", 00:11:01.616 "zoned": false, 00:11:01.616 "supported_io_types": { 00:11:01.616 "read": true, 00:11:01.616 "write": true, 00:11:01.616 "unmap": true, 00:11:01.616 "flush": true, 00:11:01.616 "reset": true, 00:11:01.616 "nvme_admin": false, 00:11:01.616 "nvme_io": false, 00:11:01.616 "nvme_io_md": false, 00:11:01.616 "write_zeroes": true, 00:11:01.616 "zcopy": true, 00:11:01.616 "get_zone_info": false, 00:11:01.616 "zone_management": false, 00:11:01.616 "zone_append": false, 00:11:01.616 "compare": false, 00:11:01.616 "compare_and_write": false, 00:11:01.616 "abort": true, 00:11:01.616 "seek_hole": false, 00:11:01.616 "seek_data": false, 00:11:01.616 "copy": true, 00:11:01.616 "nvme_iov_md": false 00:11:01.616 }, 00:11:01.616 "memory_domains": [ 00:11:01.616 { 00:11:01.616 "dma_device_id": "system", 00:11:01.616 "dma_device_type": 1 00:11:01.616 }, 00:11:01.616 { 00:11:01.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.616 "dma_device_type": 2 00:11:01.616 } 00:11:01.616 ], 00:11:01.616 "driver_specific": {} 00:11:01.616 }' 00:11:01.616 21:47:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.616 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.876 "name": "BaseBdev2", 00:11:01.876 "aliases": [ 00:11:01.876 "c76294e8-42f3-11ef-9f7f-e9a656123a8b" 00:11:01.876 ], 00:11:01.876 "product_name": "Malloc disk", 00:11:01.876 "block_size": 512, 00:11:01.876 "num_blocks": 65536, 00:11:01.876 "uuid": "c76294e8-42f3-11ef-9f7f-e9a656123a8b", 00:11:01.876 "assigned_rate_limits": { 00:11:01.876 "rw_ios_per_sec": 0, 00:11:01.876 "rw_mbytes_per_sec": 0, 00:11:01.876 "r_mbytes_per_sec": 0, 00:11:01.876 "w_mbytes_per_sec": 0 00:11:01.876 }, 00:11:01.876 "claimed": true, 00:11:01.876 "claim_type": "exclusive_write", 00:11:01.876 "zoned": false, 00:11:01.876 "supported_io_types": { 00:11:01.876 "read": true, 00:11:01.876 "write": true, 00:11:01.876 "unmap": true, 00:11:01.876 "flush": true, 00:11:01.876 "reset": true, 00:11:01.876 "nvme_admin": false, 00:11:01.876 "nvme_io": false, 00:11:01.876 "nvme_io_md": false, 00:11:01.876 "write_zeroes": true, 00:11:01.876 "zcopy": true, 00:11:01.876 "get_zone_info": false, 00:11:01.876 "zone_management": false, 00:11:01.876 "zone_append": false, 00:11:01.876 "compare": false, 00:11:01.876 "compare_and_write": false, 00:11:01.876 "abort": true, 00:11:01.876 "seek_hole": false, 00:11:01.876 "seek_data": false, 00:11:01.876 "copy": true, 00:11:01.876 "nvme_iov_md": false 00:11:01.876 }, 00:11:01.876 "memory_domains": [ 00:11:01.876 { 00:11:01.876 "dma_device_id": "system", 00:11:01.876 "dma_device_type": 1 00:11:01.876 }, 00:11:01.876 { 00:11:01.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.876 "dma_device_type": 2 00:11:01.876 } 00:11:01.876 ], 00:11:01.876 "driver_specific": {} 00:11:01.876 }' 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:01.876 21:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:02.134 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:02.134 "name": "BaseBdev3", 00:11:02.134 "aliases": [ 00:11:02.134 "c7d07388-42f3-11ef-9f7f-e9a656123a8b" 00:11:02.135 ], 00:11:02.135 "product_name": "Malloc disk", 00:11:02.135 "block_size": 512, 00:11:02.135 "num_blocks": 65536, 00:11:02.135 "uuid": "c7d07388-42f3-11ef-9f7f-e9a656123a8b", 00:11:02.135 "assigned_rate_limits": { 00:11:02.135 "rw_ios_per_sec": 0, 00:11:02.135 "rw_mbytes_per_sec": 0, 00:11:02.135 "r_mbytes_per_sec": 0, 00:11:02.135 "w_mbytes_per_sec": 0 00:11:02.135 }, 00:11:02.135 "claimed": true, 00:11:02.135 "claim_type": "exclusive_write", 00:11:02.135 "zoned": false, 00:11:02.135 "supported_io_types": { 00:11:02.135 "read": true, 00:11:02.135 "write": true, 00:11:02.135 "unmap": true, 00:11:02.135 "flush": true, 00:11:02.135 "reset": true, 00:11:02.135 "nvme_admin": false, 00:11:02.135 "nvme_io": false, 00:11:02.135 "nvme_io_md": false, 00:11:02.135 "write_zeroes": true, 00:11:02.135 "zcopy": true, 00:11:02.135 "get_zone_info": false, 00:11:02.135 "zone_management": false, 00:11:02.135 "zone_append": false, 00:11:02.135 "compare": false, 00:11:02.135 "compare_and_write": false, 00:11:02.135 "abort": true, 00:11:02.135 "seek_hole": false, 00:11:02.135 "seek_data": false, 00:11:02.135 "copy": true, 00:11:02.135 "nvme_iov_md": false 00:11:02.135 }, 00:11:02.135 "memory_domains": [ 00:11:02.135 { 00:11:02.135 "dma_device_id": "system", 00:11:02.135 "dma_device_type": 1 00:11:02.135 }, 00:11:02.135 { 00:11:02.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.135 "dma_device_type": 2 00:11:02.135 } 00:11:02.135 ], 00:11:02.135 "driver_specific": {} 00:11:02.135 }' 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.135 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:02.394 [2024-07-15 21:47:17.493071] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.394 [2024-07-15 21:47:17.493093] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.394 [2024-07-15 21:47:17.493130] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.394 [2024-07-15 21:47:17.493143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.394 [2024-07-15 21:47:17.493147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2873bfe34a00 name Existed_Raid, state offline 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54777 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 54777 ']' 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 54777 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 54777 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:11:02.394 killing process with pid 54777 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 54777' 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 54777 00:11:02.394 [2024-07-15 21:47:17.520238] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.394 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 54777 00:11:02.394 [2024-07-15 21:47:17.538281] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.654 21:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:11:02.654 00:11:02.654 real 0m22.765s 00:11:02.654 user 0m41.167s 00:11:02.654 sys 0m3.579s 00:11:02.654 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:02.654 21:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.654 ************************************ 00:11:02.654 END TEST raid_state_function_test_sb 00:11:02.654 ************************************ 00:11:02.654 21:47:17 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:11:02.654 21:47:17 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:02.654 21:47:17 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:11:02.654 21:47:17 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:02.654 21:47:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.654 ************************************ 00:11:02.654 START TEST raid_superblock_test 00:11:02.654 ************************************ 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test concat 3 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55501 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55501 /var/tmp/spdk-raid.sock 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 55501 ']' 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:02.654 21:47:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:02.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:02.654 21:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.654 [2024-07-15 21:47:17.758406] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:02.654 [2024-07-15 21:47:17.758571] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:03.222 EAL: TSC is not safe to use in SMP mode 00:11:03.222 EAL: TSC is not invariant 00:11:03.222 [2024-07-15 21:47:18.274291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.222 [2024-07-15 21:47:18.352646] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:03.222 [2024-07-15 21:47:18.354928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.222 [2024-07-15 21:47:18.355788] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.222 [2024-07-15 21:47:18.355817] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.790 21:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:04.049 malloc1 00:11:04.049 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.308 [2024-07-15 21:47:19.338761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.308 [2024-07-15 21:47:19.338848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.308 [2024-07-15 21:47:19.338877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d234780 00:11:04.308 [2024-07-15 21:47:19.338886] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.308 [2024-07-15 21:47:19.339758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.308 [2024-07-15 21:47:19.339785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.308 pt1 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.308 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:04.567 malloc2 00:11:04.567 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.827 [2024-07-15 21:47:19.822779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.827 [2024-07-15 21:47:19.822851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.827 [2024-07-15 21:47:19.822884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d234c80 00:11:04.827 [2024-07-15 21:47:19.822891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.827 [2024-07-15 21:47:19.823570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.827 [2024-07-15 21:47:19.823594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.827 pt2 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.827 21:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:05.086 malloc3 00:11:05.086 21:47:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.086 [2024-07-15 21:47:20.266794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.086 [2024-07-15 21:47:20.266858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.086 [2024-07-15 21:47:20.266895] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d235180 00:11:05.086 [2024-07-15 21:47:20.266902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.086 [2024-07-15 21:47:20.267649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.086 [2024-07-15 21:47:20.267672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.086 pt3 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:05.346 [2024-07-15 21:47:20.486808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.346 [2024-07-15 21:47:20.487351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.346 [2024-07-15 21:47:20.487373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.346 [2024-07-15 21:47:20.487419] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb701d235400 00:11:05.346 [2024-07-15 21:47:20.487425] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.346 [2024-07-15 21:47:20.487454] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb701d297e20 00:11:05.346 [2024-07-15 21:47:20.487529] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb701d235400 00:11:05.346 [2024-07-15 21:47:20.487534] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb701d235400 00:11:05.346 [2024-07-15 21:47:20.487559] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.346 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.605 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.605 "name": "raid_bdev1", 00:11:05.605 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:05.605 "strip_size_kb": 64, 00:11:05.605 "state": "online", 00:11:05.605 "raid_level": "concat", 00:11:05.605 "superblock": true, 00:11:05.605 "num_base_bdevs": 3, 00:11:05.605 "num_base_bdevs_discovered": 3, 00:11:05.605 "num_base_bdevs_operational": 3, 00:11:05.605 "base_bdevs_list": [ 00:11:05.605 { 00:11:05.605 "name": "pt1", 00:11:05.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.605 "is_configured": true, 00:11:05.605 "data_offset": 2048, 00:11:05.605 "data_size": 63488 00:11:05.605 }, 00:11:05.605 { 00:11:05.605 "name": "pt2", 00:11:05.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.605 "is_configured": true, 00:11:05.605 "data_offset": 2048, 00:11:05.605 "data_size": 63488 00:11:05.605 }, 00:11:05.605 { 00:11:05.605 "name": "pt3", 00:11:05.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.605 "is_configured": true, 00:11:05.605 "data_offset": 2048, 00:11:05.605 "data_size": 63488 00:11:05.605 } 00:11:05.605 ] 00:11:05.605 }' 00:11:05.605 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.605 21:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:05.864 21:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:05.864 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:06.122 [2024-07-15 21:47:21.206878] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:06.123 "name": "raid_bdev1", 00:11:06.123 "aliases": [ 00:11:06.123 "cfefebba-42f3-11ef-9f7f-e9a656123a8b" 00:11:06.123 ], 00:11:06.123 "product_name": "Raid Volume", 00:11:06.123 "block_size": 512, 00:11:06.123 "num_blocks": 190464, 00:11:06.123 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:06.123 "assigned_rate_limits": { 00:11:06.123 "rw_ios_per_sec": 0, 00:11:06.123 "rw_mbytes_per_sec": 0, 00:11:06.123 "r_mbytes_per_sec": 0, 00:11:06.123 "w_mbytes_per_sec": 0 00:11:06.123 }, 00:11:06.123 "claimed": false, 00:11:06.123 "zoned": false, 00:11:06.123 "supported_io_types": { 00:11:06.123 "read": true, 00:11:06.123 "write": true, 00:11:06.123 "unmap": true, 
00:11:06.123 "flush": true, 00:11:06.123 "reset": true, 00:11:06.123 "nvme_admin": false, 00:11:06.123 "nvme_io": false, 00:11:06.123 "nvme_io_md": false, 00:11:06.123 "write_zeroes": true, 00:11:06.123 "zcopy": false, 00:11:06.123 "get_zone_info": false, 00:11:06.123 "zone_management": false, 00:11:06.123 "zone_append": false, 00:11:06.123 "compare": false, 00:11:06.123 "compare_and_write": false, 00:11:06.123 "abort": false, 00:11:06.123 "seek_hole": false, 00:11:06.123 "seek_data": false, 00:11:06.123 "copy": false, 00:11:06.123 "nvme_iov_md": false 00:11:06.123 }, 00:11:06.123 "memory_domains": [ 00:11:06.123 { 00:11:06.123 "dma_device_id": "system", 00:11:06.123 "dma_device_type": 1 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.123 "dma_device_type": 2 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "dma_device_id": "system", 00:11:06.123 "dma_device_type": 1 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.123 "dma_device_type": 2 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "dma_device_id": "system", 00:11:06.123 "dma_device_type": 1 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.123 "dma_device_type": 2 00:11:06.123 } 00:11:06.123 ], 00:11:06.123 "driver_specific": { 00:11:06.123 "raid": { 00:11:06.123 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:06.123 "strip_size_kb": 64, 00:11:06.123 "state": "online", 00:11:06.123 "raid_level": "concat", 00:11:06.123 "superblock": true, 00:11:06.123 "num_base_bdevs": 3, 00:11:06.123 "num_base_bdevs_discovered": 3, 00:11:06.123 "num_base_bdevs_operational": 3, 00:11:06.123 "base_bdevs_list": [ 00:11:06.123 { 00:11:06.123 "name": "pt1", 00:11:06.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.123 "is_configured": true, 00:11:06.123 "data_offset": 2048, 00:11:06.123 "data_size": 63488 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "name": "pt2", 00:11:06.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.123 "is_configured": true, 00:11:06.123 "data_offset": 2048, 00:11:06.123 "data_size": 63488 00:11:06.123 }, 00:11:06.123 { 00:11:06.123 "name": "pt3", 00:11:06.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.123 "is_configured": true, 00:11:06.123 "data_offset": 2048, 00:11:06.123 "data_size": 63488 00:11:06.123 } 00:11:06.123 ] 00:11:06.123 } 00:11:06.123 } 00:11:06.123 }' 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:06.123 pt2 00:11:06.123 pt3' 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:06.123 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.381 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.381 "name": "pt1", 00:11:06.381 "aliases": [ 00:11:06.381 "00000000-0000-0000-0000-000000000001" 00:11:06.381 ], 00:11:06.381 "product_name": "passthru", 00:11:06.381 "block_size": 512, 00:11:06.381 "num_blocks": 65536, 00:11:06.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.381 "assigned_rate_limits": { 
00:11:06.381 "rw_ios_per_sec": 0, 00:11:06.381 "rw_mbytes_per_sec": 0, 00:11:06.381 "r_mbytes_per_sec": 0, 00:11:06.381 "w_mbytes_per_sec": 0 00:11:06.381 }, 00:11:06.381 "claimed": true, 00:11:06.381 "claim_type": "exclusive_write", 00:11:06.381 "zoned": false, 00:11:06.381 "supported_io_types": { 00:11:06.381 "read": true, 00:11:06.381 "write": true, 00:11:06.381 "unmap": true, 00:11:06.381 "flush": true, 00:11:06.381 "reset": true, 00:11:06.381 "nvme_admin": false, 00:11:06.381 "nvme_io": false, 00:11:06.381 "nvme_io_md": false, 00:11:06.381 "write_zeroes": true, 00:11:06.381 "zcopy": true, 00:11:06.381 "get_zone_info": false, 00:11:06.381 "zone_management": false, 00:11:06.381 "zone_append": false, 00:11:06.381 "compare": false, 00:11:06.381 "compare_and_write": false, 00:11:06.381 "abort": true, 00:11:06.381 "seek_hole": false, 00:11:06.381 "seek_data": false, 00:11:06.381 "copy": true, 00:11:06.382 "nvme_iov_md": false 00:11:06.382 }, 00:11:06.382 "memory_domains": [ 00:11:06.382 { 00:11:06.382 "dma_device_id": "system", 00:11:06.382 "dma_device_type": 1 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.382 "dma_device_type": 2 00:11:06.382 } 00:11:06.382 ], 00:11:06.382 "driver_specific": { 00:11:06.382 "passthru": { 00:11:06.382 "name": "pt1", 00:11:06.382 "base_bdev_name": "malloc1" 00:11:06.382 } 00:11:06.382 } 00:11:06.382 }' 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:06.382 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.640 "name": "pt2", 00:11:06.640 "aliases": [ 00:11:06.640 "00000000-0000-0000-0000-000000000002" 00:11:06.640 ], 00:11:06.640 "product_name": "passthru", 00:11:06.640 "block_size": 512, 00:11:06.640 "num_blocks": 65536, 00:11:06.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.640 "assigned_rate_limits": { 00:11:06.640 "rw_ios_per_sec": 0, 00:11:06.640 "rw_mbytes_per_sec": 0, 00:11:06.640 "r_mbytes_per_sec": 0, 00:11:06.640 "w_mbytes_per_sec": 0 00:11:06.640 
}, 00:11:06.640 "claimed": true, 00:11:06.640 "claim_type": "exclusive_write", 00:11:06.640 "zoned": false, 00:11:06.640 "supported_io_types": { 00:11:06.640 "read": true, 00:11:06.640 "write": true, 00:11:06.640 "unmap": true, 00:11:06.640 "flush": true, 00:11:06.640 "reset": true, 00:11:06.640 "nvme_admin": false, 00:11:06.640 "nvme_io": false, 00:11:06.640 "nvme_io_md": false, 00:11:06.640 "write_zeroes": true, 00:11:06.640 "zcopy": true, 00:11:06.640 "get_zone_info": false, 00:11:06.640 "zone_management": false, 00:11:06.640 "zone_append": false, 00:11:06.640 "compare": false, 00:11:06.640 "compare_and_write": false, 00:11:06.640 "abort": true, 00:11:06.640 "seek_hole": false, 00:11:06.640 "seek_data": false, 00:11:06.640 "copy": true, 00:11:06.640 "nvme_iov_md": false 00:11:06.640 }, 00:11:06.640 "memory_domains": [ 00:11:06.640 { 00:11:06.640 "dma_device_id": "system", 00:11:06.640 "dma_device_type": 1 00:11:06.640 }, 00:11:06.640 { 00:11:06.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.640 "dma_device_type": 2 00:11:06.640 } 00:11:06.640 ], 00:11:06.640 "driver_specific": { 00:11:06.640 "passthru": { 00:11:06.640 "name": "pt2", 00:11:06.640 "base_bdev_name": "malloc2" 00:11:06.640 } 00:11:06.640 } 00:11:06.640 }' 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.640 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:06.898 21:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:07.156 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:07.156 "name": "pt3", 00:11:07.156 "aliases": [ 00:11:07.156 "00000000-0000-0000-0000-000000000003" 00:11:07.156 ], 00:11:07.156 "product_name": "passthru", 00:11:07.156 "block_size": 512, 00:11:07.156 "num_blocks": 65536, 00:11:07.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.156 "assigned_rate_limits": { 00:11:07.156 "rw_ios_per_sec": 0, 00:11:07.156 "rw_mbytes_per_sec": 0, 00:11:07.156 "r_mbytes_per_sec": 0, 00:11:07.156 "w_mbytes_per_sec": 0 00:11:07.156 }, 00:11:07.156 "claimed": true, 00:11:07.156 "claim_type": "exclusive_write", 00:11:07.156 "zoned": false, 00:11:07.156 "supported_io_types": { 
00:11:07.156 "read": true, 00:11:07.156 "write": true, 00:11:07.156 "unmap": true, 00:11:07.156 "flush": true, 00:11:07.156 "reset": true, 00:11:07.156 "nvme_admin": false, 00:11:07.156 "nvme_io": false, 00:11:07.156 "nvme_io_md": false, 00:11:07.156 "write_zeroes": true, 00:11:07.156 "zcopy": true, 00:11:07.156 "get_zone_info": false, 00:11:07.156 "zone_management": false, 00:11:07.156 "zone_append": false, 00:11:07.156 "compare": false, 00:11:07.156 "compare_and_write": false, 00:11:07.156 "abort": true, 00:11:07.156 "seek_hole": false, 00:11:07.156 "seek_data": false, 00:11:07.156 "copy": true, 00:11:07.156 "nvme_iov_md": false 00:11:07.156 }, 00:11:07.156 "memory_domains": [ 00:11:07.156 { 00:11:07.156 "dma_device_id": "system", 00:11:07.156 "dma_device_type": 1 00:11:07.156 }, 00:11:07.156 { 00:11:07.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.156 "dma_device_type": 2 00:11:07.156 } 00:11:07.156 ], 00:11:07.156 "driver_specific": { 00:11:07.156 "passthru": { 00:11:07.156 "name": "pt3", 00:11:07.156 "base_bdev_name": "malloc3" 00:11:07.156 } 00:11:07.156 } 00:11:07.156 }' 00:11:07.156 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.156 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:07.157 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:07.414 [2024-07-15 21:47:22.378956] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.414 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cfefebba-42f3-11ef-9f7f-e9a656123a8b 00:11:07.414 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cfefebba-42f3-11ef-9f7f-e9a656123a8b ']' 00:11:07.414 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:07.672 [2024-07-15 21:47:22.638907] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.672 [2024-07-15 21:47:22.638930] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.672 [2024-07-15 21:47:22.638968] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.672 [2024-07-15 21:47:22.638982] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.672 [2024-07-15 21:47:22.638986] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb701d235400 name raid_bdev1, state offline 00:11:07.672 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:07.672 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.930 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:07.930 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:07.930 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.930 21:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:08.188 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.188 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:08.188 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:08.188 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:08.461 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:08.461 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
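
The trace above is the test's NOT wrapper resolving rpc.py (`type -t`, then `type -P`) before running it with inverted success semantics: this bdev_raid_create is expected to fail, because the malloc bdevs still carry the superblock of the raid bdev that was just deleted. A minimal sketch of the same negative check, assuming the rpc.py path used throughout this log (the real NOT/valid_exec_arg helpers live in autotest_common.sh):

    # Creating raid_bdev1 over bdevs with a stale superblock must fail (-17, File exists).
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
           bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo 'expected bdev_raid_create to fail against stale superblocks' >&2
        exit 1
    fi
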
00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:08.718 21:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:08.975 [2024-07-15 21:47:24.003027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:08.975 [2024-07-15 21:47:24.003609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:08.975 [2024-07-15 21:47:24.003628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:08.975 [2024-07-15 21:47:24.003642] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:08.975 [2024-07-15 21:47:24.003674] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:08.975 [2024-07-15 21:47:24.003709] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:08.975 [2024-07-15 21:47:24.003730] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.975 [2024-07-15 21:47:24.003735] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb701d235180 name raid_bdev1, state configuring 00:11:08.975 request: 00:11:08.975 { 00:11:08.975 "name": "raid_bdev1", 00:11:08.975 "raid_level": "concat", 00:11:08.975 "base_bdevs": [ 00:11:08.975 "malloc1", 00:11:08.975 "malloc2", 00:11:08.975 "malloc3" 00:11:08.975 ], 00:11:08.975 "strip_size_kb": 64, 00:11:08.975 "superblock": false, 00:11:08.975 "method": "bdev_raid_create", 00:11:08.975 "req_id": 1 00:11:08.975 } 00:11:08.975 Got JSON-RPC error response 00:11:08.975 response: 00:11:08.975 { 00:11:08.975 "code": -17, 00:11:08.975 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:08.975 } 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.975 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:09.233 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:09.233 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:09.233 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.491 [2024-07-15 21:47:24.431028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.491 [2024-07-15 21:47:24.431091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.491 [2024-07-15 21:47:24.431119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0xb701d234c80 00:11:09.491 [2024-07-15 21:47:24.431127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.491 [2024-07-15 21:47:24.431865] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.491 [2024-07-15 21:47:24.431890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.491 [2024-07-15 21:47:24.431914] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:09.491 [2024-07-15 21:47:24.431926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.491 pt1 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.491 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.492 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.492 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.492 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.750 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.750 "name": "raid_bdev1", 00:11:09.750 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:09.750 "strip_size_kb": 64, 00:11:09.750 "state": "configuring", 00:11:09.750 "raid_level": "concat", 00:11:09.750 "superblock": true, 00:11:09.750 "num_base_bdevs": 3, 00:11:09.750 "num_base_bdevs_discovered": 1, 00:11:09.750 "num_base_bdevs_operational": 3, 00:11:09.750 "base_bdevs_list": [ 00:11:09.750 { 00:11:09.750 "name": "pt1", 00:11:09.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.750 "is_configured": true, 00:11:09.750 "data_offset": 2048, 00:11:09.750 "data_size": 63488 00:11:09.750 }, 00:11:09.750 { 00:11:09.750 "name": null, 00:11:09.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.750 "is_configured": false, 00:11:09.750 "data_offset": 2048, 00:11:09.750 "data_size": 63488 00:11:09.750 }, 00:11:09.750 { 00:11:09.750 "name": null, 00:11:09.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.750 "is_configured": false, 00:11:09.750 "data_offset": 2048, 00:11:09.750 "data_size": 63488 00:11:09.750 } 00:11:09.750 ] 00:11:09.750 }' 00:11:09.750 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.750 21:47:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.008 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
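
At this point only pt1 has been re-registered, so raid_bdev1 sits in "configuring" with 1 of 3 base bdevs discovered; the `'[' 3 -gt 2 ']'` guard above enables the remove-base-bdev path exercised next. The verify_raid_bdev_state helper traced at @126 boils down to fetching the named raid bdev and comparing fields with jq; roughly (a sketch, field names as in the bdev_raid_get_bdevs output above):

    # Pull the raid bdev's info and assert the expected state and member counts.
    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
              bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$tmp") == configuring ]]
    [[ $(jq -r .raid_level <<< "$tmp") == concat ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") -eq 1 ]]
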
00:11:10.008 21:47:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.266 [2024-07-15 21:47:25.203041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.266 [2024-07-15 21:47:25.203107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.266 [2024-07-15 21:47:25.203135] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d235680 00:11:10.266 [2024-07-15 21:47:25.203142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.266 [2024-07-15 21:47:25.203267] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.266 [2024-07-15 21:47:25.203276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.266 [2024-07-15 21:47:25.203330] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.266 [2024-07-15 21:47:25.203339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.266 pt2 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:10.266 [2024-07-15 21:47:25.419043] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.266 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.524 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.524 "name": "raid_bdev1", 00:11:10.524 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:10.524 "strip_size_kb": 64, 00:11:10.524 "state": "configuring", 00:11:10.524 "raid_level": "concat", 00:11:10.524 "superblock": true, 00:11:10.524 "num_base_bdevs": 3, 00:11:10.524 "num_base_bdevs_discovered": 1, 00:11:10.524 "num_base_bdevs_operational": 3, 00:11:10.524 "base_bdevs_list": [ 00:11:10.524 { 00:11:10.524 "name": "pt1", 00:11:10.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.524 "is_configured": 
true, 00:11:10.524 "data_offset": 2048, 00:11:10.524 "data_size": 63488 00:11:10.524 }, 00:11:10.524 { 00:11:10.524 "name": null, 00:11:10.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.524 "is_configured": false, 00:11:10.524 "data_offset": 2048, 00:11:10.524 "data_size": 63488 00:11:10.524 }, 00:11:10.524 { 00:11:10.524 "name": null, 00:11:10.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.524 "is_configured": false, 00:11:10.524 "data_offset": 2048, 00:11:10.524 "data_size": 63488 00:11:10.524 } 00:11:10.524 ] 00:11:10.524 }' 00:11:10.524 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.524 21:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.781 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:10.781 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:10.781 21:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.038 [2024-07-15 21:47:26.175076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.038 [2024-07-15 21:47:26.175136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.038 [2024-07-15 21:47:26.175163] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d235680 00:11:11.038 [2024-07-15 21:47:26.175170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.038 [2024-07-15 21:47:26.175293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.038 [2024-07-15 21:47:26.175318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.038 [2024-07-15 21:47:26.175358] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.038 [2024-07-15 21:47:26.175371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.038 pt2 00:11:11.038 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:11.038 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:11.038 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.294 [2024-07-15 21:47:26.447093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.294 [2024-07-15 21:47:26.447153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.294 [2024-07-15 21:47:26.447180] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb701d235400 00:11:11.294 [2024-07-15 21:47:26.447187] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.294 [2024-07-15 21:47:26.447314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.294 [2024-07-15 21:47:26.447331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.294 [2024-07-15 21:47:26.447353] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:11.294 [2024-07-15 21:47:26.447378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:11:11.294 [2024-07-15 21:47:26.447421] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb701d234780 00:11:11.294 [2024-07-15 21:47:26.447426] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.294 [2024-07-15 21:47:26.447447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb701d297e20 00:11:11.294 [2024-07-15 21:47:26.447499] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb701d234780 00:11:11.294 [2024-07-15 21:47:26.447503] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb701d234780 00:11:11.294 [2024-07-15 21:47:26.447524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.294 pt3 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.294 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.551 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.551 "name": "raid_bdev1", 00:11:11.551 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:11.551 "strip_size_kb": 64, 00:11:11.551 "state": "online", 00:11:11.551 "raid_level": "concat", 00:11:11.551 "superblock": true, 00:11:11.551 "num_base_bdevs": 3, 00:11:11.551 "num_base_bdevs_discovered": 3, 00:11:11.551 "num_base_bdevs_operational": 3, 00:11:11.551 "base_bdevs_list": [ 00:11:11.551 { 00:11:11.551 "name": "pt1", 00:11:11.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.551 "is_configured": true, 00:11:11.551 "data_offset": 2048, 00:11:11.551 "data_size": 63488 00:11:11.551 }, 00:11:11.551 { 00:11:11.551 "name": "pt2", 00:11:11.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.551 "is_configured": true, 00:11:11.551 "data_offset": 2048, 00:11:11.551 "data_size": 63488 00:11:11.551 }, 00:11:11.551 { 00:11:11.551 "name": "pt3", 00:11:11.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.551 "is_configured": true, 00:11:11.551 "data_offset": 2048, 00:11:11.551 
"data_size": 63488 00:11:11.551 } 00:11:11.551 ] 00:11:11.551 }' 00:11:11.551 21:47:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.551 21:47:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:12.116 [2024-07-15 21:47:27.227147] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.116 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:12.116 "name": "raid_bdev1", 00:11:12.116 "aliases": [ 00:11:12.116 "cfefebba-42f3-11ef-9f7f-e9a656123a8b" 00:11:12.116 ], 00:11:12.116 "product_name": "Raid Volume", 00:11:12.116 "block_size": 512, 00:11:12.116 "num_blocks": 190464, 00:11:12.116 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:12.116 "assigned_rate_limits": { 00:11:12.116 "rw_ios_per_sec": 0, 00:11:12.116 "rw_mbytes_per_sec": 0, 00:11:12.116 "r_mbytes_per_sec": 0, 00:11:12.116 "w_mbytes_per_sec": 0 00:11:12.116 }, 00:11:12.116 "claimed": false, 00:11:12.116 "zoned": false, 00:11:12.116 "supported_io_types": { 00:11:12.116 "read": true, 00:11:12.116 "write": true, 00:11:12.116 "unmap": true, 00:11:12.116 "flush": true, 00:11:12.116 "reset": true, 00:11:12.116 "nvme_admin": false, 00:11:12.116 "nvme_io": false, 00:11:12.116 "nvme_io_md": false, 00:11:12.116 "write_zeroes": true, 00:11:12.116 "zcopy": false, 00:11:12.116 "get_zone_info": false, 00:11:12.116 "zone_management": false, 00:11:12.116 "zone_append": false, 00:11:12.116 "compare": false, 00:11:12.116 "compare_and_write": false, 00:11:12.116 "abort": false, 00:11:12.116 "seek_hole": false, 00:11:12.116 "seek_data": false, 00:11:12.116 "copy": false, 00:11:12.116 "nvme_iov_md": false 00:11:12.116 }, 00:11:12.116 "memory_domains": [ 00:11:12.116 { 00:11:12.116 "dma_device_id": "system", 00:11:12.116 "dma_device_type": 1 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.116 "dma_device_type": 2 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "dma_device_id": "system", 00:11:12.116 "dma_device_type": 1 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.116 "dma_device_type": 2 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "dma_device_id": "system", 00:11:12.116 "dma_device_type": 1 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.116 "dma_device_type": 2 00:11:12.116 } 00:11:12.116 ], 00:11:12.116 "driver_specific": { 00:11:12.116 "raid": { 00:11:12.116 "uuid": "cfefebba-42f3-11ef-9f7f-e9a656123a8b", 00:11:12.116 "strip_size_kb": 64, 00:11:12.116 "state": 
"online", 00:11:12.116 "raid_level": "concat", 00:11:12.116 "superblock": true, 00:11:12.116 "num_base_bdevs": 3, 00:11:12.116 "num_base_bdevs_discovered": 3, 00:11:12.116 "num_base_bdevs_operational": 3, 00:11:12.116 "base_bdevs_list": [ 00:11:12.116 { 00:11:12.116 "name": "pt1", 00:11:12.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.116 "is_configured": true, 00:11:12.116 "data_offset": 2048, 00:11:12.116 "data_size": 63488 00:11:12.116 }, 00:11:12.116 { 00:11:12.116 "name": "pt2", 00:11:12.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.116 "is_configured": true, 00:11:12.117 "data_offset": 2048, 00:11:12.117 "data_size": 63488 00:11:12.117 }, 00:11:12.117 { 00:11:12.117 "name": "pt3", 00:11:12.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.117 "is_configured": true, 00:11:12.117 "data_offset": 2048, 00:11:12.117 "data_size": 63488 00:11:12.117 } 00:11:12.117 ] 00:11:12.117 } 00:11:12.117 } 00:11:12.117 }' 00:11:12.117 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.117 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:12.117 pt2 00:11:12.117 pt3' 00:11:12.117 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.117 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:12.117 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.451 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.451 "name": "pt1", 00:11:12.451 "aliases": [ 00:11:12.451 "00000000-0000-0000-0000-000000000001" 00:11:12.451 ], 00:11:12.451 "product_name": "passthru", 00:11:12.451 "block_size": 512, 00:11:12.451 "num_blocks": 65536, 00:11:12.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.451 "assigned_rate_limits": { 00:11:12.451 "rw_ios_per_sec": 0, 00:11:12.451 "rw_mbytes_per_sec": 0, 00:11:12.451 "r_mbytes_per_sec": 0, 00:11:12.451 "w_mbytes_per_sec": 0 00:11:12.451 }, 00:11:12.451 "claimed": true, 00:11:12.451 "claim_type": "exclusive_write", 00:11:12.451 "zoned": false, 00:11:12.451 "supported_io_types": { 00:11:12.451 "read": true, 00:11:12.451 "write": true, 00:11:12.451 "unmap": true, 00:11:12.451 "flush": true, 00:11:12.451 "reset": true, 00:11:12.451 "nvme_admin": false, 00:11:12.451 "nvme_io": false, 00:11:12.451 "nvme_io_md": false, 00:11:12.452 "write_zeroes": true, 00:11:12.452 "zcopy": true, 00:11:12.452 "get_zone_info": false, 00:11:12.452 "zone_management": false, 00:11:12.452 "zone_append": false, 00:11:12.452 "compare": false, 00:11:12.452 "compare_and_write": false, 00:11:12.452 "abort": true, 00:11:12.452 "seek_hole": false, 00:11:12.452 "seek_data": false, 00:11:12.452 "copy": true, 00:11:12.452 "nvme_iov_md": false 00:11:12.452 }, 00:11:12.452 "memory_domains": [ 00:11:12.452 { 00:11:12.452 "dma_device_id": "system", 00:11:12.452 "dma_device_type": 1 00:11:12.452 }, 00:11:12.452 { 00:11:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.452 "dma_device_type": 2 00:11:12.452 } 00:11:12.452 ], 00:11:12.452 "driver_specific": { 00:11:12.452 "passthru": { 00:11:12.452 "name": "pt1", 00:11:12.452 "base_bdev_name": "malloc1" 00:11:12.452 } 00:11:12.452 } 00:11:12.452 }' 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:12.452 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.738 "name": "pt2", 00:11:12.738 "aliases": [ 00:11:12.738 "00000000-0000-0000-0000-000000000002" 00:11:12.738 ], 00:11:12.738 "product_name": "passthru", 00:11:12.738 "block_size": 512, 00:11:12.738 "num_blocks": 65536, 00:11:12.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.738 "assigned_rate_limits": { 00:11:12.738 "rw_ios_per_sec": 0, 00:11:12.738 "rw_mbytes_per_sec": 0, 00:11:12.738 "r_mbytes_per_sec": 0, 00:11:12.738 "w_mbytes_per_sec": 0 00:11:12.738 }, 00:11:12.738 "claimed": true, 00:11:12.738 "claim_type": "exclusive_write", 00:11:12.738 "zoned": false, 00:11:12.738 "supported_io_types": { 00:11:12.738 "read": true, 00:11:12.738 "write": true, 00:11:12.738 "unmap": true, 00:11:12.738 "flush": true, 00:11:12.738 "reset": true, 00:11:12.738 "nvme_admin": false, 00:11:12.738 "nvme_io": false, 00:11:12.738 "nvme_io_md": false, 00:11:12.738 "write_zeroes": true, 00:11:12.738 "zcopy": true, 00:11:12.738 "get_zone_info": false, 00:11:12.738 "zone_management": false, 00:11:12.738 "zone_append": false, 00:11:12.738 "compare": false, 00:11:12.738 "compare_and_write": false, 00:11:12.738 "abort": true, 00:11:12.738 "seek_hole": false, 00:11:12.738 "seek_data": false, 00:11:12.738 "copy": true, 00:11:12.738 "nvme_iov_md": false 00:11:12.738 }, 00:11:12.738 "memory_domains": [ 00:11:12.738 { 00:11:12.738 "dma_device_id": "system", 00:11:12.738 "dma_device_type": 1 00:11:12.738 }, 00:11:12.738 { 00:11:12.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.738 "dma_device_type": 2 00:11:12.738 } 00:11:12.738 ], 00:11:12.738 "driver_specific": { 00:11:12.738 "passthru": { 00:11:12.738 "name": "pt2", 00:11:12.738 "base_bdev_name": "malloc2" 00:11:12.738 } 00:11:12.738 } 00:11:12.738 }' 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.738 
21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:12.738 21:47:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.997 "name": "pt3", 00:11:12.997 "aliases": [ 00:11:12.997 "00000000-0000-0000-0000-000000000003" 00:11:12.997 ], 00:11:12.997 "product_name": "passthru", 00:11:12.997 "block_size": 512, 00:11:12.997 "num_blocks": 65536, 00:11:12.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.997 "assigned_rate_limits": { 00:11:12.997 "rw_ios_per_sec": 0, 00:11:12.997 "rw_mbytes_per_sec": 0, 00:11:12.997 "r_mbytes_per_sec": 0, 00:11:12.997 "w_mbytes_per_sec": 0 00:11:12.997 }, 00:11:12.997 "claimed": true, 00:11:12.997 "claim_type": "exclusive_write", 00:11:12.997 "zoned": false, 00:11:12.997 "supported_io_types": { 00:11:12.997 "read": true, 00:11:12.997 "write": true, 00:11:12.997 "unmap": true, 00:11:12.997 "flush": true, 00:11:12.997 "reset": true, 00:11:12.997 "nvme_admin": false, 00:11:12.997 "nvme_io": false, 00:11:12.997 "nvme_io_md": false, 00:11:12.997 "write_zeroes": true, 00:11:12.997 "zcopy": true, 00:11:12.997 "get_zone_info": false, 00:11:12.997 "zone_management": false, 00:11:12.997 "zone_append": false, 00:11:12.997 "compare": false, 00:11:12.997 "compare_and_write": false, 00:11:12.997 "abort": true, 00:11:12.997 "seek_hole": false, 00:11:12.997 "seek_data": false, 00:11:12.997 "copy": true, 00:11:12.997 "nvme_iov_md": false 00:11:12.997 }, 00:11:12.997 "memory_domains": [ 00:11:12.997 { 00:11:12.997 "dma_device_id": "system", 00:11:12.997 "dma_device_type": 1 00:11:12.997 }, 00:11:12.997 { 00:11:12.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.997 "dma_device_type": 2 00:11:12.997 } 00:11:12.997 ], 00:11:12.997 "driver_specific": { 00:11:12.997 "passthru": { 00:11:12.997 "name": "pt3", 00:11:12.997 "base_bdev_name": "malloc3" 00:11:12.997 } 00:11:12.997 } 00:11:12.997 }' 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.997 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:13.256 [2024-07-15 21:47:28.315171] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cfefebba-42f3-11ef-9f7f-e9a656123a8b '!=' cfefebba-42f3-11ef-9f7f-e9a656123a8b ']' 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55501 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 55501 ']' 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 55501 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 55501 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:11:13.256 killing process with pid 55501 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 55501' 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 55501 00:11:13.256 [2024-07-15 21:47:28.344784] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.256 [2024-07-15 21:47:28.344807] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.256 [2024-07-15 21:47:28.344821] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.256 [2024-07-15 21:47:28.344825] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb701d234780 name raid_bdev1, state offline 00:11:13.256 21:47:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@966 -- # wait 55501 00:11:13.256 [2024-07-15 21:47:28.361977] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.513 21:47:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:13.513 00:11:13.513 real 0m10.776s 00:11:13.513 user 0m19.198s 00:11:13.513 sys 0m1.581s 00:11:13.513 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:13.513 21:47:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.513 ************************************ 00:11:13.513 END TEST raid_superblock_test 00:11:13.513 ************************************ 00:11:13.514 21:47:28 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:11:13.514 21:47:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:13.514 21:47:28 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:11:13.514 21:47:28 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:13.514 21:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 ************************************ 00:11:13.514 START TEST raid_read_error_test 00:11:13.514 ************************************ 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 3 read 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.OFFLJqr8GG 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55848 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55848 /var/tmp/spdk-raid.sock 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 55848 ']' 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:13.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:13.514 21:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 [2024-07-15 21:47:28.589984] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:13.514 [2024-07-15 21:47:28.590173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:14.079 EAL: TSC is not safe to use in SMP mode 00:11:14.079 EAL: TSC is not invariant 00:11:14.079 [2024-07-15 21:47:29.097852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.079 [2024-07-15 21:47:29.177868] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
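
With bdevperf listening on the raid socket, the log below assembles each base bdev as a malloc -> error -> passthru stack so that read failures can be injected while bdevperf runs its randrw workload; concat carries no redundancy, so the injected errors must surface in bdevperf's failure rate (the `0.50 != 0.00` check at the end of the test). Condensed from the RPC calls traced below, one base bdev shown (BaseBdev2/3 are built identically):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s $sock bdev_error_create BaseBdev1_malloc          # exposes it as EE_BaseBdev1_malloc
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...BaseBdev2 and BaseBdev3 likewise, then create the raid with a superblock (-s):
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Inject read failures on the first leg while bdevperf's randrw job runs:
    $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
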
00:11:14.079 [2024-07-15 21:47:29.180028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.079 [2024-07-15 21:47:29.180835] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.079 [2024-07-15 21:47:29.180849] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.644 21:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:14.644 21:47:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:11:14.644 21:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:14.644 21:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.644 BaseBdev1_malloc 00:11:14.644 21:47:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:14.901 true 00:11:14.901 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.158 [2024-07-15 21:47:30.256373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.158 [2024-07-15 21:47:30.256440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.158 [2024-07-15 21:47:30.256481] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7484d434780 00:11:15.158 [2024-07-15 21:47:30.256489] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.158 [2024-07-15 21:47:30.257149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.158 [2024-07-15 21:47:30.257184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.158 BaseBdev1 00:11:15.158 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:15.158 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.414 BaseBdev2_malloc 00:11:15.414 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:15.671 true 00:11:15.671 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.929 [2024-07-15 21:47:30.948403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.929 [2024-07-15 21:47:30.948468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.929 [2024-07-15 21:47:30.948507] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7484d434c80 00:11:15.929 [2024-07-15 21:47:30.948515] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.929 [2024-07-15 21:47:30.949183] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.929 [2024-07-15 21:47:30.949208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:11:15.929 BaseBdev2 00:11:15.929 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:15.929 21:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:16.186 BaseBdev3_malloc 00:11:16.186 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:16.443 true 00:11:16.443 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:16.443 [2024-07-15 21:47:31.628411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:16.443 [2024-07-15 21:47:31.628479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.443 [2024-07-15 21:47:31.628518] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7484d435180 00:11:16.443 [2024-07-15 21:47:31.628526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.443 [2024-07-15 21:47:31.629195] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.443 [2024-07-15 21:47:31.629221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:16.701 BaseBdev3 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:16.701 [2024-07-15 21:47:31.848429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.701 [2024-07-15 21:47:31.849027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.701 [2024-07-15 21:47:31.849052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.701 [2024-07-15 21:47:31.849107] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x7484d435400 00:11:16.701 [2024-07-15 21:47:31.849113] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:16.701 [2024-07-15 21:47:31.849150] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7484d4a0e20 00:11:16.701 [2024-07-15 21:47:31.849220] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7484d435400 00:11:16.701 [2024-07-15 21:47:31.849225] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7484d435400 00:11:16.701 [2024-07-15 21:47:31.849252] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:16.701 21:47:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.701 21:47:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.960 21:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.960 "name": "raid_bdev1", 00:11:16.960 "uuid": "d6b590f0-42f3-11ef-9f7f-e9a656123a8b", 00:11:16.960 "strip_size_kb": 64, 00:11:16.960 "state": "online", 00:11:16.960 "raid_level": "concat", 00:11:16.960 "superblock": true, 00:11:16.960 "num_base_bdevs": 3, 00:11:16.960 "num_base_bdevs_discovered": 3, 00:11:16.960 "num_base_bdevs_operational": 3, 00:11:16.960 "base_bdevs_list": [ 00:11:16.960 { 00:11:16.960 "name": "BaseBdev1", 00:11:16.960 "uuid": "f0ee4aaa-28d7-1c5f-9603-59caed126848", 00:11:16.960 "is_configured": true, 00:11:16.960 "data_offset": 2048, 00:11:16.960 "data_size": 63488 00:11:16.960 }, 00:11:16.960 { 00:11:16.960 "name": "BaseBdev2", 00:11:16.960 "uuid": "9ebe8656-e14d-2d56-a068-38493ecd8192", 00:11:16.960 "is_configured": true, 00:11:16.960 "data_offset": 2048, 00:11:16.960 "data_size": 63488 00:11:16.960 }, 00:11:16.960 { 00:11:16.960 "name": "BaseBdev3", 00:11:16.960 "uuid": "1a1b8a1b-561a-0353-9633-811214dbf888", 00:11:16.960 "is_configured": true, 00:11:16.960 "data_offset": 2048, 00:11:16.960 "data_size": 63488 00:11:16.960 } 00:11:16.960 ] 00:11:16.960 }' 00:11:16.960 21:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.960 21:47:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.528 21:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:17.528 21:47:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:17.529 [2024-07-15 21:47:32.528598] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7484d4a0ec0 00:11:18.465 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:18.723 21:47:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.723 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.981 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.981 "name": "raid_bdev1", 00:11:18.981 "uuid": "d6b590f0-42f3-11ef-9f7f-e9a656123a8b", 00:11:18.981 "strip_size_kb": 64, 00:11:18.981 "state": "online", 00:11:18.981 "raid_level": "concat", 00:11:18.981 "superblock": true, 00:11:18.981 "num_base_bdevs": 3, 00:11:18.981 "num_base_bdevs_discovered": 3, 00:11:18.981 "num_base_bdevs_operational": 3, 00:11:18.981 "base_bdevs_list": [ 00:11:18.981 { 00:11:18.981 "name": "BaseBdev1", 00:11:18.981 "uuid": "f0ee4aaa-28d7-1c5f-9603-59caed126848", 00:11:18.981 "is_configured": true, 00:11:18.981 "data_offset": 2048, 00:11:18.981 "data_size": 63488 00:11:18.981 }, 00:11:18.981 { 00:11:18.981 "name": "BaseBdev2", 00:11:18.981 "uuid": "9ebe8656-e14d-2d56-a068-38493ecd8192", 00:11:18.981 "is_configured": true, 00:11:18.981 "data_offset": 2048, 00:11:18.981 "data_size": 63488 00:11:18.981 }, 00:11:18.981 { 00:11:18.981 "name": "BaseBdev3", 00:11:18.981 "uuid": "1a1b8a1b-561a-0353-9633-811214dbf888", 00:11:18.981 "is_configured": true, 00:11:18.981 "data_offset": 2048, 00:11:18.981 "data_size": 63488 00:11:18.981 } 00:11:18.981 ] 00:11:18.981 }' 00:11:18.981 21:47:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.981 21:47:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.275 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:19.533 [2024-07-15 21:47:34.534152] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.533 [2024-07-15 21:47:34.534196] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.533 [2024-07-15 21:47:34.534511] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.533 [2024-07-15 21:47:34.534521] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.533 [2024-07-15 21:47:34.534543] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.533 [2024-07-15 21:47:34.534547] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7484d435400 name raid_bdev1, state offline 00:11:19.533 0 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55848 00:11:19.533 21:47:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 55848 ']' 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 55848 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 55848 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:11:19.533 killing process with pid 55848 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 55848' 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 55848 00:11:19.533 [2024-07-15 21:47:34.561898] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.533 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 55848 00:11:19.534 [2024-07-15 21:47:34.578642] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.OFFLJqr8GG 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:11:19.792 00:11:19.792 real 0m6.177s 00:11:19.792 user 0m9.546s 00:11:19.792 sys 0m1.100s 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:19.792 21:47:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.792 ************************************ 00:11:19.792 END TEST raid_read_error_test 00:11:19.792 ************************************ 00:11:19.792 21:47:34 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:11:19.792 21:47:34 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:19.792 21:47:34 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:11:19.792 21:47:34 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:19.792 21:47:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.792 ************************************ 00:11:19.792 START TEST raid_write_error_test 00:11:19.792 ************************************ 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 3 write 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:19.792 21:47:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.D3uxBHegGz 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55979 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55979 /var/tmp/spdk-raid.sock 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 55979 ']' 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:19.792 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:19.793 21:47:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:19.793 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:19.793 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:19.793 21:47:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.793 [2024-07-15 21:47:34.812806] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:19.793 [2024-07-15 21:47:34.812982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:20.387 EAL: TSC is not safe to use in SMP mode 00:11:20.387 EAL: TSC is not invariant 00:11:20.387 [2024-07-15 21:47:35.335104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.387 [2024-07-15 21:47:35.418764] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:20.387 [2024-07-15 21:47:35.420963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.387 [2024-07-15 21:47:35.421711] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.387 [2024-07-15 21:47:35.421725] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.954 21:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:20.954 21:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:11:20.954 21:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:20.954 21:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.954 BaseBdev1_malloc 00:11:20.954 21:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:21.213 true 00:11:21.213 21:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.472 [2024-07-15 21:47:36.549906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.472 [2024-07-15 21:47:36.549979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.472 [2024-07-15 21:47:36.550022] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1094b3434780 00:11:21.472 [2024-07-15 21:47:36.550031] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.473 [2024-07-15 21:47:36.550707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.473 [2024-07-15 21:47:36.550732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.473 BaseBdev1 00:11:21.473 21:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:21.473 21:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.731 BaseBdev2_malloc 00:11:21.731 21:47:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:21.989 true 00:11:21.989 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.248 [2024-07-15 21:47:37.205933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.248 [2024-07-15 21:47:37.206004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.248 [2024-07-15 21:47:37.206043] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1094b3434c80 00:11:22.248 [2024-07-15 21:47:37.206056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.249 [2024-07-15 21:47:37.206772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.249 [2024-07-15 21:47:37.206796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:22.249 BaseBdev2 00:11:22.249 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:22.249 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.507 BaseBdev3_malloc 00:11:22.507 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:22.766 true 00:11:22.766 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:22.766 [2024-07-15 21:47:37.941966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:22.766 [2024-07-15 21:47:37.942037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.766 [2024-07-15 21:47:37.942077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1094b3435180 00:11:22.766 [2024-07-15 21:47:37.942085] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.766 [2024-07-15 21:47:37.942737] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.766 [2024-07-15 21:47:37.942764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:22.766 BaseBdev3 00:11:23.024 21:47:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:23.024 [2024-07-15 21:47:38.201970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.024 [2024-07-15 21:47:38.202642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.024 [2024-07-15 21:47:38.202667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.024 [2024-07-15 21:47:38.202722] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1094b3435400 00:11:23.025 [2024-07-15 21:47:38.202729] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:23.025 [2024-07-15 21:47:38.202780] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1094b34a0e20 00:11:23.025 [2024-07-15 21:47:38.202897] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1094b3435400 00:11:23.025 [2024-07-15 21:47:38.202902] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1094b3435400 00:11:23.025 [2024-07-15 21:47:38.202928] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.283 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:23.283 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:23.283 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.284 "name": "raid_bdev1", 00:11:23.284 "uuid": "da7f0a20-42f3-11ef-9f7f-e9a656123a8b", 00:11:23.284 "strip_size_kb": 64, 00:11:23.284 "state": "online", 00:11:23.284 "raid_level": "concat", 00:11:23.284 "superblock": true, 00:11:23.284 "num_base_bdevs": 3, 00:11:23.284 "num_base_bdevs_discovered": 3, 00:11:23.284 "num_base_bdevs_operational": 3, 00:11:23.284 "base_bdevs_list": [ 00:11:23.284 { 00:11:23.284 "name": "BaseBdev1", 00:11:23.284 "uuid": "2d0b815e-da81-6f55-9abf-b9eb06d455fb", 00:11:23.284 "is_configured": true, 00:11:23.284 "data_offset": 2048, 00:11:23.284 "data_size": 63488 00:11:23.284 }, 00:11:23.284 { 00:11:23.284 "name": "BaseBdev2", 00:11:23.284 "uuid": "e3b1b768-0685-d65e-87d6-dbd6f4fe5364", 00:11:23.284 "is_configured": true, 00:11:23.284 "data_offset": 2048, 00:11:23.284 "data_size": 63488 00:11:23.284 }, 00:11:23.284 { 00:11:23.284 "name": "BaseBdev3", 00:11:23.284 "uuid": "151fbce9-c31e-d559-ab0e-b099e20b9243", 00:11:23.284 "is_configured": true, 00:11:23.284 "data_offset": 2048, 00:11:23.284 "data_size": 63488 00:11:23.284 } 00:11:23.284 ] 00:11:23.284 }' 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.284 21:47:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.851 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:23.851 21:47:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:11:23.851 [2024-07-15 21:47:38.822164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1094b34a0ec0 00:11:24.816 21:47:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.074 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.332 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.332 "name": "raid_bdev1", 00:11:25.332 "uuid": "da7f0a20-42f3-11ef-9f7f-e9a656123a8b", 00:11:25.332 "strip_size_kb": 64, 00:11:25.332 "state": "online", 00:11:25.332 "raid_level": "concat", 00:11:25.332 "superblock": true, 00:11:25.332 "num_base_bdevs": 3, 00:11:25.332 "num_base_bdevs_discovered": 3, 00:11:25.332 "num_base_bdevs_operational": 3, 00:11:25.332 "base_bdevs_list": [ 00:11:25.332 { 00:11:25.332 "name": "BaseBdev1", 00:11:25.332 "uuid": "2d0b815e-da81-6f55-9abf-b9eb06d455fb", 00:11:25.332 "is_configured": true, 00:11:25.332 "data_offset": 2048, 00:11:25.332 "data_size": 63488 00:11:25.332 }, 00:11:25.332 { 00:11:25.332 "name": "BaseBdev2", 00:11:25.333 "uuid": "e3b1b768-0685-d65e-87d6-dbd6f4fe5364", 00:11:25.333 "is_configured": true, 00:11:25.333 "data_offset": 2048, 00:11:25.333 "data_size": 63488 00:11:25.333 }, 00:11:25.333 { 00:11:25.333 "name": "BaseBdev3", 00:11:25.333 "uuid": "151fbce9-c31e-d559-ab0e-b099e20b9243", 00:11:25.333 "is_configured": true, 00:11:25.333 "data_offset": 2048, 00:11:25.333 "data_size": 63488 00:11:25.333 } 00:11:25.333 ] 00:11:25.333 }' 00:11:25.333 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.333 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.591 
21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:25.849 [2024-07-15 21:47:40.852137] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.849 [2024-07-15 21:47:40.852194] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.849 [2024-07-15 21:47:40.852533] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.849 [2024-07-15 21:47:40.852549] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.849 [2024-07-15 21:47:40.852571] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.849 [2024-07-15 21:47:40.852575] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1094b3435400 name raid_bdev1, state offline 00:11:25.849 0 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55979 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 55979 ']' 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 55979 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 55979 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:11:25.849 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:11:25.850 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:11:25.850 killing process with pid 55979 00:11:25.850 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 55979' 00:11:25.850 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 55979 00:11:25.850 [2024-07-15 21:47:40.880615] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.850 21:47:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 55979 00:11:25.850 [2024-07-15 21:47:40.897825] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.D3uxBHegGz 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:11:26.109 00:11:26.109 real 0m6.275s 00:11:26.109 user 0m9.770s 00:11:26.109 sys 0m1.031s 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # 
xtrace_disable 00:11:26.109 ************************************ 00:11:26.109 END TEST raid_write_error_test 00:11:26.109 ************************************ 00:11:26.109 21:47:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 21:47:41 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:11:26.109 21:47:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:26.109 21:47:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:26.109 21:47:41 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:11:26.109 21:47:41 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:26.109 21:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 ************************************ 00:11:26.109 START TEST raid_state_function_test 00:11:26.109 ************************************ 00:11:26.109 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 3 false 00:11:26.109 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56108 00:11:26.110 Process raid pid: 56108 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56108' 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56108 /var/tmp/spdk-raid.sock 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 56108 ']' 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:26.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:26.110 21:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.110 [2024-07-15 21:47:41.132646] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:26.110 [2024-07-15 21:47:41.132925] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:26.678 EAL: TSC is not safe to use in SMP mode 00:11:26.678 EAL: TSC is not invariant 00:11:26.678 [2024-07-15 21:47:41.678955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.678 [2024-07-15 21:47:41.760284] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:26.678 [2024-07-15 21:47:41.762538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.678 [2024-07-15 21:47:41.763353] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.678 [2024-07-15 21:47:41.763367] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.245 21:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:27.245 21:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:11:27.245 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:27.245 [2024-07-15 21:47:42.419064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.245 [2024-07-15 21:47:42.419142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.245 [2024-07-15 21:47:42.419147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.245 [2024-07-15 21:47:42.419171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.245 [2024-07-15 21:47:42.419190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.245 [2024-07-15 21:47:42.419197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.505 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.765 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.765 "name": "Existed_Raid", 00:11:27.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.765 "strip_size_kb": 0, 00:11:27.765 "state": "configuring", 00:11:27.765 "raid_level": "raid1", 00:11:27.765 "superblock": false, 00:11:27.765 "num_base_bdevs": 3, 00:11:27.765 "num_base_bdevs_discovered": 0, 00:11:27.765 "num_base_bdevs_operational": 3, 00:11:27.765 "base_bdevs_list": [ 00:11:27.765 
{ 00:11:27.765 "name": "BaseBdev1", 00:11:27.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.765 "is_configured": false, 00:11:27.765 "data_offset": 0, 00:11:27.765 "data_size": 0 00:11:27.765 }, 00:11:27.765 { 00:11:27.765 "name": "BaseBdev2", 00:11:27.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.765 "is_configured": false, 00:11:27.765 "data_offset": 0, 00:11:27.765 "data_size": 0 00:11:27.765 }, 00:11:27.765 { 00:11:27.765 "name": "BaseBdev3", 00:11:27.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.765 "is_configured": false, 00:11:27.765 "data_offset": 0, 00:11:27.765 "data_size": 0 00:11:27.765 } 00:11:27.765 ] 00:11:27.765 }' 00:11:27.765 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.765 21:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.025 21:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:28.283 [2024-07-15 21:47:43.231066] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.283 [2024-07-15 21:47:43.231114] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e45d234500 name Existed_Raid, state configuring 00:11:28.283 21:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:28.283 [2024-07-15 21:47:43.451075] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.283 [2024-07-15 21:47:43.451138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.283 [2024-07-15 21:47:43.451143] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.283 [2024-07-15 21:47:43.451167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.283 [2024-07-15 21:47:43.451170] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.283 [2024-07-15 21:47:43.451193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.283 21:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.541 [2024-07-15 21:47:43.688109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.541 BaseBdev1 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:28.541 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:28.799 21:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.057 [ 00:11:29.057 { 00:11:29.057 "name": "BaseBdev1", 00:11:29.057 "aliases": [ 00:11:29.057 "ddc400e9-42f3-11ef-9f7f-e9a656123a8b" 00:11:29.057 ], 00:11:29.057 "product_name": "Malloc disk", 00:11:29.057 "block_size": 512, 00:11:29.057 "num_blocks": 65536, 00:11:29.057 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:29.057 "assigned_rate_limits": { 00:11:29.057 "rw_ios_per_sec": 0, 00:11:29.057 "rw_mbytes_per_sec": 0, 00:11:29.057 "r_mbytes_per_sec": 0, 00:11:29.057 "w_mbytes_per_sec": 0 00:11:29.057 }, 00:11:29.057 "claimed": true, 00:11:29.057 "claim_type": "exclusive_write", 00:11:29.057 "zoned": false, 00:11:29.057 "supported_io_types": { 00:11:29.057 "read": true, 00:11:29.057 "write": true, 00:11:29.057 "unmap": true, 00:11:29.057 "flush": true, 00:11:29.057 "reset": true, 00:11:29.057 "nvme_admin": false, 00:11:29.057 "nvme_io": false, 00:11:29.057 "nvme_io_md": false, 00:11:29.057 "write_zeroes": true, 00:11:29.057 "zcopy": true, 00:11:29.057 "get_zone_info": false, 00:11:29.057 "zone_management": false, 00:11:29.057 "zone_append": false, 00:11:29.057 "compare": false, 00:11:29.057 "compare_and_write": false, 00:11:29.057 "abort": true, 00:11:29.057 "seek_hole": false, 00:11:29.057 "seek_data": false, 00:11:29.057 "copy": true, 00:11:29.057 "nvme_iov_md": false 00:11:29.057 }, 00:11:29.057 "memory_domains": [ 00:11:29.057 { 00:11:29.057 "dma_device_id": "system", 00:11:29.057 "dma_device_type": 1 00:11:29.057 }, 00:11:29.057 { 00:11:29.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.057 "dma_device_type": 2 00:11:29.057 } 00:11:29.057 ], 00:11:29.057 "driver_specific": {} 00:11:29.057 } 00:11:29.057 ] 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.057 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.316 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:29.316 "name": "Existed_Raid", 00:11:29.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.316 "strip_size_kb": 0, 00:11:29.316 "state": "configuring", 00:11:29.316 "raid_level": "raid1", 00:11:29.316 "superblock": false, 00:11:29.316 "num_base_bdevs": 3, 00:11:29.316 "num_base_bdevs_discovered": 1, 00:11:29.316 "num_base_bdevs_operational": 3, 00:11:29.316 "base_bdevs_list": [ 00:11:29.316 { 00:11:29.316 "name": "BaseBdev1", 00:11:29.316 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:29.316 "is_configured": true, 00:11:29.316 "data_offset": 0, 00:11:29.316 "data_size": 65536 00:11:29.316 }, 00:11:29.316 { 00:11:29.316 "name": "BaseBdev2", 00:11:29.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.316 "is_configured": false, 00:11:29.316 "data_offset": 0, 00:11:29.316 "data_size": 0 00:11:29.316 }, 00:11:29.316 { 00:11:29.316 "name": "BaseBdev3", 00:11:29.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.316 "is_configured": false, 00:11:29.316 "data_offset": 0, 00:11:29.316 "data_size": 0 00:11:29.316 } 00:11:29.316 ] 00:11:29.316 }' 00:11:29.316 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.316 21:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.882 21:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.882 [2024-07-15 21:47:45.027120] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.882 [2024-07-15 21:47:45.027147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e45d234500 name Existed_Raid, state configuring 00:11:29.882 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:30.140 [2024-07-15 21:47:45.255155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.140 [2024-07-15 21:47:45.256064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.140 [2024-07-15 21:47:45.256131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.140 [2024-07-15 21:47:45.256151] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:30.140 [2024-07-15 21:47:45.256158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.140 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.399 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:30.399 "name": "Existed_Raid", 00:11:30.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.399 "strip_size_kb": 0, 00:11:30.399 "state": "configuring", 00:11:30.399 "raid_level": "raid1", 00:11:30.399 "superblock": false, 00:11:30.399 "num_base_bdevs": 3, 00:11:30.399 "num_base_bdevs_discovered": 1, 00:11:30.399 "num_base_bdevs_operational": 3, 00:11:30.399 "base_bdevs_list": [ 00:11:30.399 { 00:11:30.399 "name": "BaseBdev1", 00:11:30.399 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:30.399 "is_configured": true, 00:11:30.399 "data_offset": 0, 00:11:30.400 "data_size": 65536 00:11:30.400 }, 00:11:30.400 { 00:11:30.400 "name": "BaseBdev2", 00:11:30.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.400 "is_configured": false, 00:11:30.400 "data_offset": 0, 00:11:30.400 "data_size": 0 00:11:30.400 }, 00:11:30.400 { 00:11:30.400 "name": "BaseBdev3", 00:11:30.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.400 "is_configured": false, 00:11:30.400 "data_offset": 0, 00:11:30.400 "data_size": 0 00:11:30.400 } 00:11:30.400 ] 00:11:30.400 }' 00:11:30.400 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:30.400 21:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.658 21:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.916 [2024-07-15 21:47:46.015434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.916 BaseBdev2 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:30.916 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.174 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.432 [ 00:11:31.432 { 00:11:31.432 "name": "BaseBdev2", 00:11:31.432 "aliases": [ 00:11:31.433 "df274342-42f3-11ef-9f7f-e9a656123a8b" 00:11:31.433 ], 00:11:31.433 "product_name": "Malloc disk", 00:11:31.433 "block_size": 512, 00:11:31.433 "num_blocks": 65536, 00:11:31.433 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:31.433 "assigned_rate_limits": { 00:11:31.433 "rw_ios_per_sec": 0, 00:11:31.433 "rw_mbytes_per_sec": 0, 00:11:31.433 "r_mbytes_per_sec": 0, 00:11:31.433 "w_mbytes_per_sec": 0 00:11:31.433 }, 00:11:31.433 "claimed": true, 00:11:31.433 "claim_type": "exclusive_write", 00:11:31.433 "zoned": false, 00:11:31.433 "supported_io_types": { 00:11:31.433 "read": true, 00:11:31.433 "write": true, 00:11:31.433 "unmap": true, 00:11:31.433 "flush": true, 00:11:31.433 "reset": true, 00:11:31.433 "nvme_admin": false, 00:11:31.433 "nvme_io": false, 00:11:31.433 "nvme_io_md": false, 00:11:31.433 "write_zeroes": true, 00:11:31.433 "zcopy": true, 00:11:31.433 "get_zone_info": false, 00:11:31.433 "zone_management": false, 00:11:31.433 "zone_append": false, 00:11:31.433 "compare": false, 00:11:31.433 "compare_and_write": false, 00:11:31.433 "abort": true, 00:11:31.433 "seek_hole": false, 00:11:31.433 "seek_data": false, 00:11:31.433 "copy": true, 00:11:31.433 "nvme_iov_md": false 00:11:31.433 }, 00:11:31.433 "memory_domains": [ 00:11:31.433 { 00:11:31.433 "dma_device_id": "system", 00:11:31.433 "dma_device_type": 1 00:11:31.433 }, 00:11:31.433 { 00:11:31.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.433 "dma_device_type": 2 00:11:31.433 } 00:11:31.433 ], 00:11:31.433 "driver_specific": {} 00:11:31.433 } 00:11:31.433 ] 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.433 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.691 21:47:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.691 "name": "Existed_Raid", 00:11:31.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.691 "strip_size_kb": 0, 00:11:31.691 "state": "configuring", 00:11:31.691 "raid_level": "raid1", 00:11:31.691 "superblock": false, 00:11:31.691 "num_base_bdevs": 3, 00:11:31.691 "num_base_bdevs_discovered": 2, 00:11:31.691 "num_base_bdevs_operational": 3, 00:11:31.691 "base_bdevs_list": [ 00:11:31.691 { 00:11:31.691 "name": "BaseBdev1", 00:11:31.691 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:31.691 "is_configured": true, 00:11:31.691 "data_offset": 0, 00:11:31.691 "data_size": 65536 00:11:31.691 }, 00:11:31.692 { 00:11:31.692 "name": "BaseBdev2", 00:11:31.692 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:31.692 "is_configured": true, 00:11:31.692 "data_offset": 0, 00:11:31.692 "data_size": 65536 00:11:31.692 }, 00:11:31.692 { 00:11:31.692 "name": "BaseBdev3", 00:11:31.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.692 "is_configured": false, 00:11:31.692 "data_offset": 0, 00:11:31.692 "data_size": 0 00:11:31.692 } 00:11:31.692 ] 00:11:31.692 }' 00:11:31.692 21:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.692 21:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.949 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.207 [2024-07-15 21:47:47.219494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.207 [2024-07-15 21:47:47.219517] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23e45d234a00 00:11:32.207 [2024-07-15 21:47:47.219537] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:32.207 [2024-07-15 21:47:47.219558] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23e45d297e20 00:11:32.207 [2024-07-15 21:47:47.219647] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23e45d234a00 00:11:32.207 [2024-07-15 21:47:47.219652] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23e45d234a00 00:11:32.207 [2024-07-15 21:47:47.219683] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.207 BaseBdev3 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:32.207 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:32.466 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:11:32.724 [ 00:11:32.724 { 00:11:32.724 "name": "BaseBdev3", 00:11:32.724 "aliases": [ 00:11:32.724 "dfdefd26-42f3-11ef-9f7f-e9a656123a8b" 00:11:32.724 ], 00:11:32.724 "product_name": "Malloc disk", 00:11:32.724 "block_size": 512, 00:11:32.724 "num_blocks": 65536, 00:11:32.724 "uuid": "dfdefd26-42f3-11ef-9f7f-e9a656123a8b", 00:11:32.724 "assigned_rate_limits": { 00:11:32.724 "rw_ios_per_sec": 0, 00:11:32.724 "rw_mbytes_per_sec": 0, 00:11:32.724 "r_mbytes_per_sec": 0, 00:11:32.724 "w_mbytes_per_sec": 0 00:11:32.724 }, 00:11:32.724 "claimed": true, 00:11:32.724 "claim_type": "exclusive_write", 00:11:32.724 "zoned": false, 00:11:32.724 "supported_io_types": { 00:11:32.724 "read": true, 00:11:32.724 "write": true, 00:11:32.724 "unmap": true, 00:11:32.724 "flush": true, 00:11:32.724 "reset": true, 00:11:32.724 "nvme_admin": false, 00:11:32.724 "nvme_io": false, 00:11:32.724 "nvme_io_md": false, 00:11:32.724 "write_zeroes": true, 00:11:32.724 "zcopy": true, 00:11:32.724 "get_zone_info": false, 00:11:32.724 "zone_management": false, 00:11:32.724 "zone_append": false, 00:11:32.724 "compare": false, 00:11:32.724 "compare_and_write": false, 00:11:32.724 "abort": true, 00:11:32.724 "seek_hole": false, 00:11:32.724 "seek_data": false, 00:11:32.724 "copy": true, 00:11:32.724 "nvme_iov_md": false 00:11:32.724 }, 00:11:32.724 "memory_domains": [ 00:11:32.724 { 00:11:32.724 "dma_device_id": "system", 00:11:32.724 "dma_device_type": 1 00:11:32.724 }, 00:11:32.724 { 00:11:32.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.724 "dma_device_type": 2 00:11:32.724 } 00:11:32.724 ], 00:11:32.724 "driver_specific": {} 00:11:32.724 } 00:11:32.724 ] 00:11:32.724 21:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.725 21:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.007 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:33.007 "name": "Existed_Raid", 00:11:33.007 "uuid": "dfdf0362-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.007 "strip_size_kb": 0, 00:11:33.007 "state": "online", 00:11:33.007 "raid_level": "raid1", 00:11:33.007 "superblock": false, 00:11:33.007 "num_base_bdevs": 3, 00:11:33.007 "num_base_bdevs_discovered": 3, 00:11:33.007 "num_base_bdevs_operational": 3, 00:11:33.007 "base_bdevs_list": [ 00:11:33.007 { 00:11:33.007 "name": "BaseBdev1", 00:11:33.007 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.007 "is_configured": true, 00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 65536 00:11:33.007 }, 00:11:33.007 { 00:11:33.007 "name": "BaseBdev2", 00:11:33.007 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.007 "is_configured": true, 00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 65536 00:11:33.007 }, 00:11:33.007 { 00:11:33.007 "name": "BaseBdev3", 00:11:33.007 "uuid": "dfdefd26-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.007 "is_configured": true, 00:11:33.007 "data_offset": 0, 00:11:33.007 "data_size": 65536 00:11:33.007 } 00:11:33.007 ] 00:11:33.007 }' 00:11:33.007 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.007 21:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:33.265 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:33.523 [2024-07-15 21:47:48.511433] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.523 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:33.523 "name": "Existed_Raid", 00:11:33.523 "aliases": [ 00:11:33.523 "dfdf0362-42f3-11ef-9f7f-e9a656123a8b" 00:11:33.523 ], 00:11:33.523 "product_name": "Raid Volume", 00:11:33.523 "block_size": 512, 00:11:33.523 "num_blocks": 65536, 00:11:33.523 "uuid": "dfdf0362-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.523 "assigned_rate_limits": { 00:11:33.523 "rw_ios_per_sec": 0, 00:11:33.523 "rw_mbytes_per_sec": 0, 00:11:33.523 "r_mbytes_per_sec": 0, 00:11:33.523 "w_mbytes_per_sec": 0 00:11:33.523 }, 00:11:33.523 "claimed": false, 00:11:33.523 "zoned": false, 00:11:33.523 "supported_io_types": { 00:11:33.523 "read": true, 00:11:33.523 "write": true, 00:11:33.523 "unmap": false, 00:11:33.523 "flush": false, 00:11:33.523 "reset": true, 00:11:33.523 "nvme_admin": false, 00:11:33.523 "nvme_io": false, 00:11:33.523 "nvme_io_md": false, 00:11:33.523 "write_zeroes": true, 00:11:33.523 "zcopy": false, 00:11:33.523 "get_zone_info": false, 00:11:33.523 "zone_management": false, 00:11:33.523 "zone_append": false, 00:11:33.523 "compare": false, 
00:11:33.523 "compare_and_write": false, 00:11:33.523 "abort": false, 00:11:33.523 "seek_hole": false, 00:11:33.523 "seek_data": false, 00:11:33.523 "copy": false, 00:11:33.523 "nvme_iov_md": false 00:11:33.523 }, 00:11:33.523 "memory_domains": [ 00:11:33.523 { 00:11:33.523 "dma_device_id": "system", 00:11:33.523 "dma_device_type": 1 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.523 "dma_device_type": 2 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "dma_device_id": "system", 00:11:33.523 "dma_device_type": 1 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.523 "dma_device_type": 2 00:11:33.523 }, 00:11:33.523 { 00:11:33.524 "dma_device_id": "system", 00:11:33.524 "dma_device_type": 1 00:11:33.524 }, 00:11:33.524 { 00:11:33.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.524 "dma_device_type": 2 00:11:33.524 } 00:11:33.524 ], 00:11:33.524 "driver_specific": { 00:11:33.524 "raid": { 00:11:33.524 "uuid": "dfdf0362-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.524 "strip_size_kb": 0, 00:11:33.524 "state": "online", 00:11:33.524 "raid_level": "raid1", 00:11:33.524 "superblock": false, 00:11:33.524 "num_base_bdevs": 3, 00:11:33.524 "num_base_bdevs_discovered": 3, 00:11:33.524 "num_base_bdevs_operational": 3, 00:11:33.524 "base_bdevs_list": [ 00:11:33.524 { 00:11:33.524 "name": "BaseBdev1", 00:11:33.524 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.524 "is_configured": true, 00:11:33.524 "data_offset": 0, 00:11:33.524 "data_size": 65536 00:11:33.524 }, 00:11:33.524 { 00:11:33.524 "name": "BaseBdev2", 00:11:33.524 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.524 "is_configured": true, 00:11:33.524 "data_offset": 0, 00:11:33.524 "data_size": 65536 00:11:33.524 }, 00:11:33.524 { 00:11:33.524 "name": "BaseBdev3", 00:11:33.524 "uuid": "dfdefd26-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.524 "is_configured": true, 00:11:33.524 "data_offset": 0, 00:11:33.524 "data_size": 65536 00:11:33.524 } 00:11:33.524 ] 00:11:33.524 } 00:11:33.524 } 00:11:33.524 }' 00:11:33.524 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.524 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:33.524 BaseBdev2 00:11:33.524 BaseBdev3' 00:11:33.524 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.524 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:33.524 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:33.781 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:33.781 "name": "BaseBdev1", 00:11:33.781 "aliases": [ 00:11:33.781 "ddc400e9-42f3-11ef-9f7f-e9a656123a8b" 00:11:33.781 ], 00:11:33.781 "product_name": "Malloc disk", 00:11:33.781 "block_size": 512, 00:11:33.781 "num_blocks": 65536, 00:11:33.781 "uuid": "ddc400e9-42f3-11ef-9f7f-e9a656123a8b", 00:11:33.781 "assigned_rate_limits": { 00:11:33.781 "rw_ios_per_sec": 0, 00:11:33.781 "rw_mbytes_per_sec": 0, 00:11:33.781 "r_mbytes_per_sec": 0, 00:11:33.781 "w_mbytes_per_sec": 0 00:11:33.781 }, 00:11:33.782 "claimed": true, 00:11:33.782 "claim_type": "exclusive_write", 00:11:33.782 "zoned": false, 00:11:33.782 
"supported_io_types": { 00:11:33.782 "read": true, 00:11:33.782 "write": true, 00:11:33.782 "unmap": true, 00:11:33.782 "flush": true, 00:11:33.782 "reset": true, 00:11:33.782 "nvme_admin": false, 00:11:33.782 "nvme_io": false, 00:11:33.782 "nvme_io_md": false, 00:11:33.782 "write_zeroes": true, 00:11:33.782 "zcopy": true, 00:11:33.782 "get_zone_info": false, 00:11:33.782 "zone_management": false, 00:11:33.782 "zone_append": false, 00:11:33.782 "compare": false, 00:11:33.782 "compare_and_write": false, 00:11:33.782 "abort": true, 00:11:33.782 "seek_hole": false, 00:11:33.782 "seek_data": false, 00:11:33.782 "copy": true, 00:11:33.782 "nvme_iov_md": false 00:11:33.782 }, 00:11:33.782 "memory_domains": [ 00:11:33.782 { 00:11:33.782 "dma_device_id": "system", 00:11:33.782 "dma_device_type": 1 00:11:33.782 }, 00:11:33.782 { 00:11:33.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.782 "dma_device_type": 2 00:11:33.782 } 00:11:33.782 ], 00:11:33.782 "driver_specific": {} 00:11:33.782 }' 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:33.782 21:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.039 "name": "BaseBdev2", 00:11:34.039 "aliases": [ 00:11:34.039 "df274342-42f3-11ef-9f7f-e9a656123a8b" 00:11:34.039 ], 00:11:34.039 "product_name": "Malloc disk", 00:11:34.039 "block_size": 512, 00:11:34.039 "num_blocks": 65536, 00:11:34.039 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:34.039 "assigned_rate_limits": { 00:11:34.039 "rw_ios_per_sec": 0, 00:11:34.039 "rw_mbytes_per_sec": 0, 00:11:34.039 "r_mbytes_per_sec": 0, 00:11:34.039 "w_mbytes_per_sec": 0 00:11:34.039 }, 00:11:34.039 "claimed": true, 00:11:34.039 "claim_type": "exclusive_write", 00:11:34.039 "zoned": false, 00:11:34.039 "supported_io_types": { 00:11:34.039 "read": true, 00:11:34.039 "write": true, 00:11:34.039 "unmap": true, 00:11:34.039 "flush": true, 00:11:34.039 "reset": true, 00:11:34.039 "nvme_admin": false, 
00:11:34.039 "nvme_io": false, 00:11:34.039 "nvme_io_md": false, 00:11:34.039 "write_zeroes": true, 00:11:34.039 "zcopy": true, 00:11:34.039 "get_zone_info": false, 00:11:34.039 "zone_management": false, 00:11:34.039 "zone_append": false, 00:11:34.039 "compare": false, 00:11:34.039 "compare_and_write": false, 00:11:34.039 "abort": true, 00:11:34.039 "seek_hole": false, 00:11:34.039 "seek_data": false, 00:11:34.039 "copy": true, 00:11:34.039 "nvme_iov_md": false 00:11:34.039 }, 00:11:34.039 "memory_domains": [ 00:11:34.039 { 00:11:34.039 "dma_device_id": "system", 00:11:34.039 "dma_device_type": 1 00:11:34.039 }, 00:11:34.039 { 00:11:34.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.039 "dma_device_type": 2 00:11:34.039 } 00:11:34.039 ], 00:11:34.039 "driver_specific": {} 00:11:34.039 }' 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:34.039 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.297 "name": "BaseBdev3", 00:11:34.297 "aliases": [ 00:11:34.297 "dfdefd26-42f3-11ef-9f7f-e9a656123a8b" 00:11:34.297 ], 00:11:34.297 "product_name": "Malloc disk", 00:11:34.297 "block_size": 512, 00:11:34.297 "num_blocks": 65536, 00:11:34.297 "uuid": "dfdefd26-42f3-11ef-9f7f-e9a656123a8b", 00:11:34.297 "assigned_rate_limits": { 00:11:34.297 "rw_ios_per_sec": 0, 00:11:34.297 "rw_mbytes_per_sec": 0, 00:11:34.297 "r_mbytes_per_sec": 0, 00:11:34.297 "w_mbytes_per_sec": 0 00:11:34.297 }, 00:11:34.297 "claimed": true, 00:11:34.297 "claim_type": "exclusive_write", 00:11:34.297 "zoned": false, 00:11:34.297 "supported_io_types": { 00:11:34.297 "read": true, 00:11:34.297 "write": true, 00:11:34.297 "unmap": true, 00:11:34.297 "flush": true, 00:11:34.297 "reset": true, 00:11:34.297 "nvme_admin": false, 00:11:34.297 "nvme_io": false, 00:11:34.297 "nvme_io_md": false, 00:11:34.297 "write_zeroes": true, 00:11:34.297 "zcopy": true, 00:11:34.297 "get_zone_info": false, 00:11:34.297 "zone_management": 
false, 00:11:34.297 "zone_append": false, 00:11:34.297 "compare": false, 00:11:34.297 "compare_and_write": false, 00:11:34.297 "abort": true, 00:11:34.297 "seek_hole": false, 00:11:34.297 "seek_data": false, 00:11:34.297 "copy": true, 00:11:34.297 "nvme_iov_md": false 00:11:34.297 }, 00:11:34.297 "memory_domains": [ 00:11:34.297 { 00:11:34.297 "dma_device_id": "system", 00:11:34.297 "dma_device_type": 1 00:11:34.297 }, 00:11:34.297 { 00:11:34.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.297 "dma_device_type": 2 00:11:34.297 } 00:11:34.297 ], 00:11:34.297 "driver_specific": {} 00:11:34.297 }' 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.297 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:34.554 [2024-07-15 21:47:49.719520] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:34.554 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.555 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.812 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.812 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.812 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.812 "name": "Existed_Raid", 00:11:34.813 "uuid": "dfdf0362-42f3-11ef-9f7f-e9a656123a8b", 00:11:34.813 "strip_size_kb": 0, 00:11:34.813 "state": "online", 00:11:34.813 "raid_level": "raid1", 00:11:34.813 "superblock": false, 00:11:34.813 "num_base_bdevs": 3, 00:11:34.813 "num_base_bdevs_discovered": 2, 00:11:34.813 "num_base_bdevs_operational": 2, 00:11:34.813 "base_bdevs_list": [ 00:11:34.813 { 00:11:34.813 "name": null, 00:11:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.813 "is_configured": false, 00:11:34.813 "data_offset": 0, 00:11:34.813 "data_size": 65536 00:11:34.813 }, 00:11:34.813 { 00:11:34.813 "name": "BaseBdev2", 00:11:34.813 "uuid": "df274342-42f3-11ef-9f7f-e9a656123a8b", 00:11:34.813 "is_configured": true, 00:11:34.813 "data_offset": 0, 00:11:34.813 "data_size": 65536 00:11:34.813 }, 00:11:34.813 { 00:11:34.813 "name": "BaseBdev3", 00:11:34.813 "uuid": "dfdefd26-42f3-11ef-9f7f-e9a656123a8b", 00:11:34.813 "is_configured": true, 00:11:34.813 "data_offset": 0, 00:11:34.813 "data_size": 65536 00:11:34.813 } 00:11:34.813 ] 00:11:34.813 }' 00:11:34.813 21:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.813 21:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.378 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:35.636 [2024-07-15 21:47:50.773594] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.636 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:35.636 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:35.636 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.636 21:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:35.893 21:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:35.893 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.893 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:36.150 [2024-07-15 21:47:51.307562] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.150 [2024-07-15 21:47:51.307613] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.150 [2024-07-15 21:47:51.314249] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.150 [2024-07-15 21:47:51.314267] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.150 [2024-07-15 21:47:51.314272] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e45d234a00 name Existed_Raid, state offline 00:11:36.150 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:36.150 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:36.150 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.150 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.406 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:36.406 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:36.407 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:36.407 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:36.407 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:36.407 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.664 BaseBdev2 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:36.664 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:36.921 21:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.179 [ 00:11:37.179 { 00:11:37.179 "name": "BaseBdev2", 00:11:37.179 "aliases": [ 00:11:37.179 "e292f4f3-42f3-11ef-9f7f-e9a656123a8b" 00:11:37.179 ], 00:11:37.179 
"product_name": "Malloc disk", 00:11:37.179 "block_size": 512, 00:11:37.179 "num_blocks": 65536, 00:11:37.179 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:37.179 "assigned_rate_limits": { 00:11:37.179 "rw_ios_per_sec": 0, 00:11:37.179 "rw_mbytes_per_sec": 0, 00:11:37.179 "r_mbytes_per_sec": 0, 00:11:37.179 "w_mbytes_per_sec": 0 00:11:37.179 }, 00:11:37.179 "claimed": false, 00:11:37.179 "zoned": false, 00:11:37.179 "supported_io_types": { 00:11:37.179 "read": true, 00:11:37.179 "write": true, 00:11:37.179 "unmap": true, 00:11:37.179 "flush": true, 00:11:37.179 "reset": true, 00:11:37.179 "nvme_admin": false, 00:11:37.179 "nvme_io": false, 00:11:37.179 "nvme_io_md": false, 00:11:37.179 "write_zeroes": true, 00:11:37.179 "zcopy": true, 00:11:37.179 "get_zone_info": false, 00:11:37.179 "zone_management": false, 00:11:37.179 "zone_append": false, 00:11:37.179 "compare": false, 00:11:37.179 "compare_and_write": false, 00:11:37.179 "abort": true, 00:11:37.179 "seek_hole": false, 00:11:37.179 "seek_data": false, 00:11:37.179 "copy": true, 00:11:37.179 "nvme_iov_md": false 00:11:37.179 }, 00:11:37.179 "memory_domains": [ 00:11:37.179 { 00:11:37.179 "dma_device_id": "system", 00:11:37.179 "dma_device_type": 1 00:11:37.179 }, 00:11:37.179 { 00:11:37.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.179 "dma_device_type": 2 00:11:37.179 } 00:11:37.179 ], 00:11:37.179 "driver_specific": {} 00:11:37.179 } 00:11:37.179 ] 00:11:37.179 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:37.179 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:37.179 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:37.179 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.437 BaseBdev3 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:37.437 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.695 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.695 [ 00:11:37.695 { 00:11:37.695 "name": "BaseBdev3", 00:11:37.695 "aliases": [ 00:11:37.695 "e2f67352-42f3-11ef-9f7f-e9a656123a8b" 00:11:37.695 ], 00:11:37.695 "product_name": "Malloc disk", 00:11:37.695 "block_size": 512, 00:11:37.695 "num_blocks": 65536, 00:11:37.695 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:37.695 "assigned_rate_limits": { 00:11:37.695 "rw_ios_per_sec": 0, 00:11:37.695 "rw_mbytes_per_sec": 0, 00:11:37.695 "r_mbytes_per_sec": 0, 00:11:37.695 "w_mbytes_per_sec": 0 
00:11:37.695 }, 00:11:37.695 "claimed": false, 00:11:37.695 "zoned": false, 00:11:37.695 "supported_io_types": { 00:11:37.695 "read": true, 00:11:37.695 "write": true, 00:11:37.695 "unmap": true, 00:11:37.695 "flush": true, 00:11:37.695 "reset": true, 00:11:37.695 "nvme_admin": false, 00:11:37.695 "nvme_io": false, 00:11:37.695 "nvme_io_md": false, 00:11:37.695 "write_zeroes": true, 00:11:37.695 "zcopy": true, 00:11:37.695 "get_zone_info": false, 00:11:37.695 "zone_management": false, 00:11:37.695 "zone_append": false, 00:11:37.695 "compare": false, 00:11:37.695 "compare_and_write": false, 00:11:37.695 "abort": true, 00:11:37.695 "seek_hole": false, 00:11:37.695 "seek_data": false, 00:11:37.695 "copy": true, 00:11:37.695 "nvme_iov_md": false 00:11:37.695 }, 00:11:37.695 "memory_domains": [ 00:11:37.695 { 00:11:37.695 "dma_device_id": "system", 00:11:37.695 "dma_device_type": 1 00:11:37.695 }, 00:11:37.695 { 00:11:37.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.695 "dma_device_type": 2 00:11:37.695 } 00:11:37.695 ], 00:11:37.696 "driver_specific": {} 00:11:37.696 } 00:11:37.696 ] 00:11:37.696 21:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:37.696 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:37.696 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:37.696 21:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:37.953 [2024-07-15 21:47:53.062333] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.954 [2024-07-15 21:47:53.062377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.954 [2024-07-15 21:47:53.062401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.954 [2024-07-15 21:47:53.063052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.954 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.954 21:47:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.211 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.211 "name": "Existed_Raid", 00:11:38.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.211 "strip_size_kb": 0, 00:11:38.211 "state": "configuring", 00:11:38.211 "raid_level": "raid1", 00:11:38.211 "superblock": false, 00:11:38.211 "num_base_bdevs": 3, 00:11:38.211 "num_base_bdevs_discovered": 2, 00:11:38.211 "num_base_bdevs_operational": 3, 00:11:38.211 "base_bdevs_list": [ 00:11:38.211 { 00:11:38.211 "name": "BaseBdev1", 00:11:38.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.211 "is_configured": false, 00:11:38.211 "data_offset": 0, 00:11:38.211 "data_size": 0 00:11:38.211 }, 00:11:38.212 { 00:11:38.212 "name": "BaseBdev2", 00:11:38.212 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:38.212 "is_configured": true, 00:11:38.212 "data_offset": 0, 00:11:38.212 "data_size": 65536 00:11:38.212 }, 00:11:38.212 { 00:11:38.212 "name": "BaseBdev3", 00:11:38.212 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:38.212 "is_configured": true, 00:11:38.212 "data_offset": 0, 00:11:38.212 "data_size": 65536 00:11:38.212 } 00:11:38.212 ] 00:11:38.212 }' 00:11:38.212 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.212 21:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.469 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:38.736 [2024-07-15 21:47:53.918345] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.998 21:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.998 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.998 "name": "Existed_Raid", 00:11:38.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.999 "strip_size_kb": 0, 00:11:38.999 "state": 
"configuring", 00:11:38.999 "raid_level": "raid1", 00:11:38.999 "superblock": false, 00:11:38.999 "num_base_bdevs": 3, 00:11:38.999 "num_base_bdevs_discovered": 1, 00:11:38.999 "num_base_bdevs_operational": 3, 00:11:38.999 "base_bdevs_list": [ 00:11:38.999 { 00:11:38.999 "name": "BaseBdev1", 00:11:38.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.999 "is_configured": false, 00:11:38.999 "data_offset": 0, 00:11:38.999 "data_size": 0 00:11:38.999 }, 00:11:38.999 { 00:11:38.999 "name": null, 00:11:38.999 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:38.999 "is_configured": false, 00:11:38.999 "data_offset": 0, 00:11:38.999 "data_size": 65536 00:11:38.999 }, 00:11:38.999 { 00:11:38.999 "name": "BaseBdev3", 00:11:38.999 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:38.999 "is_configured": true, 00:11:38.999 "data_offset": 0, 00:11:38.999 "data_size": 65536 00:11:38.999 } 00:11:38.999 ] 00:11:38.999 }' 00:11:38.999 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.999 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.294 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.294 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.551 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:39.551 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.809 [2024-07-15 21:47:54.958543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.809 BaseBdev1 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:39.809 21:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:40.067 21:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.326 [ 00:11:40.326 { 00:11:40.326 "name": "BaseBdev1", 00:11:40.326 "aliases": [ 00:11:40.326 "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b" 00:11:40.326 ], 00:11:40.326 "product_name": "Malloc disk", 00:11:40.326 "block_size": 512, 00:11:40.326 "num_blocks": 65536, 00:11:40.326 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:40.326 "assigned_rate_limits": { 00:11:40.326 "rw_ios_per_sec": 0, 00:11:40.326 "rw_mbytes_per_sec": 0, 00:11:40.326 "r_mbytes_per_sec": 0, 00:11:40.326 "w_mbytes_per_sec": 0 00:11:40.326 }, 00:11:40.326 "claimed": true, 00:11:40.326 "claim_type": 
"exclusive_write", 00:11:40.326 "zoned": false, 00:11:40.326 "supported_io_types": { 00:11:40.326 "read": true, 00:11:40.326 "write": true, 00:11:40.326 "unmap": true, 00:11:40.326 "flush": true, 00:11:40.326 "reset": true, 00:11:40.326 "nvme_admin": false, 00:11:40.326 "nvme_io": false, 00:11:40.326 "nvme_io_md": false, 00:11:40.326 "write_zeroes": true, 00:11:40.326 "zcopy": true, 00:11:40.326 "get_zone_info": false, 00:11:40.326 "zone_management": false, 00:11:40.326 "zone_append": false, 00:11:40.326 "compare": false, 00:11:40.326 "compare_and_write": false, 00:11:40.326 "abort": true, 00:11:40.326 "seek_hole": false, 00:11:40.326 "seek_data": false, 00:11:40.326 "copy": true, 00:11:40.326 "nvme_iov_md": false 00:11:40.326 }, 00:11:40.326 "memory_domains": [ 00:11:40.326 { 00:11:40.326 "dma_device_id": "system", 00:11:40.326 "dma_device_type": 1 00:11:40.326 }, 00:11:40.326 { 00:11:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.326 "dma_device_type": 2 00:11:40.326 } 00:11:40.326 ], 00:11:40.326 "driver_specific": {} 00:11:40.326 } 00:11:40.326 ] 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.326 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.585 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.585 "name": "Existed_Raid", 00:11:40.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.585 "strip_size_kb": 0, 00:11:40.585 "state": "configuring", 00:11:40.585 "raid_level": "raid1", 00:11:40.585 "superblock": false, 00:11:40.585 "num_base_bdevs": 3, 00:11:40.585 "num_base_bdevs_discovered": 2, 00:11:40.585 "num_base_bdevs_operational": 3, 00:11:40.585 "base_bdevs_list": [ 00:11:40.585 { 00:11:40.585 "name": "BaseBdev1", 00:11:40.585 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:40.585 "is_configured": true, 00:11:40.585 "data_offset": 0, 00:11:40.585 "data_size": 65536 00:11:40.585 }, 00:11:40.585 { 00:11:40.585 "name": null, 00:11:40.585 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:40.585 "is_configured": false, 00:11:40.585 "data_offset": 0, 
00:11:40.585 "data_size": 65536 00:11:40.585 }, 00:11:40.585 { 00:11:40.585 "name": "BaseBdev3", 00:11:40.585 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:40.585 "is_configured": true, 00:11:40.585 "data_offset": 0, 00:11:40.585 "data_size": 65536 00:11:40.585 } 00:11:40.585 ] 00:11:40.585 }' 00:11:40.585 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.585 21:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.843 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.843 21:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.101 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:41.101 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:41.360 [2024-07-15 21:47:56.346508] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.360 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.619 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.619 "name": "Existed_Raid", 00:11:41.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.619 "strip_size_kb": 0, 00:11:41.619 "state": "configuring", 00:11:41.619 "raid_level": "raid1", 00:11:41.619 "superblock": false, 00:11:41.619 "num_base_bdevs": 3, 00:11:41.619 "num_base_bdevs_discovered": 1, 00:11:41.619 "num_base_bdevs_operational": 3, 00:11:41.619 "base_bdevs_list": [ 00:11:41.619 { 00:11:41.619 "name": "BaseBdev1", 00:11:41.619 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:41.619 "is_configured": true, 00:11:41.619 "data_offset": 0, 00:11:41.619 "data_size": 65536 00:11:41.619 }, 00:11:41.619 { 00:11:41.619 "name": null, 00:11:41.619 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:41.619 
"is_configured": false, 00:11:41.619 "data_offset": 0, 00:11:41.619 "data_size": 65536 00:11:41.619 }, 00:11:41.619 { 00:11:41.619 "name": null, 00:11:41.619 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:41.619 "is_configured": false, 00:11:41.619 "data_offset": 0, 00:11:41.619 "data_size": 65536 00:11:41.619 } 00:11:41.619 ] 00:11:41.619 }' 00:11:41.619 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.619 21:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.877 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.877 21:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.135 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:42.135 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.393 [2024-07-15 21:47:57.330578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.393 "name": "Existed_Raid", 00:11:42.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.393 "strip_size_kb": 0, 00:11:42.393 "state": "configuring", 00:11:42.393 "raid_level": "raid1", 00:11:42.393 "superblock": false, 00:11:42.393 "num_base_bdevs": 3, 00:11:42.393 "num_base_bdevs_discovered": 2, 00:11:42.393 "num_base_bdevs_operational": 3, 00:11:42.393 "base_bdevs_list": [ 00:11:42.393 { 00:11:42.393 "name": "BaseBdev1", 00:11:42.393 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:42.393 "is_configured": true, 00:11:42.393 "data_offset": 0, 00:11:42.393 "data_size": 65536 00:11:42.393 }, 00:11:42.393 { 00:11:42.393 "name": null, 
00:11:42.393 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:42.393 "is_configured": false, 00:11:42.393 "data_offset": 0, 00:11:42.393 "data_size": 65536 00:11:42.393 }, 00:11:42.393 { 00:11:42.393 "name": "BaseBdev3", 00:11:42.393 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:42.393 "is_configured": true, 00:11:42.393 "data_offset": 0, 00:11:42.393 "data_size": 65536 00:11:42.393 } 00:11:42.393 ] 00:11:42.393 }' 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.393 21:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.958 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.958 21:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.958 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:42.958 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:43.216 [2024-07-15 21:47:58.254631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.216 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.473 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.473 "name": "Existed_Raid", 00:11:43.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.473 "strip_size_kb": 0, 00:11:43.473 "state": "configuring", 00:11:43.473 "raid_level": "raid1", 00:11:43.473 "superblock": false, 00:11:43.473 "num_base_bdevs": 3, 00:11:43.473 "num_base_bdevs_discovered": 1, 00:11:43.473 "num_base_bdevs_operational": 3, 00:11:43.473 "base_bdevs_list": [ 00:11:43.473 { 00:11:43.473 "name": null, 00:11:43.473 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:43.473 "is_configured": false, 00:11:43.473 "data_offset": 0, 00:11:43.473 "data_size": 65536 00:11:43.473 }, 
00:11:43.473 { 00:11:43.473 "name": null, 00:11:43.473 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:43.473 "is_configured": false, 00:11:43.473 "data_offset": 0, 00:11:43.473 "data_size": 65536 00:11:43.473 }, 00:11:43.473 { 00:11:43.473 "name": "BaseBdev3", 00:11:43.473 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:43.473 "is_configured": true, 00:11:43.473 "data_offset": 0, 00:11:43.473 "data_size": 65536 00:11:43.473 } 00:11:43.473 ] 00:11:43.473 }' 00:11:43.473 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.473 21:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.731 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.731 21:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.990 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:43.990 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:44.248 [2024-07-15 21:47:59.336398] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.248 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.506 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.506 "name": "Existed_Raid", 00:11:44.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.506 "strip_size_kb": 0, 00:11:44.506 "state": "configuring", 00:11:44.506 "raid_level": "raid1", 00:11:44.506 "superblock": false, 00:11:44.506 "num_base_bdevs": 3, 00:11:44.506 "num_base_bdevs_discovered": 2, 00:11:44.506 "num_base_bdevs_operational": 3, 00:11:44.506 "base_bdevs_list": [ 00:11:44.506 { 00:11:44.506 "name": null, 00:11:44.506 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:44.506 "is_configured": false, 
00:11:44.506 "data_offset": 0, 00:11:44.506 "data_size": 65536 00:11:44.506 }, 00:11:44.506 { 00:11:44.506 "name": "BaseBdev2", 00:11:44.506 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:44.506 "is_configured": true, 00:11:44.506 "data_offset": 0, 00:11:44.506 "data_size": 65536 00:11:44.506 }, 00:11:44.506 { 00:11:44.506 "name": "BaseBdev3", 00:11:44.506 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:44.506 "is_configured": true, 00:11:44.506 "data_offset": 0, 00:11:44.506 "data_size": 65536 00:11:44.506 } 00:11:44.506 ] 00:11:44.506 }' 00:11:44.506 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.506 21:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.763 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.763 21:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.020 21:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:45.020 21:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.020 21:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:45.278 21:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e47bdfb9-42f3-11ef-9f7f-e9a656123a8b 00:11:45.536 [2024-07-15 21:48:00.592552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:45.537 [2024-07-15 21:48:00.592576] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23e45d234f00 00:11:45.537 [2024-07-15 21:48:00.592598] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:45.537 [2024-07-15 21:48:00.592619] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23e45d297e20 00:11:45.537 [2024-07-15 21:48:00.592685] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23e45d234f00 00:11:45.537 [2024-07-15 21:48:00.592690] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23e45d234f00 00:11:45.537 [2024-07-15 21:48:00.592720] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.537 NewBaseBdev 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:45.537 21:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:45.845 21:48:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:46.103 [ 00:11:46.103 { 00:11:46.103 "name": "NewBaseBdev", 00:11:46.103 "aliases": [ 00:11:46.103 "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b" 00:11:46.103 ], 00:11:46.103 "product_name": "Malloc disk", 00:11:46.103 "block_size": 512, 00:11:46.103 "num_blocks": 65536, 00:11:46.103 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.103 "assigned_rate_limits": { 00:11:46.103 "rw_ios_per_sec": 0, 00:11:46.103 "rw_mbytes_per_sec": 0, 00:11:46.103 "r_mbytes_per_sec": 0, 00:11:46.103 "w_mbytes_per_sec": 0 00:11:46.103 }, 00:11:46.103 "claimed": true, 00:11:46.103 "claim_type": "exclusive_write", 00:11:46.103 "zoned": false, 00:11:46.103 "supported_io_types": { 00:11:46.103 "read": true, 00:11:46.103 "write": true, 00:11:46.103 "unmap": true, 00:11:46.103 "flush": true, 00:11:46.103 "reset": true, 00:11:46.103 "nvme_admin": false, 00:11:46.103 "nvme_io": false, 00:11:46.103 "nvme_io_md": false, 00:11:46.103 "write_zeroes": true, 00:11:46.103 "zcopy": true, 00:11:46.103 "get_zone_info": false, 00:11:46.103 "zone_management": false, 00:11:46.103 "zone_append": false, 00:11:46.103 "compare": false, 00:11:46.103 "compare_and_write": false, 00:11:46.103 "abort": true, 00:11:46.103 "seek_hole": false, 00:11:46.103 "seek_data": false, 00:11:46.103 "copy": true, 00:11:46.103 "nvme_iov_md": false 00:11:46.103 }, 00:11:46.103 "memory_domains": [ 00:11:46.103 { 00:11:46.103 "dma_device_id": "system", 00:11:46.103 "dma_device_type": 1 00:11:46.103 }, 00:11:46.103 { 00:11:46.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.103 "dma_device_type": 2 00:11:46.103 } 00:11:46.103 ], 00:11:46.103 "driver_specific": {} 00:11:46.103 } 00:11:46.103 ] 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.103 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.361 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.361 "name": "Existed_Raid", 
00:11:46.361 "uuid": "e7d79497-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.361 "strip_size_kb": 0, 00:11:46.361 "state": "online", 00:11:46.361 "raid_level": "raid1", 00:11:46.361 "superblock": false, 00:11:46.361 "num_base_bdevs": 3, 00:11:46.361 "num_base_bdevs_discovered": 3, 00:11:46.361 "num_base_bdevs_operational": 3, 00:11:46.361 "base_bdevs_list": [ 00:11:46.361 { 00:11:46.361 "name": "NewBaseBdev", 00:11:46.361 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.361 "is_configured": true, 00:11:46.361 "data_offset": 0, 00:11:46.361 "data_size": 65536 00:11:46.361 }, 00:11:46.361 { 00:11:46.361 "name": "BaseBdev2", 00:11:46.361 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.361 "is_configured": true, 00:11:46.361 "data_offset": 0, 00:11:46.361 "data_size": 65536 00:11:46.361 }, 00:11:46.361 { 00:11:46.361 "name": "BaseBdev3", 00:11:46.361 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.361 "is_configured": true, 00:11:46.361 "data_offset": 0, 00:11:46.361 "data_size": 65536 00:11:46.361 } 00:11:46.361 ] 00:11:46.361 }' 00:11:46.361 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.361 21:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:46.619 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:46.877 [2024-07-15 21:48:01.816500] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.877 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:46.877 "name": "Existed_Raid", 00:11:46.877 "aliases": [ 00:11:46.877 "e7d79497-42f3-11ef-9f7f-e9a656123a8b" 00:11:46.877 ], 00:11:46.877 "product_name": "Raid Volume", 00:11:46.877 "block_size": 512, 00:11:46.877 "num_blocks": 65536, 00:11:46.877 "uuid": "e7d79497-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.877 "assigned_rate_limits": { 00:11:46.877 "rw_ios_per_sec": 0, 00:11:46.877 "rw_mbytes_per_sec": 0, 00:11:46.877 "r_mbytes_per_sec": 0, 00:11:46.877 "w_mbytes_per_sec": 0 00:11:46.877 }, 00:11:46.877 "claimed": false, 00:11:46.877 "zoned": false, 00:11:46.877 "supported_io_types": { 00:11:46.877 "read": true, 00:11:46.877 "write": true, 00:11:46.877 "unmap": false, 00:11:46.877 "flush": false, 00:11:46.877 "reset": true, 00:11:46.877 "nvme_admin": false, 00:11:46.877 "nvme_io": false, 00:11:46.877 "nvme_io_md": false, 00:11:46.877 "write_zeroes": true, 00:11:46.877 "zcopy": false, 00:11:46.877 "get_zone_info": false, 00:11:46.877 "zone_management": false, 00:11:46.877 "zone_append": false, 00:11:46.877 "compare": false, 00:11:46.877 "compare_and_write": false, 00:11:46.877 "abort": 
false, 00:11:46.877 "seek_hole": false, 00:11:46.877 "seek_data": false, 00:11:46.877 "copy": false, 00:11:46.877 "nvme_iov_md": false 00:11:46.877 }, 00:11:46.877 "memory_domains": [ 00:11:46.877 { 00:11:46.877 "dma_device_id": "system", 00:11:46.877 "dma_device_type": 1 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.877 "dma_device_type": 2 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "dma_device_id": "system", 00:11:46.877 "dma_device_type": 1 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.877 "dma_device_type": 2 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "dma_device_id": "system", 00:11:46.877 "dma_device_type": 1 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.877 "dma_device_type": 2 00:11:46.877 } 00:11:46.877 ], 00:11:46.877 "driver_specific": { 00:11:46.877 "raid": { 00:11:46.877 "uuid": "e7d79497-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.877 "strip_size_kb": 0, 00:11:46.877 "state": "online", 00:11:46.877 "raid_level": "raid1", 00:11:46.877 "superblock": false, 00:11:46.877 "num_base_bdevs": 3, 00:11:46.877 "num_base_bdevs_discovered": 3, 00:11:46.877 "num_base_bdevs_operational": 3, 00:11:46.877 "base_bdevs_list": [ 00:11:46.877 { 00:11:46.877 "name": "NewBaseBdev", 00:11:46.877 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.877 "is_configured": true, 00:11:46.877 "data_offset": 0, 00:11:46.877 "data_size": 65536 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "name": "BaseBdev2", 00:11:46.877 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.877 "is_configured": true, 00:11:46.877 "data_offset": 0, 00:11:46.877 "data_size": 65536 00:11:46.877 }, 00:11:46.877 { 00:11:46.877 "name": "BaseBdev3", 00:11:46.877 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:46.877 "is_configured": true, 00:11:46.877 "data_offset": 0, 00:11:46.877 "data_size": 65536 00:11:46.877 } 00:11:46.877 ] 00:11:46.877 } 00:11:46.877 } 00:11:46.877 }' 00:11:46.878 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.878 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:46.878 BaseBdev2 00:11:46.878 BaseBdev3' 00:11:46.878 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.878 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:46.878 21:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:47.135 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:47.135 "name": "NewBaseBdev", 00:11:47.135 "aliases": [ 00:11:47.135 "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b" 00:11:47.135 ], 00:11:47.135 "product_name": "Malloc disk", 00:11:47.135 "block_size": 512, 00:11:47.135 "num_blocks": 65536, 00:11:47.135 "uuid": "e47bdfb9-42f3-11ef-9f7f-e9a656123a8b", 00:11:47.135 "assigned_rate_limits": { 00:11:47.135 "rw_ios_per_sec": 0, 00:11:47.136 "rw_mbytes_per_sec": 0, 00:11:47.136 "r_mbytes_per_sec": 0, 00:11:47.136 "w_mbytes_per_sec": 0 00:11:47.136 }, 00:11:47.136 "claimed": true, 00:11:47.136 "claim_type": "exclusive_write", 00:11:47.136 "zoned": false, 00:11:47.136 "supported_io_types": { 00:11:47.136 "read": true, 00:11:47.136 "write": 
true, 00:11:47.136 "unmap": true, 00:11:47.136 "flush": true, 00:11:47.136 "reset": true, 00:11:47.136 "nvme_admin": false, 00:11:47.136 "nvme_io": false, 00:11:47.136 "nvme_io_md": false, 00:11:47.136 "write_zeroes": true, 00:11:47.136 "zcopy": true, 00:11:47.136 "get_zone_info": false, 00:11:47.136 "zone_management": false, 00:11:47.136 "zone_append": false, 00:11:47.136 "compare": false, 00:11:47.136 "compare_and_write": false, 00:11:47.136 "abort": true, 00:11:47.136 "seek_hole": false, 00:11:47.136 "seek_data": false, 00:11:47.136 "copy": true, 00:11:47.136 "nvme_iov_md": false 00:11:47.136 }, 00:11:47.136 "memory_domains": [ 00:11:47.136 { 00:11:47.136 "dma_device_id": "system", 00:11:47.136 "dma_device_type": 1 00:11:47.136 }, 00:11:47.136 { 00:11:47.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.136 "dma_device_type": 2 00:11:47.136 } 00:11:47.136 ], 00:11:47.136 "driver_specific": {} 00:11:47.136 }' 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:47.136 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:47.395 "name": "BaseBdev2", 00:11:47.395 "aliases": [ 00:11:47.395 "e292f4f3-42f3-11ef-9f7f-e9a656123a8b" 00:11:47.395 ], 00:11:47.395 "product_name": "Malloc disk", 00:11:47.395 "block_size": 512, 00:11:47.395 "num_blocks": 65536, 00:11:47.395 "uuid": "e292f4f3-42f3-11ef-9f7f-e9a656123a8b", 00:11:47.395 "assigned_rate_limits": { 00:11:47.395 "rw_ios_per_sec": 0, 00:11:47.395 "rw_mbytes_per_sec": 0, 00:11:47.395 "r_mbytes_per_sec": 0, 00:11:47.395 "w_mbytes_per_sec": 0 00:11:47.395 }, 00:11:47.395 "claimed": true, 00:11:47.395 "claim_type": "exclusive_write", 00:11:47.395 "zoned": false, 00:11:47.395 "supported_io_types": { 00:11:47.395 "read": true, 00:11:47.395 "write": true, 00:11:47.395 "unmap": true, 00:11:47.395 "flush": true, 00:11:47.395 "reset": true, 00:11:47.395 "nvme_admin": false, 00:11:47.395 "nvme_io": false, 00:11:47.395 "nvme_io_md": false, 
00:11:47.395 "write_zeroes": true, 00:11:47.395 "zcopy": true, 00:11:47.395 "get_zone_info": false, 00:11:47.395 "zone_management": false, 00:11:47.395 "zone_append": false, 00:11:47.395 "compare": false, 00:11:47.395 "compare_and_write": false, 00:11:47.395 "abort": true, 00:11:47.395 "seek_hole": false, 00:11:47.395 "seek_data": false, 00:11:47.395 "copy": true, 00:11:47.395 "nvme_iov_md": false 00:11:47.395 }, 00:11:47.395 "memory_domains": [ 00:11:47.395 { 00:11:47.395 "dma_device_id": "system", 00:11:47.395 "dma_device_type": 1 00:11:47.395 }, 00:11:47.395 { 00:11:47.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.395 "dma_device_type": 2 00:11:47.395 } 00:11:47.395 ], 00:11:47.395 "driver_specific": {} 00:11:47.395 }' 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:47.395 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:47.654 "name": "BaseBdev3", 00:11:47.654 "aliases": [ 00:11:47.654 "e2f67352-42f3-11ef-9f7f-e9a656123a8b" 00:11:47.654 ], 00:11:47.654 "product_name": "Malloc disk", 00:11:47.654 "block_size": 512, 00:11:47.654 "num_blocks": 65536, 00:11:47.654 "uuid": "e2f67352-42f3-11ef-9f7f-e9a656123a8b", 00:11:47.654 "assigned_rate_limits": { 00:11:47.654 "rw_ios_per_sec": 0, 00:11:47.654 "rw_mbytes_per_sec": 0, 00:11:47.654 "r_mbytes_per_sec": 0, 00:11:47.654 "w_mbytes_per_sec": 0 00:11:47.654 }, 00:11:47.654 "claimed": true, 00:11:47.654 "claim_type": "exclusive_write", 00:11:47.654 "zoned": false, 00:11:47.654 "supported_io_types": { 00:11:47.654 "read": true, 00:11:47.654 "write": true, 00:11:47.654 "unmap": true, 00:11:47.654 "flush": true, 00:11:47.654 "reset": true, 00:11:47.654 "nvme_admin": false, 00:11:47.654 "nvme_io": false, 00:11:47.654 "nvme_io_md": false, 00:11:47.654 "write_zeroes": true, 00:11:47.654 "zcopy": true, 00:11:47.654 "get_zone_info": false, 00:11:47.654 "zone_management": false, 00:11:47.654 "zone_append": false, 00:11:47.654 "compare": 
false, 00:11:47.654 "compare_and_write": false, 00:11:47.654 "abort": true, 00:11:47.654 "seek_hole": false, 00:11:47.654 "seek_data": false, 00:11:47.654 "copy": true, 00:11:47.654 "nvme_iov_md": false 00:11:47.654 }, 00:11:47.654 "memory_domains": [ 00:11:47.654 { 00:11:47.654 "dma_device_id": "system", 00:11:47.654 "dma_device_type": 1 00:11:47.654 }, 00:11:47.654 { 00:11:47.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.654 "dma_device_type": 2 00:11:47.654 } 00:11:47.654 ], 00:11:47.654 "driver_specific": {} 00:11:47.654 }' 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:47.654 21:48:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:47.912 [2024-07-15 21:48:03.000466] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.912 [2024-07-15 21:48:03.000485] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.912 [2024-07-15 21:48:03.000522] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.912 [2024-07-15 21:48:03.000600] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.912 [2024-07-15 21:48:03.000605] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e45d234f00 name Existed_Raid, state offline 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56108 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 56108 ']' 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 56108 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:11:47.912 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:11:47.913 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 56108 00:11:47.913 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:11:47.913 21:48:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:11:47.913 killing process with pid 56108 00:11:47.913 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 56108' 00:11:47.913 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 56108 00:11:47.913 [2024-07-15 21:48:03.028098] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.913 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 56108 00:11:47.913 [2024-07-15 21:48:03.045130] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.171 21:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:48.171 00:11:48.171 real 0m22.092s 00:11:48.171 user 0m40.147s 00:11:48.171 sys 0m3.229s 00:11:48.171 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:48.171 ************************************ 00:11:48.171 END TEST raid_state_function_test 00:11:48.171 ************************************ 00:11:48.171 21:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.171 21:48:03 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:11:48.171 21:48:03 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:48.171 21:48:03 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:11:48.171 21:48:03 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:48.171 21:48:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.171 ************************************ 00:11:48.172 START TEST raid_state_function_test_sb 00:11:48.172 ************************************ 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 3 true 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:48.172 21:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56829 00:11:48.172 Process raid pid: 56829 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56829' 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56829 /var/tmp/spdk-raid.sock 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 56829 ']' 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:48.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:48.172 21:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.172 [2024-07-15 21:48:03.272952] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:48.172 [2024-07-15 21:48:03.273216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:48.739 EAL: TSC is not safe to use in SMP mode 00:11:48.739 EAL: TSC is not invariant 00:11:48.739 [2024-07-15 21:48:03.787583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.739 [2024-07-15 21:48:03.865949] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:48.739 [2024-07-15 21:48:03.868178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.739 [2024-07-15 21:48:03.868979] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.739 [2024-07-15 21:48:03.868992] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.305 21:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:49.305 21:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:11:49.305 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.564 [2024-07-15 21:48:04.516599] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.564 [2024-07-15 21:48:04.516684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.564 [2024-07-15 21:48:04.516689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.564 [2024-07-15 21:48:04.516714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.564 [2024-07-15 21:48:04.516717] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.564 [2024-07-15 21:48:04.516724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.564 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.823 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.823 "name": "Existed_Raid", 00:11:49.823 "uuid": "ea2e5573-42f3-11ef-9f7f-e9a656123a8b", 00:11:49.823 "strip_size_kb": 0, 00:11:49.823 "state": "configuring", 00:11:49.823 "raid_level": "raid1", 00:11:49.823 "superblock": true, 00:11:49.823 "num_base_bdevs": 3, 00:11:49.823 "num_base_bdevs_discovered": 0, 00:11:49.823 "num_base_bdevs_operational": 
3, 00:11:49.823 "base_bdevs_list": [ 00:11:49.823 { 00:11:49.823 "name": "BaseBdev1", 00:11:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.823 "is_configured": false, 00:11:49.823 "data_offset": 0, 00:11:49.823 "data_size": 0 00:11:49.823 }, 00:11:49.823 { 00:11:49.823 "name": "BaseBdev2", 00:11:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.823 "is_configured": false, 00:11:49.823 "data_offset": 0, 00:11:49.823 "data_size": 0 00:11:49.823 }, 00:11:49.823 { 00:11:49.823 "name": "BaseBdev3", 00:11:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.823 "is_configured": false, 00:11:49.823 "data_offset": 0, 00:11:49.823 "data_size": 0 00:11:49.823 } 00:11:49.823 ] 00:11:49.823 }' 00:11:49.823 21:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.823 21:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.081 21:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:50.340 [2024-07-15 21:48:05.368601] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.340 [2024-07-15 21:48:05.368624] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b8be634500 name Existed_Raid, state configuring 00:11:50.340 21:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:50.598 [2024-07-15 21:48:05.604640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.598 [2024-07-15 21:48:05.604699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.598 [2024-07-15 21:48:05.604704] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.598 [2024-07-15 21:48:05.604728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.598 [2024-07-15 21:48:05.604731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.598 [2024-07-15 21:48:05.604754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.598 21:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.857 [2024-07-15 21:48:05.869623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.857 BaseBdev1 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:50.857 21:48:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.116 21:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.374 [ 00:11:51.374 { 00:11:51.374 "name": "BaseBdev1", 00:11:51.374 "aliases": [ 00:11:51.374 "eafca325-42f3-11ef-9f7f-e9a656123a8b" 00:11:51.374 ], 00:11:51.374 "product_name": "Malloc disk", 00:11:51.374 "block_size": 512, 00:11:51.374 "num_blocks": 65536, 00:11:51.374 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:51.374 "assigned_rate_limits": { 00:11:51.374 "rw_ios_per_sec": 0, 00:11:51.374 "rw_mbytes_per_sec": 0, 00:11:51.374 "r_mbytes_per_sec": 0, 00:11:51.374 "w_mbytes_per_sec": 0 00:11:51.374 }, 00:11:51.374 "claimed": true, 00:11:51.374 "claim_type": "exclusive_write", 00:11:51.374 "zoned": false, 00:11:51.374 "supported_io_types": { 00:11:51.374 "read": true, 00:11:51.374 "write": true, 00:11:51.374 "unmap": true, 00:11:51.374 "flush": true, 00:11:51.374 "reset": true, 00:11:51.374 "nvme_admin": false, 00:11:51.374 "nvme_io": false, 00:11:51.374 "nvme_io_md": false, 00:11:51.374 "write_zeroes": true, 00:11:51.374 "zcopy": true, 00:11:51.374 "get_zone_info": false, 00:11:51.374 "zone_management": false, 00:11:51.374 "zone_append": false, 00:11:51.374 "compare": false, 00:11:51.374 "compare_and_write": false, 00:11:51.374 "abort": true, 00:11:51.374 "seek_hole": false, 00:11:51.374 "seek_data": false, 00:11:51.374 "copy": true, 00:11:51.374 "nvme_iov_md": false 00:11:51.374 }, 00:11:51.374 "memory_domains": [ 00:11:51.374 { 00:11:51.374 "dma_device_id": "system", 00:11:51.374 "dma_device_type": 1 00:11:51.374 }, 00:11:51.374 { 00:11:51.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.374 "dma_device_type": 2 00:11:51.374 } 00:11:51.374 ], 00:11:51.374 "driver_specific": {} 00:11:51.374 } 00:11:51.375 ] 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.375 21:48:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.633 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.633 "name": "Existed_Raid", 00:11:51.633 "uuid": "ead45b17-42f3-11ef-9f7f-e9a656123a8b", 00:11:51.633 "strip_size_kb": 0, 00:11:51.633 "state": "configuring", 00:11:51.633 "raid_level": "raid1", 00:11:51.633 "superblock": true, 00:11:51.633 "num_base_bdevs": 3, 00:11:51.633 "num_base_bdevs_discovered": 1, 00:11:51.633 "num_base_bdevs_operational": 3, 00:11:51.633 "base_bdevs_list": [ 00:11:51.633 { 00:11:51.633 "name": "BaseBdev1", 00:11:51.633 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:51.633 "is_configured": true, 00:11:51.633 "data_offset": 2048, 00:11:51.633 "data_size": 63488 00:11:51.633 }, 00:11:51.633 { 00:11:51.633 "name": "BaseBdev2", 00:11:51.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.633 "is_configured": false, 00:11:51.633 "data_offset": 0, 00:11:51.633 "data_size": 0 00:11:51.633 }, 00:11:51.633 { 00:11:51.633 "name": "BaseBdev3", 00:11:51.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.633 "is_configured": false, 00:11:51.633 "data_offset": 0, 00:11:51.633 "data_size": 0 00:11:51.633 } 00:11:51.633 ] 00:11:51.633 }' 00:11:51.633 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.633 21:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.891 21:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:52.148 [2024-07-15 21:48:07.248691] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.148 [2024-07-15 21:48:07.248732] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b8be634500 name Existed_Raid, state configuring 00:11:52.148 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:52.406 [2024-07-15 21:48:07.480742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.406 [2024-07-15 21:48:07.481653] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.406 [2024-07-15 21:48:07.481729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.406 [2024-07-15 21:48:07.481734] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.406 [2024-07-15 21:48:07.481741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.406 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.666 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.666 "name": "Existed_Raid", 00:11:52.666 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:52.666 "strip_size_kb": 0, 00:11:52.666 "state": "configuring", 00:11:52.666 "raid_level": "raid1", 00:11:52.666 "superblock": true, 00:11:52.666 "num_base_bdevs": 3, 00:11:52.666 "num_base_bdevs_discovered": 1, 00:11:52.666 "num_base_bdevs_operational": 3, 00:11:52.666 "base_bdevs_list": [ 00:11:52.666 { 00:11:52.666 "name": "BaseBdev1", 00:11:52.666 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:52.666 "is_configured": true, 00:11:52.666 "data_offset": 2048, 00:11:52.666 "data_size": 63488 00:11:52.666 }, 00:11:52.666 { 00:11:52.666 "name": "BaseBdev2", 00:11:52.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.666 "is_configured": false, 00:11:52.666 "data_offset": 0, 00:11:52.666 "data_size": 0 00:11:52.666 }, 00:11:52.666 { 00:11:52.666 "name": "BaseBdev3", 00:11:52.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.666 "is_configured": false, 00:11:52.666 "data_offset": 0, 00:11:52.666 "data_size": 0 00:11:52.666 } 00:11:52.666 ] 00:11:52.666 }' 00:11:52.666 21:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.666 21:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.923 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.180 [2024-07-15 21:48:08.236846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.180 BaseBdev2 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:53.180 21:48:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:53.438 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.696 [ 00:11:53.696 { 00:11:53.696 "name": "BaseBdev2", 00:11:53.696 "aliases": [ 00:11:53.696 "ec65fb30-42f3-11ef-9f7f-e9a656123a8b" 00:11:53.696 ], 00:11:53.696 "product_name": "Malloc disk", 00:11:53.696 "block_size": 512, 00:11:53.696 "num_blocks": 65536, 00:11:53.696 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:53.696 "assigned_rate_limits": { 00:11:53.696 "rw_ios_per_sec": 0, 00:11:53.696 "rw_mbytes_per_sec": 0, 00:11:53.696 "r_mbytes_per_sec": 0, 00:11:53.696 "w_mbytes_per_sec": 0 00:11:53.696 }, 00:11:53.696 "claimed": true, 00:11:53.696 "claim_type": "exclusive_write", 00:11:53.696 "zoned": false, 00:11:53.696 "supported_io_types": { 00:11:53.696 "read": true, 00:11:53.696 "write": true, 00:11:53.696 "unmap": true, 00:11:53.696 "flush": true, 00:11:53.696 "reset": true, 00:11:53.696 "nvme_admin": false, 00:11:53.696 "nvme_io": false, 00:11:53.696 "nvme_io_md": false, 00:11:53.696 "write_zeroes": true, 00:11:53.696 "zcopy": true, 00:11:53.696 "get_zone_info": false, 00:11:53.696 "zone_management": false, 00:11:53.696 "zone_append": false, 00:11:53.696 "compare": false, 00:11:53.696 "compare_and_write": false, 00:11:53.696 "abort": true, 00:11:53.696 "seek_hole": false, 00:11:53.696 "seek_data": false, 00:11:53.696 "copy": true, 00:11:53.696 "nvme_iov_md": false 00:11:53.696 }, 00:11:53.696 "memory_domains": [ 00:11:53.696 { 00:11:53.696 "dma_device_id": "system", 00:11:53.696 "dma_device_type": 1 00:11:53.696 }, 00:11:53.696 { 00:11:53.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.696 "dma_device_type": 2 00:11:53.696 } 00:11:53.696 ], 00:11:53.696 "driver_specific": {} 00:11:53.696 } 00:11:53.696 ] 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.696 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.697 21:48:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.697 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.955 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.955 "name": "Existed_Raid", 00:11:53.955 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:53.955 "strip_size_kb": 0, 00:11:53.955 "state": "configuring", 00:11:53.955 "raid_level": "raid1", 00:11:53.955 "superblock": true, 00:11:53.955 "num_base_bdevs": 3, 00:11:53.955 "num_base_bdevs_discovered": 2, 00:11:53.955 "num_base_bdevs_operational": 3, 00:11:53.955 "base_bdevs_list": [ 00:11:53.955 { 00:11:53.955 "name": "BaseBdev1", 00:11:53.955 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:53.955 "is_configured": true, 00:11:53.955 "data_offset": 2048, 00:11:53.955 "data_size": 63488 00:11:53.955 }, 00:11:53.955 { 00:11:53.955 "name": "BaseBdev2", 00:11:53.955 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:53.955 "is_configured": true, 00:11:53.955 "data_offset": 2048, 00:11:53.955 "data_size": 63488 00:11:53.955 }, 00:11:53.955 { 00:11:53.955 "name": "BaseBdev3", 00:11:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.955 "is_configured": false, 00:11:53.955 "data_offset": 0, 00:11:53.955 "data_size": 0 00:11:53.955 } 00:11:53.955 ] 00:11:53.955 }' 00:11:53.955 21:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.955 21:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.214 21:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.491 [2024-07-15 21:48:09.488965] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.491 [2024-07-15 21:48:09.489059] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17b8be634a00 00:11:54.491 [2024-07-15 21:48:09.489081] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.491 [2024-07-15 21:48:09.489099] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b8be697e20 00:11:54.491 [2024-07-15 21:48:09.489149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17b8be634a00 00:11:54.491 [2024-07-15 21:48:09.489153] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17b8be634a00 00:11:54.491 [2024-07-15 21:48:09.489172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.491 BaseBdev3 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:54.491 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:54.491 21:48:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.750 21:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.009 [ 00:11:55.009 { 00:11:55.009 "name": "BaseBdev3", 00:11:55.009 "aliases": [ 00:11:55.009 "ed250a5d-42f3-11ef-9f7f-e9a656123a8b" 00:11:55.009 ], 00:11:55.009 "product_name": "Malloc disk", 00:11:55.009 "block_size": 512, 00:11:55.009 "num_blocks": 65536, 00:11:55.009 "uuid": "ed250a5d-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.009 "assigned_rate_limits": { 00:11:55.009 "rw_ios_per_sec": 0, 00:11:55.009 "rw_mbytes_per_sec": 0, 00:11:55.009 "r_mbytes_per_sec": 0, 00:11:55.009 "w_mbytes_per_sec": 0 00:11:55.009 }, 00:11:55.009 "claimed": true, 00:11:55.009 "claim_type": "exclusive_write", 00:11:55.009 "zoned": false, 00:11:55.009 "supported_io_types": { 00:11:55.009 "read": true, 00:11:55.009 "write": true, 00:11:55.009 "unmap": true, 00:11:55.009 "flush": true, 00:11:55.009 "reset": true, 00:11:55.009 "nvme_admin": false, 00:11:55.009 "nvme_io": false, 00:11:55.009 "nvme_io_md": false, 00:11:55.009 "write_zeroes": true, 00:11:55.009 "zcopy": true, 00:11:55.009 "get_zone_info": false, 00:11:55.009 "zone_management": false, 00:11:55.009 "zone_append": false, 00:11:55.009 "compare": false, 00:11:55.009 "compare_and_write": false, 00:11:55.009 "abort": true, 00:11:55.009 "seek_hole": false, 00:11:55.009 "seek_data": false, 00:11:55.009 "copy": true, 00:11:55.009 "nvme_iov_md": false 00:11:55.009 }, 00:11:55.009 "memory_domains": [ 00:11:55.009 { 00:11:55.009 "dma_device_id": "system", 00:11:55.009 "dma_device_type": 1 00:11:55.009 }, 00:11:55.010 { 00:11:55.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.010 "dma_device_type": 2 00:11:55.010 } 00:11:55.010 ], 00:11:55.010 "driver_specific": {} 00:11:55.010 } 00:11:55.010 ] 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.010 21:48:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.010 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.268 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.268 "name": "Existed_Raid", 00:11:55.268 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.268 "strip_size_kb": 0, 00:11:55.268 "state": "online", 00:11:55.268 "raid_level": "raid1", 00:11:55.268 "superblock": true, 00:11:55.268 "num_base_bdevs": 3, 00:11:55.268 "num_base_bdevs_discovered": 3, 00:11:55.268 "num_base_bdevs_operational": 3, 00:11:55.268 "base_bdevs_list": [ 00:11:55.268 { 00:11:55.268 "name": "BaseBdev1", 00:11:55.268 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.268 "is_configured": true, 00:11:55.268 "data_offset": 2048, 00:11:55.268 "data_size": 63488 00:11:55.268 }, 00:11:55.268 { 00:11:55.268 "name": "BaseBdev2", 00:11:55.268 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.268 "is_configured": true, 00:11:55.268 "data_offset": 2048, 00:11:55.268 "data_size": 63488 00:11:55.268 }, 00:11:55.268 { 00:11:55.268 "name": "BaseBdev3", 00:11:55.268 "uuid": "ed250a5d-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.268 "is_configured": true, 00:11:55.268 "data_offset": 2048, 00:11:55.268 "data_size": 63488 00:11:55.268 } 00:11:55.268 ] 00:11:55.268 }' 00:11:55.268 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.268 21:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:55.526 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:55.784 [2024-07-15 21:48:10.808917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.784 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:55.784 "name": "Existed_Raid", 00:11:55.784 "aliases": [ 00:11:55.784 "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b" 00:11:55.784 ], 00:11:55.784 "product_name": "Raid Volume", 00:11:55.784 "block_size": 512, 00:11:55.784 "num_blocks": 63488, 00:11:55.784 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.784 "assigned_rate_limits": { 00:11:55.784 "rw_ios_per_sec": 0, 00:11:55.784 "rw_mbytes_per_sec": 0, 00:11:55.784 "r_mbytes_per_sec": 0, 00:11:55.784 "w_mbytes_per_sec": 0 00:11:55.784 }, 00:11:55.784 "claimed": false, 00:11:55.785 "zoned": false, 00:11:55.785 "supported_io_types": { 00:11:55.785 "read": true, 
00:11:55.785 "write": true, 00:11:55.785 "unmap": false, 00:11:55.785 "flush": false, 00:11:55.785 "reset": true, 00:11:55.785 "nvme_admin": false, 00:11:55.785 "nvme_io": false, 00:11:55.785 "nvme_io_md": false, 00:11:55.785 "write_zeroes": true, 00:11:55.785 "zcopy": false, 00:11:55.785 "get_zone_info": false, 00:11:55.785 "zone_management": false, 00:11:55.785 "zone_append": false, 00:11:55.785 "compare": false, 00:11:55.785 "compare_and_write": false, 00:11:55.785 "abort": false, 00:11:55.785 "seek_hole": false, 00:11:55.785 "seek_data": false, 00:11:55.785 "copy": false, 00:11:55.785 "nvme_iov_md": false 00:11:55.785 }, 00:11:55.785 "memory_domains": [ 00:11:55.785 { 00:11:55.785 "dma_device_id": "system", 00:11:55.785 "dma_device_type": 1 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.785 "dma_device_type": 2 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "dma_device_id": "system", 00:11:55.785 "dma_device_type": 1 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.785 "dma_device_type": 2 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "dma_device_id": "system", 00:11:55.785 "dma_device_type": 1 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.785 "dma_device_type": 2 00:11:55.785 } 00:11:55.785 ], 00:11:55.785 "driver_specific": { 00:11:55.785 "raid": { 00:11:55.785 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.785 "strip_size_kb": 0, 00:11:55.785 "state": "online", 00:11:55.785 "raid_level": "raid1", 00:11:55.785 "superblock": true, 00:11:55.785 "num_base_bdevs": 3, 00:11:55.785 "num_base_bdevs_discovered": 3, 00:11:55.785 "num_base_bdevs_operational": 3, 00:11:55.785 "base_bdevs_list": [ 00:11:55.785 { 00:11:55.785 "name": "BaseBdev1", 00:11:55.785 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.785 "is_configured": true, 00:11:55.785 "data_offset": 2048, 00:11:55.785 "data_size": 63488 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "name": "BaseBdev2", 00:11:55.785 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.785 "is_configured": true, 00:11:55.785 "data_offset": 2048, 00:11:55.785 "data_size": 63488 00:11:55.785 }, 00:11:55.785 { 00:11:55.785 "name": "BaseBdev3", 00:11:55.785 "uuid": "ed250a5d-42f3-11ef-9f7f-e9a656123a8b", 00:11:55.785 "is_configured": true, 00:11:55.785 "data_offset": 2048, 00:11:55.785 "data_size": 63488 00:11:55.785 } 00:11:55.785 ] 00:11:55.785 } 00:11:55.785 } 00:11:55.785 }' 00:11:55.785 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.785 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:55.785 BaseBdev2 00:11:55.785 BaseBdev3' 00:11:55.785 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.785 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:55.785 21:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:56.043 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.043 "name": "BaseBdev1", 00:11:56.043 "aliases": [ 00:11:56.043 "eafca325-42f3-11ef-9f7f-e9a656123a8b" 00:11:56.043 ], 00:11:56.043 "product_name": "Malloc disk", 00:11:56.043 
"block_size": 512, 00:11:56.043 "num_blocks": 65536, 00:11:56.043 "uuid": "eafca325-42f3-11ef-9f7f-e9a656123a8b", 00:11:56.043 "assigned_rate_limits": { 00:11:56.043 "rw_ios_per_sec": 0, 00:11:56.043 "rw_mbytes_per_sec": 0, 00:11:56.043 "r_mbytes_per_sec": 0, 00:11:56.043 "w_mbytes_per_sec": 0 00:11:56.043 }, 00:11:56.043 "claimed": true, 00:11:56.043 "claim_type": "exclusive_write", 00:11:56.043 "zoned": false, 00:11:56.043 "supported_io_types": { 00:11:56.043 "read": true, 00:11:56.043 "write": true, 00:11:56.043 "unmap": true, 00:11:56.043 "flush": true, 00:11:56.043 "reset": true, 00:11:56.043 "nvme_admin": false, 00:11:56.043 "nvme_io": false, 00:11:56.044 "nvme_io_md": false, 00:11:56.044 "write_zeroes": true, 00:11:56.044 "zcopy": true, 00:11:56.044 "get_zone_info": false, 00:11:56.044 "zone_management": false, 00:11:56.044 "zone_append": false, 00:11:56.044 "compare": false, 00:11:56.044 "compare_and_write": false, 00:11:56.044 "abort": true, 00:11:56.044 "seek_hole": false, 00:11:56.044 "seek_data": false, 00:11:56.044 "copy": true, 00:11:56.044 "nvme_iov_md": false 00:11:56.044 }, 00:11:56.044 "memory_domains": [ 00:11:56.044 { 00:11:56.044 "dma_device_id": "system", 00:11:56.044 "dma_device_type": 1 00:11:56.044 }, 00:11:56.044 { 00:11:56.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.044 "dma_device_type": 2 00:11:56.044 } 00:11:56.044 ], 00:11:56.044 "driver_specific": {} 00:11:56.044 }' 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:56.044 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.302 "name": "BaseBdev2", 00:11:56.302 "aliases": [ 00:11:56.302 "ec65fb30-42f3-11ef-9f7f-e9a656123a8b" 00:11:56.302 ], 00:11:56.302 "product_name": "Malloc disk", 00:11:56.302 "block_size": 512, 00:11:56.302 "num_blocks": 65536, 00:11:56.302 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:56.302 "assigned_rate_limits": { 
00:11:56.302 "rw_ios_per_sec": 0, 00:11:56.302 "rw_mbytes_per_sec": 0, 00:11:56.302 "r_mbytes_per_sec": 0, 00:11:56.302 "w_mbytes_per_sec": 0 00:11:56.302 }, 00:11:56.302 "claimed": true, 00:11:56.302 "claim_type": "exclusive_write", 00:11:56.302 "zoned": false, 00:11:56.302 "supported_io_types": { 00:11:56.302 "read": true, 00:11:56.302 "write": true, 00:11:56.302 "unmap": true, 00:11:56.302 "flush": true, 00:11:56.302 "reset": true, 00:11:56.302 "nvme_admin": false, 00:11:56.302 "nvme_io": false, 00:11:56.302 "nvme_io_md": false, 00:11:56.302 "write_zeroes": true, 00:11:56.302 "zcopy": true, 00:11:56.302 "get_zone_info": false, 00:11:56.302 "zone_management": false, 00:11:56.302 "zone_append": false, 00:11:56.302 "compare": false, 00:11:56.302 "compare_and_write": false, 00:11:56.302 "abort": true, 00:11:56.302 "seek_hole": false, 00:11:56.302 "seek_data": false, 00:11:56.302 "copy": true, 00:11:56.302 "nvme_iov_md": false 00:11:56.302 }, 00:11:56.302 "memory_domains": [ 00:11:56.302 { 00:11:56.302 "dma_device_id": "system", 00:11:56.302 "dma_device_type": 1 00:11:56.302 }, 00:11:56.302 { 00:11:56.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.302 "dma_device_type": 2 00:11:56.302 } 00:11:56.302 ], 00:11:56.302 "driver_specific": {} 00:11:56.302 }' 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.302 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:56.303 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.561 "name": "BaseBdev3", 00:11:56.561 "aliases": [ 00:11:56.561 "ed250a5d-42f3-11ef-9f7f-e9a656123a8b" 00:11:56.561 ], 00:11:56.561 "product_name": "Malloc disk", 00:11:56.561 "block_size": 512, 00:11:56.561 "num_blocks": 65536, 00:11:56.561 "uuid": "ed250a5d-42f3-11ef-9f7f-e9a656123a8b", 00:11:56.561 "assigned_rate_limits": { 00:11:56.561 "rw_ios_per_sec": 0, 00:11:56.561 "rw_mbytes_per_sec": 0, 00:11:56.561 "r_mbytes_per_sec": 0, 00:11:56.561 "w_mbytes_per_sec": 0 
00:11:56.561 }, 00:11:56.561 "claimed": true, 00:11:56.561 "claim_type": "exclusive_write", 00:11:56.561 "zoned": false, 00:11:56.561 "supported_io_types": { 00:11:56.561 "read": true, 00:11:56.561 "write": true, 00:11:56.561 "unmap": true, 00:11:56.561 "flush": true, 00:11:56.561 "reset": true, 00:11:56.561 "nvme_admin": false, 00:11:56.561 "nvme_io": false, 00:11:56.561 "nvme_io_md": false, 00:11:56.561 "write_zeroes": true, 00:11:56.561 "zcopy": true, 00:11:56.561 "get_zone_info": false, 00:11:56.561 "zone_management": false, 00:11:56.561 "zone_append": false, 00:11:56.561 "compare": false, 00:11:56.561 "compare_and_write": false, 00:11:56.561 "abort": true, 00:11:56.561 "seek_hole": false, 00:11:56.561 "seek_data": false, 00:11:56.561 "copy": true, 00:11:56.561 "nvme_iov_md": false 00:11:56.561 }, 00:11:56.561 "memory_domains": [ 00:11:56.561 { 00:11:56.561 "dma_device_id": "system", 00:11:56.561 "dma_device_type": 1 00:11:56.561 }, 00:11:56.561 { 00:11:56.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.561 "dma_device_type": 2 00:11:56.561 } 00:11:56.561 ], 00:11:56.561 "driver_specific": {} 00:11:56.561 }' 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.561 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.819 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.819 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.819 21:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.079 [2024-07-15 21:48:12.024933] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.079 21:48:12 
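The has_redundancy raid1 call traced at bdev_raid.sh@276/@213/@214 is why expected_state stays online after BaseBdev1 is deleted: raid1 mirrors data, so one missing base bdev is survivable. The case statement reduces to roughly this (a sketch; any other redundant levels the real helper lists are elided):

    # Sketch of the has_redundancy check (raid1 branch only).
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # mirrored - still online with a base bdev missing
            *) return 1 ;;
        esac
    }

    has_redundancy raid1 && expected_state=online || expected_state=offline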
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.079 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.338 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.338 "name": "Existed_Raid", 00:11:57.338 "uuid": "ebf29f1e-42f3-11ef-9f7f-e9a656123a8b", 00:11:57.338 "strip_size_kb": 0, 00:11:57.338 "state": "online", 00:11:57.338 "raid_level": "raid1", 00:11:57.338 "superblock": true, 00:11:57.338 "num_base_bdevs": 3, 00:11:57.338 "num_base_bdevs_discovered": 2, 00:11:57.338 "num_base_bdevs_operational": 2, 00:11:57.338 "base_bdevs_list": [ 00:11:57.338 { 00:11:57.338 "name": null, 00:11:57.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.338 "is_configured": false, 00:11:57.338 "data_offset": 2048, 00:11:57.338 "data_size": 63488 00:11:57.338 }, 00:11:57.338 { 00:11:57.338 "name": "BaseBdev2", 00:11:57.338 "uuid": "ec65fb30-42f3-11ef-9f7f-e9a656123a8b", 00:11:57.338 "is_configured": true, 00:11:57.338 "data_offset": 2048, 00:11:57.338 "data_size": 63488 00:11:57.338 }, 00:11:57.338 { 00:11:57.338 "name": "BaseBdev3", 00:11:57.338 "uuid": "ed250a5d-42f3-11ef-9f7f-e9a656123a8b", 00:11:57.338 "is_configured": true, 00:11:57.338 "data_offset": 2048, 00:11:57.338 "data_size": 63488 00:11:57.338 } 00:11:57.338 ] 00:11:57.338 }' 00:11:57.338 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.338 21:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.596 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:57.596 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:57.596 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:57.596 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.854 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:57.854 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.854 21:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:11:58.114 [2024-07-15 21:48:13.047048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.114 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:58.114 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.114 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.114 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:58.371 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:58.371 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.371 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:58.372 [2024-07-15 21:48:13.537469] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.372 [2024-07-15 21:48:13.537517] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.372 [2024-07-15 21:48:13.543716] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.372 [2024-07-15 21:48:13.543748] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.372 [2024-07-15 21:48:13.543767] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b8be634a00 name Existed_Raid, state offline 00:11:58.372 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:58.372 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:58.372 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.372 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:58.938 21:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.938 BaseBdev2 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- 
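With the old array destructed, the test rebuilds its base devices. Each Malloc base is created as 32 MiB with a 512-byte block size, which is exactly the num_blocks 65536 reported in every dump in this trace. A condensed sketch of the rebuild-plus-wait step, using the names from the trace:

    # Recreate the base bdevs: 32 MiB / 512 B blocks = 65536 blocks each.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    for name in BaseBdev2 BaseBdev3; do
        rpc bdev_malloc_create 32 512 -b "$name"
        rpc bdev_wait_for_examine
        rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null   # waitforbdev equivalent
    done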
common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:58.938 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.197 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.455 [ 00:11:59.455 { 00:11:59.455 "name": "BaseBdev2", 00:11:59.455 "aliases": [ 00:11:59.455 "efd99c37-42f3-11ef-9f7f-e9a656123a8b" 00:11:59.455 ], 00:11:59.455 "product_name": "Malloc disk", 00:11:59.455 "block_size": 512, 00:11:59.455 "num_blocks": 65536, 00:11:59.455 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:11:59.455 "assigned_rate_limits": { 00:11:59.455 "rw_ios_per_sec": 0, 00:11:59.455 "rw_mbytes_per_sec": 0, 00:11:59.455 "r_mbytes_per_sec": 0, 00:11:59.455 "w_mbytes_per_sec": 0 00:11:59.455 }, 00:11:59.455 "claimed": false, 00:11:59.455 "zoned": false, 00:11:59.455 "supported_io_types": { 00:11:59.455 "read": true, 00:11:59.455 "write": true, 00:11:59.455 "unmap": true, 00:11:59.455 "flush": true, 00:11:59.455 "reset": true, 00:11:59.455 "nvme_admin": false, 00:11:59.455 "nvme_io": false, 00:11:59.455 "nvme_io_md": false, 00:11:59.455 "write_zeroes": true, 00:11:59.455 "zcopy": true, 00:11:59.455 "get_zone_info": false, 00:11:59.455 "zone_management": false, 00:11:59.455 "zone_append": false, 00:11:59.455 "compare": false, 00:11:59.455 "compare_and_write": false, 00:11:59.455 "abort": true, 00:11:59.455 "seek_hole": false, 00:11:59.455 "seek_data": false, 00:11:59.455 "copy": true, 00:11:59.455 "nvme_iov_md": false 00:11:59.455 }, 00:11:59.455 "memory_domains": [ 00:11:59.455 { 00:11:59.455 "dma_device_id": "system", 00:11:59.455 "dma_device_type": 1 00:11:59.455 }, 00:11:59.455 { 00:11:59.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.455 "dma_device_type": 2 00:11:59.455 } 00:11:59.455 ], 00:11:59.455 "driver_specific": {} 00:11:59.455 } 00:11:59.455 ] 00:11:59.455 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:11:59.455 21:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:59.455 21:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:59.455 21:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.712 BaseBdev3 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:11:59.712 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.970 21:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.229 [ 00:12:00.229 { 00:12:00.229 "name": "BaseBdev3", 00:12:00.229 "aliases": [ 00:12:00.229 "f0477a01-42f3-11ef-9f7f-e9a656123a8b" 00:12:00.229 ], 00:12:00.229 "product_name": "Malloc disk", 00:12:00.229 "block_size": 512, 00:12:00.229 "num_blocks": 65536, 00:12:00.229 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:00.229 "assigned_rate_limits": { 00:12:00.229 "rw_ios_per_sec": 0, 00:12:00.229 "rw_mbytes_per_sec": 0, 00:12:00.229 "r_mbytes_per_sec": 0, 00:12:00.229 "w_mbytes_per_sec": 0 00:12:00.229 }, 00:12:00.229 "claimed": false, 00:12:00.229 "zoned": false, 00:12:00.229 "supported_io_types": { 00:12:00.229 "read": true, 00:12:00.229 "write": true, 00:12:00.229 "unmap": true, 00:12:00.229 "flush": true, 00:12:00.229 "reset": true, 00:12:00.229 "nvme_admin": false, 00:12:00.229 "nvme_io": false, 00:12:00.229 "nvme_io_md": false, 00:12:00.229 "write_zeroes": true, 00:12:00.229 "zcopy": true, 00:12:00.229 "get_zone_info": false, 00:12:00.229 "zone_management": false, 00:12:00.229 "zone_append": false, 00:12:00.229 "compare": false, 00:12:00.229 "compare_and_write": false, 00:12:00.229 "abort": true, 00:12:00.229 "seek_hole": false, 00:12:00.229 "seek_data": false, 00:12:00.229 "copy": true, 00:12:00.229 "nvme_iov_md": false 00:12:00.229 }, 00:12:00.229 "memory_domains": [ 00:12:00.229 { 00:12:00.229 "dma_device_id": "system", 00:12:00.229 "dma_device_type": 1 00:12:00.229 }, 00:12:00.229 { 00:12:00.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.229 "dma_device_type": 2 00:12:00.229 } 00:12:00.229 ], 00:12:00.229 "driver_specific": {} 00:12:00.229 } 00:12:00.229 ] 00:12:00.229 21:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:12:00.229 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:00.229 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:00.229 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:00.487 [2024-07-15 21:48:15.467703] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.487 [2024-07-15 21:48:15.467782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.487 [2024-07-15 21:48:15.467806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.487 [2024-07-15 21:48:15.468430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.487 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.745 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.745 "name": "Existed_Raid", 00:12:00.745 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:00.745 "strip_size_kb": 0, 00:12:00.745 "state": "configuring", 00:12:00.745 "raid_level": "raid1", 00:12:00.745 "superblock": true, 00:12:00.745 "num_base_bdevs": 3, 00:12:00.745 "num_base_bdevs_discovered": 2, 00:12:00.745 "num_base_bdevs_operational": 3, 00:12:00.745 "base_bdevs_list": [ 00:12:00.745 { 00:12:00.745 "name": "BaseBdev1", 00:12:00.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.745 "is_configured": false, 00:12:00.745 "data_offset": 0, 00:12:00.745 "data_size": 0 00:12:00.745 }, 00:12:00.745 { 00:12:00.745 "name": "BaseBdev2", 00:12:00.745 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:00.745 "is_configured": true, 00:12:00.745 "data_offset": 2048, 00:12:00.745 "data_size": 63488 00:12:00.745 }, 00:12:00.745 { 00:12:00.745 "name": "BaseBdev3", 00:12:00.745 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:00.745 "is_configured": true, 00:12:00.745 "data_offset": 2048, 00:12:00.745 "data_size": 63488 00:12:00.745 } 00:12:00.745 ] 00:12:00.745 }' 00:12:00.745 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.745 21:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.003 21:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:01.003 [2024-07-15 21:48:16.175769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.003 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- 
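Every verify_raid_bdev_state call in this trace follows the same recipe: fetch all raid bdevs, isolate the one under test by name with jq, then compare individual fields against the expected values. A hedged sketch of that comparison (verify_state_sketch is an illustrative name; the in-tree helper in bdev_raid.sh checks more fields):

    # Sketch of the verify_raid_bdev_state pattern used throughout this test.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    verify_state_sketch() {
        local name=$1 expected_state=$2 level=$3 operational=$4 info
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<< "$info") == "$level" ]] || return 1
        [[ $(jq -r .num_base_bdevs_operational <<< "$info") == "$operational" ]] || return 1
    }

    verify_state_sketch Existed_Raid configuring raid1 3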
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.260 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.517 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.517 "name": "Existed_Raid", 00:12:01.517 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:01.517 "strip_size_kb": 0, 00:12:01.517 "state": "configuring", 00:12:01.517 "raid_level": "raid1", 00:12:01.517 "superblock": true, 00:12:01.517 "num_base_bdevs": 3, 00:12:01.517 "num_base_bdevs_discovered": 1, 00:12:01.517 "num_base_bdevs_operational": 3, 00:12:01.517 "base_bdevs_list": [ 00:12:01.517 { 00:12:01.517 "name": "BaseBdev1", 00:12:01.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.517 "is_configured": false, 00:12:01.517 "data_offset": 0, 00:12:01.517 "data_size": 0 00:12:01.517 }, 00:12:01.517 { 00:12:01.517 "name": null, 00:12:01.517 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:01.517 "is_configured": false, 00:12:01.517 "data_offset": 2048, 00:12:01.517 "data_size": 63488 00:12:01.517 }, 00:12:01.517 { 00:12:01.517 "name": "BaseBdev3", 00:12:01.517 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:01.517 "is_configured": true, 00:12:01.517 "data_offset": 2048, 00:12:01.517 "data_size": 63488 00:12:01.517 } 00:12:01.517 ] 00:12:01.517 }' 00:12:01.517 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.517 21:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.775 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.775 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.775 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:01.775 21:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.034 [2024-07-15 21:48:17.203937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.034 BaseBdev1 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:02.034 21:48:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.292 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.550 [ 00:12:02.550 { 00:12:02.550 "name": "BaseBdev1", 00:12:02.550 "aliases": [ 00:12:02.550 "f1be4085-42f3-11ef-9f7f-e9a656123a8b" 00:12:02.550 ], 00:12:02.550 "product_name": "Malloc disk", 00:12:02.550 "block_size": 512, 00:12:02.550 "num_blocks": 65536, 00:12:02.550 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:02.550 "assigned_rate_limits": { 00:12:02.550 "rw_ios_per_sec": 0, 00:12:02.550 "rw_mbytes_per_sec": 0, 00:12:02.550 "r_mbytes_per_sec": 0, 00:12:02.550 "w_mbytes_per_sec": 0 00:12:02.550 }, 00:12:02.550 "claimed": true, 00:12:02.550 "claim_type": "exclusive_write", 00:12:02.550 "zoned": false, 00:12:02.550 "supported_io_types": { 00:12:02.550 "read": true, 00:12:02.550 "write": true, 00:12:02.550 "unmap": true, 00:12:02.550 "flush": true, 00:12:02.550 "reset": true, 00:12:02.550 "nvme_admin": false, 00:12:02.550 "nvme_io": false, 00:12:02.550 "nvme_io_md": false, 00:12:02.550 "write_zeroes": true, 00:12:02.550 "zcopy": true, 00:12:02.550 "get_zone_info": false, 00:12:02.550 "zone_management": false, 00:12:02.550 "zone_append": false, 00:12:02.550 "compare": false, 00:12:02.550 "compare_and_write": false, 00:12:02.550 "abort": true, 00:12:02.550 "seek_hole": false, 00:12:02.550 "seek_data": false, 00:12:02.550 "copy": true, 00:12:02.550 "nvme_iov_md": false 00:12:02.550 }, 00:12:02.550 "memory_domains": [ 00:12:02.550 { 00:12:02.550 "dma_device_id": "system", 00:12:02.550 "dma_device_type": 1 00:12:02.550 }, 00:12:02.550 { 00:12:02.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.550 "dma_device_type": 2 00:12:02.550 } 00:12:02.550 ], 00:12:02.550 "driver_specific": {} 00:12:02.550 } 00:12:02.550 ] 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.550 21:48:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.811 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.811 "name": "Existed_Raid", 00:12:02.811 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:02.811 "strip_size_kb": 0, 00:12:02.811 "state": "configuring", 00:12:02.811 "raid_level": "raid1", 00:12:02.811 "superblock": true, 00:12:02.811 "num_base_bdevs": 3, 00:12:02.811 "num_base_bdevs_discovered": 2, 00:12:02.811 "num_base_bdevs_operational": 3, 00:12:02.811 "base_bdevs_list": [ 00:12:02.811 { 00:12:02.811 "name": "BaseBdev1", 00:12:02.811 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:02.811 "is_configured": true, 00:12:02.811 "data_offset": 2048, 00:12:02.811 "data_size": 63488 00:12:02.811 }, 00:12:02.811 { 00:12:02.811 "name": null, 00:12:02.811 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:02.811 "is_configured": false, 00:12:02.811 "data_offset": 2048, 00:12:02.811 "data_size": 63488 00:12:02.811 }, 00:12:02.811 { 00:12:02.811 "name": "BaseBdev3", 00:12:02.811 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:02.811 "is_configured": true, 00:12:02.811 "data_offset": 2048, 00:12:02.811 "data_size": 63488 00:12:02.811 } 00:12:02.811 ] 00:12:02.811 }' 00:12:02.811 21:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.811 21:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.069 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.069 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.329 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:03.329 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:03.587 [2024-07-15 21:48:18.551844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.587 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.845 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.845 "name": "Existed_Raid", 00:12:03.845 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:03.845 "strip_size_kb": 0, 00:12:03.845 "state": "configuring", 00:12:03.845 "raid_level": "raid1", 00:12:03.845 "superblock": true, 00:12:03.845 "num_base_bdevs": 3, 00:12:03.845 "num_base_bdevs_discovered": 1, 00:12:03.845 "num_base_bdevs_operational": 3, 00:12:03.845 "base_bdevs_list": [ 00:12:03.845 { 00:12:03.845 "name": "BaseBdev1", 00:12:03.845 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:03.845 "is_configured": true, 00:12:03.845 "data_offset": 2048, 00:12:03.845 "data_size": 63488 00:12:03.845 }, 00:12:03.845 { 00:12:03.845 "name": null, 00:12:03.845 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:03.845 "is_configured": false, 00:12:03.845 "data_offset": 2048, 00:12:03.845 "data_size": 63488 00:12:03.845 }, 00:12:03.845 { 00:12:03.845 "name": null, 00:12:03.845 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:03.845 "is_configured": false, 00:12:03.845 "data_offset": 2048, 00:12:03.845 "data_size": 63488 00:12:03.845 } 00:12:03.845 ] 00:12:03.845 }' 00:12:03.845 21:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.845 21:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.104 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.104 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:04.104 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:04.104 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:04.361 [2024-07-15 21:48:19.491886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.361 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.361 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.361 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
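BaseBdev3 was detached with bdev_raid_remove_base_bdev, so its slot reports is_configured: false; bdev_raid_add_base_bdev then claims the same bdev back into the array, as traced at @319-@321. The check-and-readd step, stand-alone:

    # Verify the slot is empty, then re-attach the base bdev to the raid.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    slot_cfg=$(rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured')
    if [[ $slot_cfg == false ]]; then
        rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    fi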
00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.362 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.620 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.620 "name": "Existed_Raid", 00:12:04.620 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:04.620 "strip_size_kb": 0, 00:12:04.620 "state": "configuring", 00:12:04.620 "raid_level": "raid1", 00:12:04.620 "superblock": true, 00:12:04.620 "num_base_bdevs": 3, 00:12:04.620 "num_base_bdevs_discovered": 2, 00:12:04.620 "num_base_bdevs_operational": 3, 00:12:04.620 "base_bdevs_list": [ 00:12:04.620 { 00:12:04.620 "name": "BaseBdev1", 00:12:04.620 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:04.620 "is_configured": true, 00:12:04.620 "data_offset": 2048, 00:12:04.620 "data_size": 63488 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "name": null, 00:12:04.620 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:04.620 "is_configured": false, 00:12:04.620 "data_offset": 2048, 00:12:04.620 "data_size": 63488 00:12:04.620 }, 00:12:04.620 { 00:12:04.620 "name": "BaseBdev3", 00:12:04.620 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:04.620 "is_configured": true, 00:12:04.620 "data_offset": 2048, 00:12:04.620 "data_size": 63488 00:12:04.620 } 00:12:04.620 ] 00:12:04.620 }' 00:12:04.620 21:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.620 21:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.878 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.878 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.136 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:05.136 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:05.394 [2024-07-15 21:48:20.499921] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.394 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.652 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.652 "name": "Existed_Raid", 00:12:05.652 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:05.652 "strip_size_kb": 0, 00:12:05.652 "state": "configuring", 00:12:05.652 "raid_level": "raid1", 00:12:05.652 "superblock": true, 00:12:05.652 "num_base_bdevs": 3, 00:12:05.652 "num_base_bdevs_discovered": 1, 00:12:05.652 "num_base_bdevs_operational": 3, 00:12:05.652 "base_bdevs_list": [ 00:12:05.652 { 00:12:05.652 "name": null, 00:12:05.652 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:05.652 "is_configured": false, 00:12:05.652 "data_offset": 2048, 00:12:05.652 "data_size": 63488 00:12:05.652 }, 00:12:05.652 { 00:12:05.652 "name": null, 00:12:05.652 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:05.652 "is_configured": false, 00:12:05.652 "data_offset": 2048, 00:12:05.652 "data_size": 63488 00:12:05.652 }, 00:12:05.652 { 00:12:05.652 "name": "BaseBdev3", 00:12:05.652 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:05.652 "is_configured": true, 00:12:05.652 "data_offset": 2048, 00:12:05.652 "data_size": 63488 00:12:05.652 } 00:12:05.652 ] 00:12:05.652 }' 00:12:05.652 21:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.652 21:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.911 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.911 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.169 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:06.169 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:06.481 [2024-07-15 21:48:21.537837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.481 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.739 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.739 "name": "Existed_Raid", 00:12:06.739 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:06.739 "strip_size_kb": 0, 00:12:06.739 "state": "configuring", 00:12:06.739 "raid_level": "raid1", 00:12:06.739 "superblock": true, 00:12:06.739 "num_base_bdevs": 3, 00:12:06.739 "num_base_bdevs_discovered": 2, 00:12:06.739 "num_base_bdevs_operational": 3, 00:12:06.739 "base_bdevs_list": [ 00:12:06.739 { 00:12:06.739 "name": null, 00:12:06.739 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:06.739 "is_configured": false, 00:12:06.739 "data_offset": 2048, 00:12:06.739 "data_size": 63488 00:12:06.739 }, 00:12:06.739 { 00:12:06.739 "name": "BaseBdev2", 00:12:06.739 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:06.739 "is_configured": true, 00:12:06.739 "data_offset": 2048, 00:12:06.739 "data_size": 63488 00:12:06.739 }, 00:12:06.739 { 00:12:06.739 "name": "BaseBdev3", 00:12:06.739 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:06.739 "is_configured": true, 00:12:06.739 "data_offset": 2048, 00:12:06.739 "data_size": 63488 00:12:06.739 } 00:12:06.739 ] 00:12:06.739 }' 00:12:06.739 21:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.739 21:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.997 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.997 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.255 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:07.255 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.255 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:07.513 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f1be4085-42f3-11ef-9f7f-e9a656123a8b 00:12:07.771 [2024-07-15 21:48:22.774031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:07.771 [2024-07-15 21:48:22.774124] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17b8be634f00 00:12:07.771 [2024-07-15 21:48:22.774130] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.771 [2024-07-15 21:48:22.774179] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b8be697e20 00:12:07.771 [2024-07-15 21:48:22.774253] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17b8be634f00 00:12:07.771 [2024-07-15 21:48:22.774256] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17b8be634f00 00:12:07.771 [2024-07-15 21:48:22.774274] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.771 NewBaseBdev 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:07.771 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:08.029 21:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:08.029 [ 00:12:08.029 { 00:12:08.029 "name": "NewBaseBdev", 00:12:08.029 "aliases": [ 00:12:08.029 "f1be4085-42f3-11ef-9f7f-e9a656123a8b" 00:12:08.029 ], 00:12:08.029 "product_name": "Malloc disk", 00:12:08.029 "block_size": 512, 00:12:08.029 "num_blocks": 65536, 00:12:08.029 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.029 "assigned_rate_limits": { 00:12:08.029 "rw_ios_per_sec": 0, 00:12:08.029 "rw_mbytes_per_sec": 0, 00:12:08.029 "r_mbytes_per_sec": 0, 00:12:08.029 "w_mbytes_per_sec": 0 00:12:08.029 }, 00:12:08.029 "claimed": true, 00:12:08.029 "claim_type": "exclusive_write", 00:12:08.029 "zoned": false, 00:12:08.029 "supported_io_types": { 00:12:08.029 "read": true, 00:12:08.029 "write": true, 00:12:08.029 "unmap": true, 00:12:08.029 "flush": true, 00:12:08.029 "reset": true, 00:12:08.029 "nvme_admin": false, 00:12:08.029 "nvme_io": false, 00:12:08.029 "nvme_io_md": false, 00:12:08.029 "write_zeroes": true, 00:12:08.029 "zcopy": true, 00:12:08.029 "get_zone_info": false, 00:12:08.029 "zone_management": false, 00:12:08.029 "zone_append": false, 00:12:08.029 "compare": false, 00:12:08.029 "compare_and_write": false, 00:12:08.029 "abort": true, 00:12:08.029 "seek_hole": false, 00:12:08.029 "seek_data": false, 00:12:08.029 "copy": true, 00:12:08.029 "nvme_iov_md": false 00:12:08.029 }, 00:12:08.029 "memory_domains": [ 00:12:08.029 { 00:12:08.030 "dma_device_id": "system", 00:12:08.030 "dma_device_type": 1 00:12:08.030 }, 00:12:08.030 { 00:12:08.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.030 "dma_device_type": 2 00:12:08.030 } 00:12:08.030 ], 00:12:08.030 "driver_specific": {} 00:12:08.030 } 00:12:08.030 ] 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.287 21:48:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.287 "name": "Existed_Raid", 00:12:08.287 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.287 "strip_size_kb": 0, 00:12:08.287 "state": "online", 00:12:08.287 "raid_level": "raid1", 00:12:08.287 "superblock": true, 00:12:08.287 "num_base_bdevs": 3, 00:12:08.287 "num_base_bdevs_discovered": 3, 00:12:08.287 "num_base_bdevs_operational": 3, 00:12:08.287 "base_bdevs_list": [ 00:12:08.287 { 00:12:08.287 "name": "NewBaseBdev", 00:12:08.287 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.287 "is_configured": true, 00:12:08.287 "data_offset": 2048, 00:12:08.287 "data_size": 63488 00:12:08.287 }, 00:12:08.287 { 00:12:08.287 "name": "BaseBdev2", 00:12:08.287 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.287 "is_configured": true, 00:12:08.287 "data_offset": 2048, 00:12:08.287 "data_size": 63488 00:12:08.287 }, 00:12:08.287 { 00:12:08.287 "name": "BaseBdev3", 00:12:08.287 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.287 "is_configured": true, 00:12:08.287 "data_offset": 2048, 00:12:08.287 "data_size": 63488 00:12:08.287 } 00:12:08.287 ] 00:12:08.287 }' 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.287 21:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:08.546 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:08.546 21:48:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:08.804 [2024-07-15 21:48:23.905970] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:08.804 "name": "Existed_Raid", 00:12:08.804 "aliases": [ 00:12:08.804 "f0b556f7-42f3-11ef-9f7f-e9a656123a8b" 00:12:08.804 ], 00:12:08.804 "product_name": "Raid Volume", 00:12:08.804 "block_size": 512, 00:12:08.804 "num_blocks": 63488, 00:12:08.804 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.804 "assigned_rate_limits": { 00:12:08.804 "rw_ios_per_sec": 0, 00:12:08.804 "rw_mbytes_per_sec": 0, 00:12:08.804 "r_mbytes_per_sec": 0, 00:12:08.804 "w_mbytes_per_sec": 0 00:12:08.804 }, 00:12:08.804 "claimed": false, 00:12:08.804 "zoned": false, 00:12:08.804 "supported_io_types": { 00:12:08.804 "read": true, 00:12:08.804 "write": true, 00:12:08.804 "unmap": false, 00:12:08.804 "flush": false, 00:12:08.804 "reset": true, 00:12:08.804 "nvme_admin": false, 00:12:08.804 "nvme_io": false, 00:12:08.804 "nvme_io_md": false, 00:12:08.804 "write_zeroes": true, 00:12:08.804 "zcopy": false, 00:12:08.804 "get_zone_info": false, 00:12:08.804 "zone_management": false, 00:12:08.804 "zone_append": false, 00:12:08.804 "compare": false, 00:12:08.804 "compare_and_write": false, 00:12:08.804 "abort": false, 00:12:08.804 "seek_hole": false, 00:12:08.804 "seek_data": false, 00:12:08.804 "copy": false, 00:12:08.804 "nvme_iov_md": false 00:12:08.804 }, 00:12:08.804 "memory_domains": [ 00:12:08.804 { 00:12:08.804 "dma_device_id": "system", 00:12:08.804 "dma_device_type": 1 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.804 "dma_device_type": 2 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "dma_device_id": "system", 00:12:08.804 "dma_device_type": 1 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.804 "dma_device_type": 2 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "dma_device_id": "system", 00:12:08.804 "dma_device_type": 1 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.804 "dma_device_type": 2 00:12:08.804 } 00:12:08.804 ], 00:12:08.804 "driver_specific": { 00:12:08.804 "raid": { 00:12:08.804 "uuid": "f0b556f7-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.804 "strip_size_kb": 0, 00:12:08.804 "state": "online", 00:12:08.804 "raid_level": "raid1", 00:12:08.804 "superblock": true, 00:12:08.804 "num_base_bdevs": 3, 00:12:08.804 "num_base_bdevs_discovered": 3, 00:12:08.804 "num_base_bdevs_operational": 3, 00:12:08.804 "base_bdevs_list": [ 00:12:08.804 { 00:12:08.804 "name": "NewBaseBdev", 00:12:08.804 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.804 "is_configured": true, 00:12:08.804 "data_offset": 2048, 00:12:08.804 "data_size": 63488 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "name": "BaseBdev2", 00:12:08.804 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.804 "is_configured": true, 00:12:08.804 "data_offset": 2048, 00:12:08.804 "data_size": 63488 00:12:08.804 }, 00:12:08.804 { 00:12:08.804 "name": "BaseBdev3", 00:12:08.804 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:08.804 "is_configured": true, 00:12:08.804 "data_offset": 2048, 00:12:08.804 "data_size": 63488 00:12:08.804 } 00:12:08.804 ] 00:12:08.804 } 00:12:08.804 } 00:12:08.804 }' 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:08.804 BaseBdev2 00:12:08.804 BaseBdev3' 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:08.804 21:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:09.063 "name": "NewBaseBdev", 00:12:09.063 "aliases": [ 00:12:09.063 "f1be4085-42f3-11ef-9f7f-e9a656123a8b" 00:12:09.063 ], 00:12:09.063 "product_name": "Malloc disk", 00:12:09.063 "block_size": 512, 00:12:09.063 "num_blocks": 65536, 00:12:09.063 "uuid": "f1be4085-42f3-11ef-9f7f-e9a656123a8b", 00:12:09.063 "assigned_rate_limits": { 00:12:09.063 "rw_ios_per_sec": 0, 00:12:09.063 "rw_mbytes_per_sec": 0, 00:12:09.063 "r_mbytes_per_sec": 0, 00:12:09.063 "w_mbytes_per_sec": 0 00:12:09.063 }, 00:12:09.063 "claimed": true, 00:12:09.063 "claim_type": "exclusive_write", 00:12:09.063 "zoned": false, 00:12:09.063 "supported_io_types": { 00:12:09.063 "read": true, 00:12:09.063 "write": true, 00:12:09.063 "unmap": true, 00:12:09.063 "flush": true, 00:12:09.063 "reset": true, 00:12:09.063 "nvme_admin": false, 00:12:09.063 "nvme_io": false, 00:12:09.063 "nvme_io_md": false, 00:12:09.063 "write_zeroes": true, 00:12:09.063 "zcopy": true, 00:12:09.063 "get_zone_info": false, 00:12:09.063 "zone_management": false, 00:12:09.063 "zone_append": false, 00:12:09.063 "compare": false, 00:12:09.063 "compare_and_write": false, 00:12:09.063 "abort": true, 00:12:09.063 "seek_hole": false, 00:12:09.063 "seek_data": false, 00:12:09.063 "copy": true, 00:12:09.063 "nvme_iov_md": false 00:12:09.063 }, 00:12:09.063 "memory_domains": [ 00:12:09.063 { 00:12:09.063 "dma_device_id": "system", 00:12:09.063 "dma_device_type": 1 00:12:09.063 }, 00:12:09.063 { 00:12:09.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.063 "dma_device_type": 2 00:12:09.063 } 00:12:09.063 ], 00:12:09.063 "driver_specific": {} 00:12:09.063 }' 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.063 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:12:09.321 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:09.321 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:09.321 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:09.321 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:09.579 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:09.579 "name": "BaseBdev2", 00:12:09.579 "aliases": [ 00:12:09.579 "efd99c37-42f3-11ef-9f7f-e9a656123a8b" 00:12:09.579 ], 00:12:09.579 "product_name": "Malloc disk", 00:12:09.579 "block_size": 512, 00:12:09.579 "num_blocks": 65536, 00:12:09.579 "uuid": "efd99c37-42f3-11ef-9f7f-e9a656123a8b", 00:12:09.579 "assigned_rate_limits": { 00:12:09.579 "rw_ios_per_sec": 0, 00:12:09.579 "rw_mbytes_per_sec": 0, 00:12:09.579 "r_mbytes_per_sec": 0, 00:12:09.579 "w_mbytes_per_sec": 0 00:12:09.579 }, 00:12:09.579 "claimed": true, 00:12:09.579 "claim_type": "exclusive_write", 00:12:09.579 "zoned": false, 00:12:09.579 "supported_io_types": { 00:12:09.579 "read": true, 00:12:09.579 "write": true, 00:12:09.579 "unmap": true, 00:12:09.579 "flush": true, 00:12:09.579 "reset": true, 00:12:09.579 "nvme_admin": false, 00:12:09.579 "nvme_io": false, 00:12:09.579 "nvme_io_md": false, 00:12:09.579 "write_zeroes": true, 00:12:09.579 "zcopy": true, 00:12:09.579 "get_zone_info": false, 00:12:09.579 "zone_management": false, 00:12:09.579 "zone_append": false, 00:12:09.579 "compare": false, 00:12:09.579 "compare_and_write": false, 00:12:09.579 "abort": true, 00:12:09.580 "seek_hole": false, 00:12:09.580 "seek_data": false, 00:12:09.580 "copy": true, 00:12:09.580 "nvme_iov_md": false 00:12:09.580 }, 00:12:09.580 "memory_domains": [ 00:12:09.580 { 00:12:09.580 "dma_device_id": "system", 00:12:09.580 "dma_device_type": 1 00:12:09.580 }, 00:12:09.580 { 00:12:09.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.580 "dma_device_type": 2 00:12:09.580 } 00:12:09.580 ], 00:12:09.580 "driver_specific": {} 00:12:09.580 }' 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:09.580 21:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:09.580 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:09.837 "name": "BaseBdev3", 00:12:09.837 "aliases": [ 00:12:09.837 "f0477a01-42f3-11ef-9f7f-e9a656123a8b" 00:12:09.837 ], 00:12:09.837 "product_name": "Malloc disk", 00:12:09.837 "block_size": 512, 00:12:09.837 "num_blocks": 65536, 00:12:09.837 "uuid": "f0477a01-42f3-11ef-9f7f-e9a656123a8b", 00:12:09.837 "assigned_rate_limits": { 00:12:09.837 "rw_ios_per_sec": 0, 00:12:09.837 "rw_mbytes_per_sec": 0, 00:12:09.837 "r_mbytes_per_sec": 0, 00:12:09.837 "w_mbytes_per_sec": 0 00:12:09.837 }, 00:12:09.837 "claimed": true, 00:12:09.837 "claim_type": "exclusive_write", 00:12:09.837 "zoned": false, 00:12:09.837 "supported_io_types": { 00:12:09.837 "read": true, 00:12:09.837 "write": true, 00:12:09.837 "unmap": true, 00:12:09.837 "flush": true, 00:12:09.837 "reset": true, 00:12:09.837 "nvme_admin": false, 00:12:09.837 "nvme_io": false, 00:12:09.837 "nvme_io_md": false, 00:12:09.837 "write_zeroes": true, 00:12:09.837 "zcopy": true, 00:12:09.837 "get_zone_info": false, 00:12:09.837 "zone_management": false, 00:12:09.837 "zone_append": false, 00:12:09.837 "compare": false, 00:12:09.837 "compare_and_write": false, 00:12:09.837 "abort": true, 00:12:09.837 "seek_hole": false, 00:12:09.837 "seek_data": false, 00:12:09.837 "copy": true, 00:12:09.837 "nvme_iov_md": false 00:12:09.837 }, 00:12:09.837 "memory_domains": [ 00:12:09.837 { 00:12:09.837 "dma_device_id": "system", 00:12:09.837 "dma_device_type": 1 00:12:09.837 }, 00:12:09.837 { 00:12:09.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.837 "dma_device_type": 2 00:12:09.837 } 00:12:09.837 ], 00:12:09.837 "driver_specific": {} 00:12:09.837 }' 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:09.837 21:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:12:10.095 [2024-07-15 21:48:25.077970] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.095 [2024-07-15 21:48:25.077988] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.095 [2024-07-15 21:48:25.078025] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.095 [2024-07-15 21:48:25.078094] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.095 [2024-07-15 21:48:25.078098] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17b8be634f00 name Existed_Raid, state offline 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56829 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 56829 ']' 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 56829 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 56829 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:12:10.095 killing process with pid 56829 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 56829' 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 56829 00:12:10.095 [2024-07-15 21:48:25.103363] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 56829 00:12:10.095 [2024-07-15 21:48:25.120227] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:10.095 00:12:10.095 real 0m22.015s 00:12:10.095 user 0m40.131s 00:12:10.095 sys 0m3.136s 00:12:10.095 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:10.095 ************************************ 00:12:10.095 END TEST raid_state_function_test_sb 00:12:10.096 ************************************ 00:12:10.096 21:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.355 21:48:25 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:12:10.355 21:48:25 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:10.355 21:48:25 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:12:10.355 21:48:25 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:10.355 21:48:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.355 ************************************ 00:12:10.355 START TEST raid_superblock_test 00:12:10.355 ************************************ 00:12:10.355 21:48:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 3 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57549 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57549 /var/tmp/spdk-raid.sock 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 57549 ']' 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:10.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:10.355 21:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.355 [2024-07-15 21:48:25.337666] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
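
Note: the trace above shows raid_superblock_test launching a dedicated bdev_svc stub (pid 57549) on its own UNIX-domain RPC socket; every rpc.py call in this test carries -s /var/tmp/spdk-raid.sock to reach that instance. A minimal bring-up sketch in the scripts' own bash, with rpc_get_methods assumed as the readiness probe (the traced waitforlisten helper does more bookkeeping):

    # Start the stub bdev app with raid debug logging on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers JSON-RPC before issuing any bdev_* calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
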
00:12:10.355 [2024-07-15 21:48:25.337978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:10.921 EAL: TSC is not safe to use in SMP mode 00:12:10.921 EAL: TSC is not invariant 00:12:10.921 [2024-07-15 21:48:25.914350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.921 [2024-07-15 21:48:25.993664] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:10.921 [2024-07-15 21:48:25.995885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.921 [2024-07-15 21:48:25.996742] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.921 [2024-07-15 21:48:25.996771] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.487 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:11.745 malloc1 00:12:11.745 21:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.003 [2024-07-15 21:48:27.035842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.003 [2024-07-15 21:48:27.035912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.003 [2024-07-15 21:48:27.035941] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34780 00:12:12.003 [2024-07-15 21:48:27.035950] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.003 [2024-07-15 21:48:27.036826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.003 [2024-07-15 21:48:27.036850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.003 pt1 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.003 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:12.261 malloc2 00:12:12.261 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.520 [2024-07-15 21:48:27.575845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.520 [2024-07-15 21:48:27.575910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.520 [2024-07-15 21:48:27.575937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34c80 00:12:12.520 [2024-07-15 21:48:27.575945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.520 [2024-07-15 21:48:27.576621] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.520 [2024-07-15 21:48:27.576646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.520 pt2 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.520 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:12.778 malloc3 00:12:12.778 21:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:13.037 [2024-07-15 21:48:28.091863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:13.037 [2024-07-15 21:48:28.091917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.037 [2024-07-15 21:48:28.091929] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca35180 00:12:13.037 [2024-07-15 21:48:28.091937] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.037 [2024-07-15 21:48:28.092620] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.037 [2024-07-15 21:48:28.092642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:13.037 pt3 00:12:13.037 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:13.037 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:13.037 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:13.295 [2024-07-15 21:48:28.355866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:13.295 [2024-07-15 21:48:28.356523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:13.295 [2024-07-15 21:48:28.356558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:13.295 [2024-07-15 21:48:28.356609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d17fca35400 00:12:13.295 [2024-07-15 21:48:28.356629] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.295 [2024-07-15 21:48:28.356674] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d17fca97e20 00:12:13.295 [2024-07-15 21:48:28.356765] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d17fca35400 00:12:13.295 [2024-07-15 21:48:28.356769] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d17fca35400 00:12:13.295 [2024-07-15 21:48:28.356796] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.295 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.554 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.554 "name": "raid_bdev1", 00:12:13.554 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:13.554 "strip_size_kb": 0, 00:12:13.554 "state": "online", 00:12:13.554 "raid_level": "raid1", 00:12:13.554 "superblock": true, 00:12:13.554 "num_base_bdevs": 3, 00:12:13.554 
"num_base_bdevs_discovered": 3, 00:12:13.554 "num_base_bdevs_operational": 3, 00:12:13.554 "base_bdevs_list": [ 00:12:13.554 { 00:12:13.554 "name": "pt1", 00:12:13.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.554 "is_configured": true, 00:12:13.554 "data_offset": 2048, 00:12:13.554 "data_size": 63488 00:12:13.554 }, 00:12:13.554 { 00:12:13.554 "name": "pt2", 00:12:13.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.554 "is_configured": true, 00:12:13.554 "data_offset": 2048, 00:12:13.554 "data_size": 63488 00:12:13.554 }, 00:12:13.554 { 00:12:13.554 "name": "pt3", 00:12:13.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.554 "is_configured": true, 00:12:13.554 "data_offset": 2048, 00:12:13.554 "data_size": 63488 00:12:13.554 } 00:12:13.554 ] 00:12:13.554 }' 00:12:13.554 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.554 21:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:13.813 21:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:14.071 [2024-07-15 21:48:29.107936] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:14.071 "name": "raid_bdev1", 00:12:14.071 "aliases": [ 00:12:14.071 "f863eab1-42f3-11ef-9f7f-e9a656123a8b" 00:12:14.071 ], 00:12:14.071 "product_name": "Raid Volume", 00:12:14.071 "block_size": 512, 00:12:14.071 "num_blocks": 63488, 00:12:14.071 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:14.071 "assigned_rate_limits": { 00:12:14.071 "rw_ios_per_sec": 0, 00:12:14.071 "rw_mbytes_per_sec": 0, 00:12:14.071 "r_mbytes_per_sec": 0, 00:12:14.071 "w_mbytes_per_sec": 0 00:12:14.071 }, 00:12:14.071 "claimed": false, 00:12:14.071 "zoned": false, 00:12:14.071 "supported_io_types": { 00:12:14.071 "read": true, 00:12:14.071 "write": true, 00:12:14.071 "unmap": false, 00:12:14.071 "flush": false, 00:12:14.071 "reset": true, 00:12:14.071 "nvme_admin": false, 00:12:14.071 "nvme_io": false, 00:12:14.071 "nvme_io_md": false, 00:12:14.071 "write_zeroes": true, 00:12:14.071 "zcopy": false, 00:12:14.071 "get_zone_info": false, 00:12:14.071 "zone_management": false, 00:12:14.071 "zone_append": false, 00:12:14.071 "compare": false, 00:12:14.071 "compare_and_write": false, 00:12:14.071 "abort": false, 00:12:14.071 "seek_hole": false, 00:12:14.071 "seek_data": false, 00:12:14.071 "copy": false, 00:12:14.071 "nvme_iov_md": false 00:12:14.071 }, 00:12:14.071 "memory_domains": [ 00:12:14.071 { 00:12:14.071 "dma_device_id": "system", 00:12:14.071 "dma_device_type": 1 00:12:14.071 }, 00:12:14.071 { 
00:12:14.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.071 "dma_device_type": 2 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "dma_device_id": "system", 00:12:14.071 "dma_device_type": 1 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.071 "dma_device_type": 2 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "dma_device_id": "system", 00:12:14.071 "dma_device_type": 1 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.071 "dma_device_type": 2 00:12:14.071 } 00:12:14.071 ], 00:12:14.071 "driver_specific": { 00:12:14.071 "raid": { 00:12:14.071 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:14.071 "strip_size_kb": 0, 00:12:14.071 "state": "online", 00:12:14.071 "raid_level": "raid1", 00:12:14.071 "superblock": true, 00:12:14.071 "num_base_bdevs": 3, 00:12:14.071 "num_base_bdevs_discovered": 3, 00:12:14.071 "num_base_bdevs_operational": 3, 00:12:14.071 "base_bdevs_list": [ 00:12:14.071 { 00:12:14.071 "name": "pt1", 00:12:14.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.071 "is_configured": true, 00:12:14.071 "data_offset": 2048, 00:12:14.071 "data_size": 63488 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "name": "pt2", 00:12:14.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.071 "is_configured": true, 00:12:14.071 "data_offset": 2048, 00:12:14.071 "data_size": 63488 00:12:14.071 }, 00:12:14.071 { 00:12:14.071 "name": "pt3", 00:12:14.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.071 "is_configured": true, 00:12:14.071 "data_offset": 2048, 00:12:14.071 "data_size": 63488 00:12:14.071 } 00:12:14.071 ] 00:12:14.071 } 00:12:14.071 } 00:12:14.071 }' 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:14.071 pt2 00:12:14.071 pt3' 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:14.071 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.330 "name": "pt1", 00:12:14.330 "aliases": [ 00:12:14.330 "00000000-0000-0000-0000-000000000001" 00:12:14.330 ], 00:12:14.330 "product_name": "passthru", 00:12:14.330 "block_size": 512, 00:12:14.330 "num_blocks": 65536, 00:12:14.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.330 "assigned_rate_limits": { 00:12:14.330 "rw_ios_per_sec": 0, 00:12:14.330 "rw_mbytes_per_sec": 0, 00:12:14.330 "r_mbytes_per_sec": 0, 00:12:14.330 "w_mbytes_per_sec": 0 00:12:14.330 }, 00:12:14.330 "claimed": true, 00:12:14.330 "claim_type": "exclusive_write", 00:12:14.330 "zoned": false, 00:12:14.330 "supported_io_types": { 00:12:14.330 "read": true, 00:12:14.330 "write": true, 00:12:14.330 "unmap": true, 00:12:14.330 "flush": true, 00:12:14.330 "reset": true, 00:12:14.330 "nvme_admin": false, 00:12:14.330 "nvme_io": false, 00:12:14.330 "nvme_io_md": false, 00:12:14.330 "write_zeroes": true, 00:12:14.330 "zcopy": true, 00:12:14.330 "get_zone_info": false, 00:12:14.330 "zone_management": false, 00:12:14.330 "zone_append": false, 00:12:14.330 
"compare": false, 00:12:14.330 "compare_and_write": false, 00:12:14.330 "abort": true, 00:12:14.330 "seek_hole": false, 00:12:14.330 "seek_data": false, 00:12:14.330 "copy": true, 00:12:14.330 "nvme_iov_md": false 00:12:14.330 }, 00:12:14.330 "memory_domains": [ 00:12:14.330 { 00:12:14.330 "dma_device_id": "system", 00:12:14.330 "dma_device_type": 1 00:12:14.330 }, 00:12:14.330 { 00:12:14.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.330 "dma_device_type": 2 00:12:14.330 } 00:12:14.330 ], 00:12:14.330 "driver_specific": { 00:12:14.330 "passthru": { 00:12:14.330 "name": "pt1", 00:12:14.330 "base_bdev_name": "malloc1" 00:12:14.330 } 00:12:14.330 } 00:12:14.330 }' 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:14.330 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.588 "name": "pt2", 00:12:14.588 "aliases": [ 00:12:14.588 "00000000-0000-0000-0000-000000000002" 00:12:14.588 ], 00:12:14.588 "product_name": "passthru", 00:12:14.588 "block_size": 512, 00:12:14.588 "num_blocks": 65536, 00:12:14.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.588 "assigned_rate_limits": { 00:12:14.588 "rw_ios_per_sec": 0, 00:12:14.588 "rw_mbytes_per_sec": 0, 00:12:14.588 "r_mbytes_per_sec": 0, 00:12:14.588 "w_mbytes_per_sec": 0 00:12:14.588 }, 00:12:14.588 "claimed": true, 00:12:14.588 "claim_type": "exclusive_write", 00:12:14.588 "zoned": false, 00:12:14.588 "supported_io_types": { 00:12:14.588 "read": true, 00:12:14.588 "write": true, 00:12:14.588 "unmap": true, 00:12:14.588 "flush": true, 00:12:14.588 "reset": true, 00:12:14.588 "nvme_admin": false, 00:12:14.588 "nvme_io": false, 00:12:14.588 "nvme_io_md": false, 00:12:14.588 "write_zeroes": true, 00:12:14.588 "zcopy": true, 00:12:14.588 "get_zone_info": false, 00:12:14.588 "zone_management": false, 00:12:14.588 "zone_append": false, 00:12:14.588 "compare": false, 00:12:14.588 "compare_and_write": false, 00:12:14.588 "abort": true, 00:12:14.588 "seek_hole": false, 00:12:14.588 "seek_data": false, 
00:12:14.588 "copy": true, 00:12:14.588 "nvme_iov_md": false 00:12:14.588 }, 00:12:14.588 "memory_domains": [ 00:12:14.588 { 00:12:14.588 "dma_device_id": "system", 00:12:14.588 "dma_device_type": 1 00:12:14.588 }, 00:12:14.588 { 00:12:14.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.588 "dma_device_type": 2 00:12:14.588 } 00:12:14.588 ], 00:12:14.588 "driver_specific": { 00:12:14.588 "passthru": { 00:12:14.588 "name": "pt2", 00:12:14.588 "base_bdev_name": "malloc2" 00:12:14.588 } 00:12:14.588 } 00:12:14.588 }' 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:14.588 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.846 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.846 "name": "pt3", 00:12:14.846 "aliases": [ 00:12:14.846 "00000000-0000-0000-0000-000000000003" 00:12:14.846 ], 00:12:14.846 "product_name": "passthru", 00:12:14.846 "block_size": 512, 00:12:14.846 "num_blocks": 65536, 00:12:14.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.846 "assigned_rate_limits": { 00:12:14.846 "rw_ios_per_sec": 0, 00:12:14.846 "rw_mbytes_per_sec": 0, 00:12:14.846 "r_mbytes_per_sec": 0, 00:12:14.846 "w_mbytes_per_sec": 0 00:12:14.846 }, 00:12:14.846 "claimed": true, 00:12:14.846 "claim_type": "exclusive_write", 00:12:14.846 "zoned": false, 00:12:14.846 "supported_io_types": { 00:12:14.846 "read": true, 00:12:14.846 "write": true, 00:12:14.846 "unmap": true, 00:12:14.846 "flush": true, 00:12:14.846 "reset": true, 00:12:14.846 "nvme_admin": false, 00:12:14.846 "nvme_io": false, 00:12:14.846 "nvme_io_md": false, 00:12:14.846 "write_zeroes": true, 00:12:14.846 "zcopy": true, 00:12:14.846 "get_zone_info": false, 00:12:14.846 "zone_management": false, 00:12:14.846 "zone_append": false, 00:12:14.846 "compare": false, 00:12:14.846 "compare_and_write": false, 00:12:14.846 "abort": true, 00:12:14.846 "seek_hole": false, 00:12:14.846 "seek_data": false, 00:12:14.846 "copy": true, 00:12:14.846 "nvme_iov_md": false 00:12:14.846 }, 00:12:14.846 "memory_domains": [ 00:12:14.846 { 00:12:14.846 "dma_device_id": 
"system", 00:12:14.846 "dma_device_type": 1 00:12:14.846 }, 00:12:14.846 { 00:12:14.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.846 "dma_device_type": 2 00:12:14.846 } 00:12:14.846 ], 00:12:14.846 "driver_specific": { 00:12:14.846 "passthru": { 00:12:14.846 "name": "pt3", 00:12:14.846 "base_bdev_name": "malloc3" 00:12:14.846 } 00:12:14.846 } 00:12:14.846 }' 00:12:14.846 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.846 21:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.846 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:15.104 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:15.362 [2024-07-15 21:48:30.311976] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.362 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f863eab1-42f3-11ef-9f7f-e9a656123a8b 00:12:15.362 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f863eab1-42f3-11ef-9f7f-e9a656123a8b ']' 00:12:15.362 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:15.362 [2024-07-15 21:48:30.547960] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.362 [2024-07-15 21:48:30.547981] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.362 [2024-07-15 21:48:30.548018] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.362 [2024-07-15 21:48:30.548034] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.362 [2024-07-15 21:48:30.548038] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca35400 name raid_bdev1, state offline 00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.620 21:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:15.877 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.877 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:16.135 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.135 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:16.701 21:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:16.959 [2024-07-15 21:48:32.039999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:16.959 [2024-07-15 21:48:32.040629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:16.959 [2024-07-15 21:48:32.040641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:16.959 
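
Note: the claim messages above open a deliberate negative test. raid_bdev1 was created with superblocks (-s) on the pt1-pt3 passthru bdevs, so those superblocks persist on the underlying malloc1-malloc3; building a new array directly on the malloc bdevs must therefore be rejected, as the "Superblock of a different raid bdev found" errors and the -17 (File exists) JSON-RPC response below confirm. The assertion as traced, where NOT is the autotest helper that inverts the exit status:

    # Must fail: malloc1-3 still carry raid_bdev1's on-disk superblock.
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
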
[2024-07-15 21:48:32.040654] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:16.959 [2024-07-15 21:48:32.040691] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:16.959 [2024-07-15 21:48:32.040702] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:16.959 [2024-07-15 21:48:32.040710] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.959 [2024-07-15 21:48:32.040714] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca35180 name raid_bdev1, state configuring 00:12:16.959 request: 00:12:16.959 { 00:12:16.959 "name": "raid_bdev1", 00:12:16.959 "raid_level": "raid1", 00:12:16.959 "base_bdevs": [ 00:12:16.959 "malloc1", 00:12:16.959 "malloc2", 00:12:16.959 "malloc3" 00:12:16.959 ], 00:12:16.959 "superblock": false, 00:12:16.959 "method": "bdev_raid_create", 00:12:16.959 "req_id": 1 00:12:16.959 } 00:12:16.959 Got JSON-RPC error response 00:12:16.959 response: 00:12:16.959 { 00:12:16.959 "code": -17, 00:12:16.959 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:16.959 } 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.959 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:17.217 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:17.217 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:17.217 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:17.475 [2024-07-15 21:48:32.512003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:17.475 [2024-07-15 21:48:32.512079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.475 [2024-07-15 21:48:32.512107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34c80 00:12:17.475 [2024-07-15 21:48:32.512114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.475 [2024-07-15 21:48:32.512797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.475 [2024-07-15 21:48:32.512823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.475 [2024-07-15 21:48:32.512847] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:17.475 [2024-07-15 21:48:32.512859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:17.475 pt1 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:17.475 
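
The -17 "File exists" response above is the point of this negative test: malloc1, malloc2 and malloc3 still carry the superblock of the just-deleted raid_bdev1, so bdev_raid_create must refuse to assemble a fresh array over them, and the suite's NOT wrapper asserts that the RPC fails. The same check, sketched with plain shell negation standing in for the NOT helper:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  if ! $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo "create rejected as expected: stale superblock on base bdevs"
  fi
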
21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.475 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.733 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.733 "name": "raid_bdev1", 00:12:17.733 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:17.733 "strip_size_kb": 0, 00:12:17.733 "state": "configuring", 00:12:17.733 "raid_level": "raid1", 00:12:17.733 "superblock": true, 00:12:17.733 "num_base_bdevs": 3, 00:12:17.733 "num_base_bdevs_discovered": 1, 00:12:17.733 "num_base_bdevs_operational": 3, 00:12:17.733 "base_bdevs_list": [ 00:12:17.733 { 00:12:17.733 "name": "pt1", 00:12:17.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.733 "is_configured": true, 00:12:17.734 "data_offset": 2048, 00:12:17.734 "data_size": 63488 00:12:17.734 }, 00:12:17.734 { 00:12:17.734 "name": null, 00:12:17.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.734 "is_configured": false, 00:12:17.734 "data_offset": 2048, 00:12:17.734 "data_size": 63488 00:12:17.734 }, 00:12:17.734 { 00:12:17.734 "name": null, 00:12:17.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.734 "is_configured": false, 00:12:17.734 "data_offset": 2048, 00:12:17.734 "data_size": 63488 00:12:17.734 } 00:12:17.734 ] 00:12:17.734 }' 00:12:17.734 21:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.734 21:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.991 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:17.992 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.250 [2024-07-15 21:48:33.312055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.250 [2024-07-15 21:48:33.312124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.250 [2024-07-15 21:48:33.312152] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca35680 00:12:18.250 [2024-07-15 21:48:33.312159] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.250 [2024-07-15 21:48:33.312269] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:18.250 [2024-07-15 21:48:33.312279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.250 [2024-07-15 21:48:33.312324] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.250 [2024-07-15 21:48:33.312332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.250 pt2 00:12:18.250 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:18.507 [2024-07-15 21:48:33.528125] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.507 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.765 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.765 "name": "raid_bdev1", 00:12:18.765 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:18.765 "strip_size_kb": 0, 00:12:18.765 "state": "configuring", 00:12:18.765 "raid_level": "raid1", 00:12:18.765 "superblock": true, 00:12:18.765 "num_base_bdevs": 3, 00:12:18.765 "num_base_bdevs_discovered": 1, 00:12:18.765 "num_base_bdevs_operational": 3, 00:12:18.765 "base_bdevs_list": [ 00:12:18.765 { 00:12:18.765 "name": "pt1", 00:12:18.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.765 "is_configured": true, 00:12:18.765 "data_offset": 2048, 00:12:18.765 "data_size": 63488 00:12:18.765 }, 00:12:18.765 { 00:12:18.765 "name": null, 00:12:18.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.765 "is_configured": false, 00:12:18.765 "data_offset": 2048, 00:12:18.765 "data_size": 63488 00:12:18.765 }, 00:12:18.765 { 00:12:18.765 "name": null, 00:12:18.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.765 "is_configured": false, 00:12:18.765 "data_offset": 2048, 00:12:18.765 "data_size": 63488 00:12:18.765 } 00:12:18.765 ] 00:12:18.765 }' 00:12:18.765 21:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.765 21:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.022 21:48:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:19.022 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:19.022 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.280 [2024-07-15 21:48:34.428184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.280 [2024-07-15 21:48:34.428234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.280 [2024-07-15 21:48:34.428260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca35680 00:12:19.280 [2024-07-15 21:48:34.428267] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.280 [2024-07-15 21:48:34.428372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.280 [2024-07-15 21:48:34.428399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.280 [2024-07-15 21:48:34.428421] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.280 [2024-07-15 21:48:34.428429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.280 pt2 00:12:19.280 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:19.280 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:19.280 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.537 [2024-07-15 21:48:34.684201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.537 [2024-07-15 21:48:34.684267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.537 [2024-07-15 21:48:34.684294] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca35400 00:12:19.537 [2024-07-15 21:48:34.684301] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.537 [2024-07-15 21:48:34.684438] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.537 [2024-07-15 21:48:34.684448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.537 [2024-07-15 21:48:34.684486] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:19.537 [2024-07-15 21:48:34.684501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.537 [2024-07-15 21:48:34.684540] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d17fca34780 00:12:19.537 [2024-07-15 21:48:34.684544] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.537 [2024-07-15 21:48:34.684565] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d17fca97e20 00:12:19.537 [2024-07-15 21:48:34.684625] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d17fca34780 00:12:19.537 [2024-07-15 21:48:34.684630] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d17fca34780 00:12:19.537 [2024-07-15 21:48:34.684651] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
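
Note that nothing above calls bdev_raid_create a second time: re-registering the passthru bdevs is enough, because the examine path finds the raid superblock on each of them, re-claims the bdev, and flips raid_bdev1 from configuring to online once the last member appears. That sequence, sketched under the same assumptions as the previous snippets (the UUIDs are the ones used throughout this run):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # examine finds the superblock on each pt bdev; no explicit create needed
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # -> online
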
00:12:19.537 pt3 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.537 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.103 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.103 "name": "raid_bdev1", 00:12:20.103 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:20.103 "strip_size_kb": 0, 00:12:20.103 "state": "online", 00:12:20.103 "raid_level": "raid1", 00:12:20.103 "superblock": true, 00:12:20.103 "num_base_bdevs": 3, 00:12:20.103 "num_base_bdevs_discovered": 3, 00:12:20.103 "num_base_bdevs_operational": 3, 00:12:20.103 "base_bdevs_list": [ 00:12:20.103 { 00:12:20.103 "name": "pt1", 00:12:20.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.103 "is_configured": true, 00:12:20.103 "data_offset": 2048, 00:12:20.103 "data_size": 63488 00:12:20.103 }, 00:12:20.103 { 00:12:20.103 "name": "pt2", 00:12:20.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.103 "is_configured": true, 00:12:20.103 "data_offset": 2048, 00:12:20.103 "data_size": 63488 00:12:20.103 }, 00:12:20.103 { 00:12:20.103 "name": "pt3", 00:12:20.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.103 "is_configured": true, 00:12:20.103 "data_offset": 2048, 00:12:20.103 "data_size": 63488 00:12:20.103 } 00:12:20.103 ] 00:12:20.103 }' 00:12:20.103 21:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.103 21:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:20.103 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:20.367 [2024-07-15 21:48:35.528282] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.367 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:20.367 "name": "raid_bdev1", 00:12:20.367 "aliases": [ 00:12:20.367 "f863eab1-42f3-11ef-9f7f-e9a656123a8b" 00:12:20.367 ], 00:12:20.367 "product_name": "Raid Volume", 00:12:20.367 "block_size": 512, 00:12:20.367 "num_blocks": 63488, 00:12:20.367 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:20.367 "assigned_rate_limits": { 00:12:20.367 "rw_ios_per_sec": 0, 00:12:20.367 "rw_mbytes_per_sec": 0, 00:12:20.367 "r_mbytes_per_sec": 0, 00:12:20.367 "w_mbytes_per_sec": 0 00:12:20.367 }, 00:12:20.367 "claimed": false, 00:12:20.367 "zoned": false, 00:12:20.367 "supported_io_types": { 00:12:20.367 "read": true, 00:12:20.367 "write": true, 00:12:20.367 "unmap": false, 00:12:20.367 "flush": false, 00:12:20.367 "reset": true, 00:12:20.367 "nvme_admin": false, 00:12:20.367 "nvme_io": false, 00:12:20.367 "nvme_io_md": false, 00:12:20.367 "write_zeroes": true, 00:12:20.367 "zcopy": false, 00:12:20.367 "get_zone_info": false, 00:12:20.367 "zone_management": false, 00:12:20.367 "zone_append": false, 00:12:20.367 "compare": false, 00:12:20.367 "compare_and_write": false, 00:12:20.367 "abort": false, 00:12:20.367 "seek_hole": false, 00:12:20.367 "seek_data": false, 00:12:20.367 "copy": false, 00:12:20.367 "nvme_iov_md": false 00:12:20.367 }, 00:12:20.367 "memory_domains": [ 00:12:20.367 { 00:12:20.367 "dma_device_id": "system", 00:12:20.367 "dma_device_type": 1 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.367 "dma_device_type": 2 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "system", 00:12:20.367 "dma_device_type": 1 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.367 "dma_device_type": 2 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "system", 00:12:20.367 "dma_device_type": 1 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.367 "dma_device_type": 2 00:12:20.367 } 00:12:20.367 ], 00:12:20.367 "driver_specific": { 00:12:20.367 "raid": { 00:12:20.367 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:20.367 "strip_size_kb": 0, 00:12:20.367 "state": "online", 00:12:20.367 "raid_level": "raid1", 00:12:20.367 "superblock": true, 00:12:20.367 "num_base_bdevs": 3, 00:12:20.367 "num_base_bdevs_discovered": 3, 00:12:20.367 "num_base_bdevs_operational": 3, 00:12:20.367 "base_bdevs_list": [ 00:12:20.367 { 00:12:20.367 "name": "pt1", 00:12:20.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.367 "is_configured": true, 00:12:20.367 "data_offset": 2048, 00:12:20.367 "data_size": 63488 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "name": "pt2", 00:12:20.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.367 "is_configured": true, 00:12:20.367 "data_offset": 2048, 00:12:20.367 "data_size": 63488 00:12:20.368 }, 00:12:20.368 { 00:12:20.368 "name": "pt3", 00:12:20.368 "uuid": "00000000-0000-0000-0000-000000000003", 
00:12:20.368 "is_configured": true, 00:12:20.368 "data_offset": 2048, 00:12:20.368 "data_size": 63488 00:12:20.368 } 00:12:20.368 ] 00:12:20.368 } 00:12:20.368 } 00:12:20.368 }' 00:12:20.368 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.368 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:20.368 pt2 00:12:20.368 pt3' 00:12:20.368 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:20.630 "name": "pt1", 00:12:20.630 "aliases": [ 00:12:20.630 "00000000-0000-0000-0000-000000000001" 00:12:20.630 ], 00:12:20.630 "product_name": "passthru", 00:12:20.630 "block_size": 512, 00:12:20.630 "num_blocks": 65536, 00:12:20.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.630 "assigned_rate_limits": { 00:12:20.630 "rw_ios_per_sec": 0, 00:12:20.630 "rw_mbytes_per_sec": 0, 00:12:20.630 "r_mbytes_per_sec": 0, 00:12:20.630 "w_mbytes_per_sec": 0 00:12:20.630 }, 00:12:20.630 "claimed": true, 00:12:20.630 "claim_type": "exclusive_write", 00:12:20.630 "zoned": false, 00:12:20.630 "supported_io_types": { 00:12:20.630 "read": true, 00:12:20.630 "write": true, 00:12:20.630 "unmap": true, 00:12:20.630 "flush": true, 00:12:20.630 "reset": true, 00:12:20.630 "nvme_admin": false, 00:12:20.630 "nvme_io": false, 00:12:20.630 "nvme_io_md": false, 00:12:20.630 "write_zeroes": true, 00:12:20.630 "zcopy": true, 00:12:20.630 "get_zone_info": false, 00:12:20.630 "zone_management": false, 00:12:20.630 "zone_append": false, 00:12:20.630 "compare": false, 00:12:20.630 "compare_and_write": false, 00:12:20.630 "abort": true, 00:12:20.630 "seek_hole": false, 00:12:20.630 "seek_data": false, 00:12:20.630 "copy": true, 00:12:20.630 "nvme_iov_md": false 00:12:20.630 }, 00:12:20.630 "memory_domains": [ 00:12:20.630 { 00:12:20.630 "dma_device_id": "system", 00:12:20.630 "dma_device_type": 1 00:12:20.630 }, 00:12:20.630 { 00:12:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.630 "dma_device_type": 2 00:12:20.630 } 00:12:20.630 ], 00:12:20.630 "driver_specific": { 00:12:20.630 "passthru": { 00:12:20.630 "name": "pt1", 00:12:20.630 "base_bdev_name": "malloc1" 00:12:20.630 } 00:12:20.630 } 00:12:20.630 }' 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.630 21:48:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:20.888 21:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:20.888 "name": "pt2", 00:12:20.888 "aliases": [ 00:12:20.888 "00000000-0000-0000-0000-000000000002" 00:12:20.888 ], 00:12:20.888 "product_name": "passthru", 00:12:20.888 "block_size": 512, 00:12:20.888 "num_blocks": 65536, 00:12:20.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.888 "assigned_rate_limits": { 00:12:20.888 "rw_ios_per_sec": 0, 00:12:20.888 "rw_mbytes_per_sec": 0, 00:12:20.888 "r_mbytes_per_sec": 0, 00:12:20.888 "w_mbytes_per_sec": 0 00:12:20.888 }, 00:12:20.888 "claimed": true, 00:12:20.888 "claim_type": "exclusive_write", 00:12:20.888 "zoned": false, 00:12:20.888 "supported_io_types": { 00:12:20.888 "read": true, 00:12:20.888 "write": true, 00:12:20.888 "unmap": true, 00:12:20.888 "flush": true, 00:12:20.888 "reset": true, 00:12:20.888 "nvme_admin": false, 00:12:20.888 "nvme_io": false, 00:12:20.888 "nvme_io_md": false, 00:12:20.888 "write_zeroes": true, 00:12:20.888 "zcopy": true, 00:12:20.888 "get_zone_info": false, 00:12:20.888 "zone_management": false, 00:12:20.888 "zone_append": false, 00:12:20.888 "compare": false, 00:12:20.888 "compare_and_write": false, 00:12:20.888 "abort": true, 00:12:20.888 "seek_hole": false, 00:12:20.888 "seek_data": false, 00:12:20.888 "copy": true, 00:12:20.888 "nvme_iov_md": false 00:12:20.888 }, 00:12:20.888 "memory_domains": [ 00:12:20.888 { 00:12:20.888 "dma_device_id": "system", 00:12:20.888 "dma_device_type": 1 00:12:20.888 }, 00:12:20.888 { 00:12:20.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.888 "dma_device_type": 2 00:12:20.888 } 00:12:20.888 ], 00:12:20.888 "driver_specific": { 00:12:20.888 "passthru": { 00:12:20.888 "name": "pt2", 00:12:20.888 "base_bdev_name": "malloc2" 00:12:20.888 } 00:12:20.888 } 00:12:20.888 }' 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.888 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.146 
21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:21.146 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:21.404 "name": "pt3", 00:12:21.404 "aliases": [ 00:12:21.404 "00000000-0000-0000-0000-000000000003" 00:12:21.404 ], 00:12:21.404 "product_name": "passthru", 00:12:21.404 "block_size": 512, 00:12:21.404 "num_blocks": 65536, 00:12:21.404 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.404 "assigned_rate_limits": { 00:12:21.404 "rw_ios_per_sec": 0, 00:12:21.404 "rw_mbytes_per_sec": 0, 00:12:21.404 "r_mbytes_per_sec": 0, 00:12:21.404 "w_mbytes_per_sec": 0 00:12:21.404 }, 00:12:21.404 "claimed": true, 00:12:21.404 "claim_type": "exclusive_write", 00:12:21.404 "zoned": false, 00:12:21.404 "supported_io_types": { 00:12:21.404 "read": true, 00:12:21.404 "write": true, 00:12:21.404 "unmap": true, 00:12:21.404 "flush": true, 00:12:21.404 "reset": true, 00:12:21.404 "nvme_admin": false, 00:12:21.404 "nvme_io": false, 00:12:21.404 "nvme_io_md": false, 00:12:21.404 "write_zeroes": true, 00:12:21.404 "zcopy": true, 00:12:21.404 "get_zone_info": false, 00:12:21.404 "zone_management": false, 00:12:21.404 "zone_append": false, 00:12:21.404 "compare": false, 00:12:21.404 "compare_and_write": false, 00:12:21.404 "abort": true, 00:12:21.404 "seek_hole": false, 00:12:21.404 "seek_data": false, 00:12:21.404 "copy": true, 00:12:21.404 "nvme_iov_md": false 00:12:21.404 }, 00:12:21.404 "memory_domains": [ 00:12:21.404 { 00:12:21.404 "dma_device_id": "system", 00:12:21.404 "dma_device_type": 1 00:12:21.404 }, 00:12:21.404 { 00:12:21.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.404 "dma_device_type": 2 00:12:21.404 } 00:12:21.404 ], 00:12:21.404 "driver_specific": { 00:12:21.404 "passthru": { 00:12:21.404 "name": "pt3", 00:12:21.404 "base_bdev_name": "malloc3" 00:12:21.404 } 00:12:21.404 } 00:12:21.404 }' 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:21.404 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:21.663 [2024-07-15 21:48:36.716325] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.663 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f863eab1-42f3-11ef-9f7f-e9a656123a8b '!=' f863eab1-42f3-11ef-9f7f-e9a656123a8b ']' 00:12:21.663 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:21.663 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:21.663 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:21.663 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:21.922 [2024-07-15 21:48:36.932298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.922 21:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.193 21:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.193 "name": "raid_bdev1", 00:12:22.193 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:22.193 "strip_size_kb": 0, 00:12:22.193 "state": "online", 00:12:22.193 "raid_level": "raid1", 00:12:22.193 "superblock": true, 00:12:22.193 "num_base_bdevs": 3, 00:12:22.193 "num_base_bdevs_discovered": 2, 00:12:22.193 "num_base_bdevs_operational": 2, 00:12:22.193 "base_bdevs_list": [ 00:12:22.193 { 00:12:22.193 "name": null, 00:12:22.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.193 "is_configured": false, 00:12:22.193 "data_offset": 2048, 00:12:22.193 "data_size": 63488 00:12:22.193 }, 00:12:22.193 { 00:12:22.193 "name": "pt2", 00:12:22.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.193 "is_configured": true, 00:12:22.193 "data_offset": 2048, 00:12:22.193 "data_size": 63488 00:12:22.193 }, 00:12:22.193 { 
00:12:22.193 "name": "pt3", 00:12:22.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.193 "is_configured": true, 00:12:22.193 "data_offset": 2048, 00:12:22.193 "data_size": 63488 00:12:22.193 } 00:12:22.193 ] 00:12:22.193 }' 00:12:22.193 21:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.193 21:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 21:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:22.725 [2024-07-15 21:48:37.740331] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.725 [2024-07-15 21:48:37.740356] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.725 [2024-07-15 21:48:37.740393] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.725 [2024-07-15 21:48:37.740407] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.725 [2024-07-15 21:48:37.740412] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca34780 name raid_bdev1, state offline 00:12:22.725 21:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:22.725 21:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.983 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:22.983 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:22.983 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:22.983 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:22.983 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:23.241 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:23.241 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:23.241 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:23.500 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:23.500 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:23.500 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:23.500 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:23.501 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.501 [2024-07-15 21:48:38.672364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.501 [2024-07-15 21:48:38.672435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.501 [2024-07-15 21:48:38.672462] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca35400 00:12:23.501 [2024-07-15 
21:48:38.672469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.501 [2024-07-15 21:48:38.673160] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.501 [2024-07-15 21:48:38.673185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.501 [2024-07-15 21:48:38.673209] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.501 [2024-07-15 21:48:38.673221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.501 pt2 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.760 "name": "raid_bdev1", 00:12:23.760 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:23.760 "strip_size_kb": 0, 00:12:23.760 "state": "configuring", 00:12:23.760 "raid_level": "raid1", 00:12:23.760 "superblock": true, 00:12:23.760 "num_base_bdevs": 3, 00:12:23.760 "num_base_bdevs_discovered": 1, 00:12:23.760 "num_base_bdevs_operational": 2, 00:12:23.760 "base_bdevs_list": [ 00:12:23.760 { 00:12:23.760 "name": null, 00:12:23.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.760 "is_configured": false, 00:12:23.760 "data_offset": 2048, 00:12:23.760 "data_size": 63488 00:12:23.760 }, 00:12:23.760 { 00:12:23.760 "name": "pt2", 00:12:23.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.760 "is_configured": true, 00:12:23.760 "data_offset": 2048, 00:12:23.760 "data_size": 63488 00:12:23.760 }, 00:12:23.760 { 00:12:23.760 "name": null, 00:12:23.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.760 "is_configured": false, 00:12:23.760 "data_offset": 2048, 00:12:23.760 "data_size": 63488 00:12:23.760 } 00:12:23.760 ] 00:12:23.760 }' 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.760 21:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.326 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:24.326 21:48:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:24.326 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:24.326 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.327 [2024-07-15 21:48:39.500393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.327 [2024-07-15 21:48:39.500490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.327 [2024-07-15 21:48:39.500518] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34780 00:12:24.327 [2024-07-15 21:48:39.500536] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.327 [2024-07-15 21:48:39.500644] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.327 [2024-07-15 21:48:39.500671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.327 [2024-07-15 21:48:39.500695] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.327 [2024-07-15 21:48:39.500705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.327 [2024-07-15 21:48:39.500732] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d17fca35180 00:12:24.327 [2024-07-15 21:48:39.500736] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.327 [2024-07-15 21:48:39.500759] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d17fca97e20 00:12:24.327 [2024-07-15 21:48:39.500805] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d17fca35180 00:12:24.327 [2024-07-15 21:48:39.500810] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d17fca35180 00:12:24.327 [2024-07-15 21:48:39.500830] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.327 pt3 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.585 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
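
The JSON dump that follows is what this verify step consumes: only pt2 and pt3 were re-created after the teardown, so a raid1 volume declared with num_base_bdevs 3 comes back online degraded, with just two members discovered and operational. The helper's assertions reduce to roughly this sketch:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state <<< "$tmp") == online ]]
  [[ $(jq -r .raid_level <<< "$tmp") == raid1 ]]
  [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") == 2 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 2 ]]
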
00:12:24.843 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.843 "name": "raid_bdev1", 00:12:24.843 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:24.843 "strip_size_kb": 0, 00:12:24.843 "state": "online", 00:12:24.843 "raid_level": "raid1", 00:12:24.843 "superblock": true, 00:12:24.843 "num_base_bdevs": 3, 00:12:24.843 "num_base_bdevs_discovered": 2, 00:12:24.843 "num_base_bdevs_operational": 2, 00:12:24.843 "base_bdevs_list": [ 00:12:24.843 { 00:12:24.843 "name": null, 00:12:24.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.843 "is_configured": false, 00:12:24.843 "data_offset": 2048, 00:12:24.843 "data_size": 63488 00:12:24.843 }, 00:12:24.843 { 00:12:24.843 "name": "pt2", 00:12:24.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.843 "is_configured": true, 00:12:24.843 "data_offset": 2048, 00:12:24.843 "data_size": 63488 00:12:24.843 }, 00:12:24.843 { 00:12:24.843 "name": "pt3", 00:12:24.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.843 "is_configured": true, 00:12:24.843 "data_offset": 2048, 00:12:24.843 "data_size": 63488 00:12:24.843 } 00:12:24.843 ] 00:12:24.843 }' 00:12:24.843 21:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.843 21:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.101 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:25.101 [2024-07-15 21:48:40.276401] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.101 [2024-07-15 21:48:40.276425] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.101 [2024-07-15 21:48:40.276478] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.101 [2024-07-15 21:48:40.276491] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.101 [2024-07-15 21:48:40.276495] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca35180 name raid_bdev1, state offline 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:12:25.360 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:25.618 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.876 [2024-07-15 21:48:40.936410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.876 [2024-07-15 21:48:40.936505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.876 
[2024-07-15 21:48:40.936522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34780 00:12:25.876 [2024-07-15 21:48:40.936529] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.876 [2024-07-15 21:48:40.937235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.877 [2024-07-15 21:48:40.937260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.877 [2024-07-15 21:48:40.937284] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:25.877 [2024-07-15 21:48:40.937294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.877 [2024-07-15 21:48:40.937354] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:25.877 [2024-07-15 21:48:40.937358] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.877 [2024-07-15 21:48:40.937363] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca35180 name raid_bdev1, state configuring 00:12:25.877 [2024-07-15 21:48:40.937370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.877 pt1 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.877 21:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.135 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.135 "name": "raid_bdev1", 00:12:26.135 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:26.135 "strip_size_kb": 0, 00:12:26.135 "state": "configuring", 00:12:26.135 "raid_level": "raid1", 00:12:26.135 "superblock": true, 00:12:26.135 "num_base_bdevs": 3, 00:12:26.135 "num_base_bdevs_discovered": 1, 00:12:26.135 "num_base_bdevs_operational": 2, 00:12:26.135 "base_bdevs_list": [ 00:12:26.135 { 00:12:26.135 "name": null, 00:12:26.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.135 "is_configured": false, 00:12:26.135 "data_offset": 2048, 00:12:26.135 "data_size": 63488 00:12:26.135 }, 
00:12:26.135 { 00:12:26.135 "name": "pt2", 00:12:26.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.135 "is_configured": true, 00:12:26.135 "data_offset": 2048, 00:12:26.135 "data_size": 63488 00:12:26.135 }, 00:12:26.135 { 00:12:26.135 "name": null, 00:12:26.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.135 "is_configured": false, 00:12:26.135 "data_offset": 2048, 00:12:26.135 "data_size": 63488 00:12:26.135 } 00:12:26.135 ] 00:12:26.135 }' 00:12:26.135 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.135 21:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.395 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:26.395 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:26.654 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:12:26.654 21:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.914 [2024-07-15 21:48:42.008426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.914 [2024-07-15 21:48:42.008491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.914 [2024-07-15 21:48:42.008518] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d17fca34c80 00:12:26.914 [2024-07-15 21:48:42.008524] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.914 [2024-07-15 21:48:42.008628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.914 [2024-07-15 21:48:42.008638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.914 [2024-07-15 21:48:42.008707] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.914 [2024-07-15 21:48:42.008715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.914 [2024-07-15 21:48:42.008742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d17fca35180 00:12:26.914 [2024-07-15 21:48:42.008746] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.914 [2024-07-15 21:48:42.008765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d17fca97e20 00:12:26.914 [2024-07-15 21:48:42.008825] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d17fca35180 00:12:26.914 [2024-07-15 21:48:42.008829] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d17fca35180 00:12:26.914 [2024-07-15 21:48:42.008849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.914 pt3 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.914 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.174 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.174 "name": "raid_bdev1", 00:12:27.174 "uuid": "f863eab1-42f3-11ef-9f7f-e9a656123a8b", 00:12:27.174 "strip_size_kb": 0, 00:12:27.174 "state": "online", 00:12:27.174 "raid_level": "raid1", 00:12:27.174 "superblock": true, 00:12:27.174 "num_base_bdevs": 3, 00:12:27.174 "num_base_bdevs_discovered": 2, 00:12:27.174 "num_base_bdevs_operational": 2, 00:12:27.174 "base_bdevs_list": [ 00:12:27.174 { 00:12:27.174 "name": null, 00:12:27.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.174 "is_configured": false, 00:12:27.174 "data_offset": 2048, 00:12:27.174 "data_size": 63488 00:12:27.174 }, 00:12:27.174 { 00:12:27.174 "name": "pt2", 00:12:27.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.174 "is_configured": true, 00:12:27.174 "data_offset": 2048, 00:12:27.174 "data_size": 63488 00:12:27.174 }, 00:12:27.174 { 00:12:27.174 "name": "pt3", 00:12:27.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.174 "is_configured": true, 00:12:27.174 "data_offset": 2048, 00:12:27.174 "data_size": 63488 00:12:27.174 } 00:12:27.174 ] 00:12:27.174 }' 00:12:27.174 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.174 21:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.433 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:27.433 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:27.693 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:27.693 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:27.693 21:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:27.967 [2024-07-15 21:48:43.020551] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' f863eab1-42f3-11ef-9f7f-e9a656123a8b '!=' f863eab1-42f3-11ef-9f7f-e9a656123a8b ']' 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57549 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 57549 ']' 00:12:27.967 21:48:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 57549 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 57549 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:12:27.967 killing process with pid 57549 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 57549' 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 57549 00:12:27.967 [2024-07-15 21:48:43.049738] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.967 [2024-07-15 21:48:43.049788] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.967 [2024-07-15 21:48:43.049803] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.967 [2024-07-15 21:48:43.049808] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d17fca35180 name raid_bdev1, state offline 00:12:27.967 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 57549 00:12:27.967 [2024-07-15 21:48:43.067628] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.235 21:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:28.235 00:12:28.235 real 0m17.906s 00:12:28.235 user 0m32.279s 00:12:28.235 sys 0m2.741s 00:12:28.235 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:28.235 ************************************ 00:12:28.235 END TEST raid_superblock_test 00:12:28.235 ************************************ 00:12:28.235 21:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.235 21:48:43 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:12:28.235 21:48:43 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:28.235 21:48:43 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:12:28.235 21:48:43 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:28.235 21:48:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.235 ************************************ 00:12:28.235 START TEST raid_read_error_test 00:12:28.235 ************************************ 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 3 read 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:28.235 21:48:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5FFNVkK27s 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58099 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58099 /var/tmp/spdk-raid.sock 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 58099 ']' 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:28.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:28.235 21:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.235 [2024-07-15 21:48:43.297472] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
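[Editor's note] The read-error variant drives its checks through bdevperf instead of bdev_svc: the app is started paused (-z) against raid_bdev1 with a 60-second, queue-depth-1, 128k random 50/50 read/write profile, and the workload only begins once perform_tests is sent over the RPC socket (visible further down via bdevperf.py). A minimal sketch of the launch-and-wait pattern traced above, assuming the paths from this job's workspace; waitforlisten is the helper from autotest_common.sh:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bdevperf_log=$(mktemp -p /raidtest)
    # Start bdevperf paused (-z); it waits for the perform_tests RPC.
    $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    # Block until the app is up and listening on the RPC socket.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock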
00:12:28.235 [2024-07-15 21:48:43.297672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:28.802 EAL: TSC is not safe to use in SMP mode 00:12:28.802 EAL: TSC is not invariant 00:12:28.802 [2024-07-15 21:48:43.841311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.802 [2024-07-15 21:48:43.923246] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:28.802 [2024-07-15 21:48:43.925389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.802 [2024-07-15 21:48:43.926222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.802 [2024-07-15 21:48:43.926235] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.370 21:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:29.370 21:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:12:29.370 21:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:29.370 21:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.629 BaseBdev1_malloc 00:12:29.629 21:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:29.887 true 00:12:29.887 21:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.887 [2024-07-15 21:48:45.053830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.887 [2024-07-15 21:48:45.053919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.887 [2024-07-15 21:48:45.053945] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b80e9034780 00:12:29.887 [2024-07-15 21:48:45.053954] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.887 [2024-07-15 21:48:45.054622] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.887 [2024-07-15 21:48:45.054646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.887 BaseBdev1 00:12:29.887 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:29.888 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.146 BaseBdev2_malloc 00:12:30.146 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:30.405 true 00:12:30.405 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:30.663 [2024-07-15 21:48:45.785837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:30.663 [2024-07-15 21:48:45.785896] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.663 [2024-07-15 21:48:45.785934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b80e9034c80 00:12:30.663 [2024-07-15 21:48:45.785943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.663 [2024-07-15 21:48:45.786460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.663 [2024-07-15 21:48:45.786486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.663 BaseBdev2 00:12:30.663 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:30.664 21:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:30.922 BaseBdev3_malloc 00:12:30.922 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:31.181 true 00:12:31.181 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:31.439 [2024-07-15 21:48:46.561856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:31.439 [2024-07-15 21:48:46.561931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.439 [2024-07-15 21:48:46.561969] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b80e9035180 00:12:31.439 [2024-07-15 21:48:46.561977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.439 [2024-07-15 21:48:46.562733] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.439 [2024-07-15 21:48:46.562759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:31.439 BaseBdev3 00:12:31.439 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:31.698 [2024-07-15 21:48:46.773888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.698 [2024-07-15 21:48:46.774545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.698 [2024-07-15 21:48:46.774569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.698 [2024-07-15 21:48:46.774625] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b80e9035400 00:12:31.698 [2024-07-15 21:48:46.774631] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.698 [2024-07-15 21:48:46.774660] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b80e90a0e20 00:12:31.698 [2024-07-15 21:48:46.774733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b80e9035400 00:12:31.698 [2024-07-15 21:48:46.774737] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b80e9035400 00:12:31.698 [2024-07-15 21:48:46.774762] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.699 21:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.957 21:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.957 "name": "raid_bdev1", 00:12:31.957 "uuid": "035e4873-42f4-11ef-9f7f-e9a656123a8b", 00:12:31.957 "strip_size_kb": 0, 00:12:31.957 "state": "online", 00:12:31.957 "raid_level": "raid1", 00:12:31.957 "superblock": true, 00:12:31.957 "num_base_bdevs": 3, 00:12:31.957 "num_base_bdevs_discovered": 3, 00:12:31.957 "num_base_bdevs_operational": 3, 00:12:31.957 "base_bdevs_list": [ 00:12:31.957 { 00:12:31.957 "name": "BaseBdev1", 00:12:31.957 "uuid": "cbc086ab-b51c-f15a-b5e3-3f69525079dd", 00:12:31.957 "is_configured": true, 00:12:31.957 "data_offset": 2048, 00:12:31.957 "data_size": 63488 00:12:31.957 }, 00:12:31.957 { 00:12:31.957 "name": "BaseBdev2", 00:12:31.957 "uuid": "3f7aad44-3293-ea5a-978c-e2fee52080d1", 00:12:31.957 "is_configured": true, 00:12:31.957 "data_offset": 2048, 00:12:31.957 "data_size": 63488 00:12:31.957 }, 00:12:31.957 { 00:12:31.957 "name": "BaseBdev3", 00:12:31.957 "uuid": "08886596-4279-e159-8739-55b27316952a", 00:12:31.957 "is_configured": true, 00:12:31.957 "data_offset": 2048, 00:12:31.957 "data_size": 63488 00:12:31.957 } 00:12:31.957 ] 00:12:31.957 }' 00:12:31.957 21:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.957 21:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.217 21:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:32.217 21:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:32.476 [2024-07-15 21:48:47.466137] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b80e90a0ec0 00:12:33.417 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.675 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.676 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.676 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.676 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.935 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.935 "name": "raid_bdev1", 00:12:33.935 "uuid": "035e4873-42f4-11ef-9f7f-e9a656123a8b", 00:12:33.935 "strip_size_kb": 0, 00:12:33.935 "state": "online", 00:12:33.935 "raid_level": "raid1", 00:12:33.935 "superblock": true, 00:12:33.935 "num_base_bdevs": 3, 00:12:33.935 "num_base_bdevs_discovered": 3, 00:12:33.935 "num_base_bdevs_operational": 3, 00:12:33.935 "base_bdevs_list": [ 00:12:33.935 { 00:12:33.935 "name": "BaseBdev1", 00:12:33.935 "uuid": "cbc086ab-b51c-f15a-b5e3-3f69525079dd", 00:12:33.935 "is_configured": true, 00:12:33.935 "data_offset": 2048, 00:12:33.935 "data_size": 63488 00:12:33.935 }, 00:12:33.935 { 00:12:33.935 "name": "BaseBdev2", 00:12:33.935 "uuid": "3f7aad44-3293-ea5a-978c-e2fee52080d1", 00:12:33.935 "is_configured": true, 00:12:33.935 "data_offset": 2048, 00:12:33.935 "data_size": 63488 00:12:33.935 }, 00:12:33.935 { 00:12:33.935 "name": "BaseBdev3", 00:12:33.935 "uuid": "08886596-4279-e159-8739-55b27316952a", 00:12:33.935 "is_configured": true, 00:12:33.935 "data_offset": 2048, 00:12:33.935 "data_size": 63488 00:12:33.935 } 00:12:33.935 ] 00:12:33.935 }' 00:12:33.935 21:48:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.935 21:48:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.194 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:34.453 [2024-07-15 21:48:49.479233] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.453 [2024-07-15 21:48:49.479263] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.453 [2024-07-15 21:48:49.479597] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.453 [2024-07-15 21:48:49.479618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.453 [2024-07-15 21:48:49.479632] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.453 [2024-07-15 21:48:49.479648] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b80e9035400 name raid_bdev1, state offline 00:12:34.453 0 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58099 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 58099 ']' 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 58099 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 58099 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:12:34.453 killing process with pid 58099 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 58099' 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 58099 00:12:34.453 [2024-07-15 21:48:49.506911] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.453 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 58099 00:12:34.453 [2024-07-15 21:48:49.524861] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5FFNVkK27s 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:34.725 00:12:34.725 real 0m6.427s 00:12:34.725 user 0m10.076s 00:12:34.725 sys 0m1.045s 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:34.725 21:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 ************************************ 00:12:34.725 END TEST raid_read_error_test 00:12:34.725 ************************************ 00:12:34.725 21:48:49 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:12:34.725 21:48:49 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:34.725 21:48:49 bdev_raid -- 
common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:12:34.725 21:48:49 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:34.725 21:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 ************************************ 00:12:34.725 START TEST raid_write_error_test 00:12:34.725 ************************************ 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 3 write 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.q6NjNbPGoR 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58230 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58230 /var/tmp/spdk-raid.sock 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 58230 ']' 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:34.725 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:34.726 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:34.726 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:34.726 21:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.726 [2024-07-15 21:48:49.776933] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:34.726 [2024-07-15 21:48:49.777196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:35.300 EAL: TSC is not safe to use in SMP mode 00:12:35.300 EAL: TSC is not invariant 00:12:35.300 [2024-07-15 21:48:50.328094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.300 [2024-07-15 21:48:50.403402] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:35.300 [2024-07-15 21:48:50.405860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.300 [2024-07-15 21:48:50.406776] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.300 [2024-07-15 21:48:50.406787] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.867 21:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:35.867 21:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:12:35.867 21:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:35.867 21:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.867 BaseBdev1_malloc 00:12:35.867 21:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:36.126 true 00:12:36.126 21:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:36.385 [2024-07-15 21:48:51.521643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:36.385 [2024-07-15 21:48:51.521726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.385 [2024-07-15 21:48:51.521779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3a7c5d434780 00:12:36.385 [2024-07-15 21:48:51.521792] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.385 [2024-07-15 21:48:51.522512] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.385 [2024-07-15 21:48:51.522572] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.385 BaseBdev1 00:12:36.385 21:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:36.385 21:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.643 BaseBdev2_malloc 00:12:36.643 21:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:36.902 true 00:12:36.902 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:37.161 [2024-07-15 21:48:52.221639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:37.161 [2024-07-15 21:48:52.221718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.161 [2024-07-15 21:48:52.221807] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3a7c5d434c80 00:12:37.161 [2024-07-15 21:48:52.221816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.161 [2024-07-15 21:48:52.222521] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.161 [2024-07-15 21:48:52.222544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.161 BaseBdev2 00:12:37.161 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:37.162 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:37.421 BaseBdev3_malloc 00:12:37.421 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:37.679 true 00:12:37.679 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:37.938 [2024-07-15 21:48:52.917668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:37.938 [2024-07-15 21:48:52.917729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.938 [2024-07-15 21:48:52.917794] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3a7c5d435180 00:12:37.938 [2024-07-15 21:48:52.917803] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.938 [2024-07-15 21:48:52.918476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.938 [2024-07-15 21:48:52.918516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:37.938 BaseBdev3 00:12:37.938 21:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:38.206 [2024-07-15 21:48:53.169706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.206 [2024-07-15 21:48:53.170430] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.206 [2024-07-15 21:48:53.170454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.206 [2024-07-15 21:48:53.170523] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3a7c5d435400 00:12:38.206 [2024-07-15 21:48:53.170528] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.206 [2024-07-15 21:48:53.170561] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3a7c5d4a0e20 00:12:38.206 [2024-07-15 21:48:53.170702] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3a7c5d435400 00:12:38.206 [2024-07-15 21:48:53.170706] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3a7c5d435400 00:12:38.206 [2024-07-15 21:48:53.170730] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.206 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.484 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.484 "name": "raid_bdev1", 00:12:38.484 "uuid": "072e3567-42f4-11ef-9f7f-e9a656123a8b", 00:12:38.484 "strip_size_kb": 0, 00:12:38.484 "state": "online", 00:12:38.484 "raid_level": "raid1", 00:12:38.484 "superblock": true, 00:12:38.484 "num_base_bdevs": 3, 00:12:38.484 "num_base_bdevs_discovered": 3, 00:12:38.484 "num_base_bdevs_operational": 3, 00:12:38.484 "base_bdevs_list": [ 00:12:38.484 { 00:12:38.484 "name": "BaseBdev1", 00:12:38.484 "uuid": "c07a219b-1e2b-6353-a3e9-d1176524637c", 00:12:38.484 "is_configured": true, 00:12:38.484 "data_offset": 2048, 00:12:38.484 "data_size": 63488 00:12:38.484 }, 00:12:38.484 { 00:12:38.484 "name": "BaseBdev2", 00:12:38.484 "uuid": "e6a07e8c-3aab-3f5c-a554-65eb294564ad", 00:12:38.484 "is_configured": true, 00:12:38.484 "data_offset": 2048, 00:12:38.484 "data_size": 63488 00:12:38.484 }, 00:12:38.484 { 00:12:38.484 "name": "BaseBdev3", 00:12:38.484 "uuid": "d9f78567-eb52-d858-ae2c-169fdf911553", 00:12:38.484 "is_configured": true, 00:12:38.484 "data_offset": 2048, 00:12:38.484 
"data_size": 63488 00:12:38.484 } 00:12:38.484 ] 00:12:38.484 }' 00:12:38.484 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.484 21:48:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:38.742 21:48:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:38.742 [2024-07-15 21:48:53.781972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3a7c5d4a0ec0 00:12:39.676 21:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:39.933 [2024-07-15 21:48:55.006603] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:39.933 [2024-07-15 21:48:55.006670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.933 [2024-07-15 21:48:55.006797] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x3a7c5d4a0ec0 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.933 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.191 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.191 "name": "raid_bdev1", 00:12:40.191 "uuid": "072e3567-42f4-11ef-9f7f-e9a656123a8b", 00:12:40.191 "strip_size_kb": 0, 00:12:40.191 "state": "online", 00:12:40.191 "raid_level": "raid1", 00:12:40.191 "superblock": true, 00:12:40.191 "num_base_bdevs": 3, 00:12:40.191 
"num_base_bdevs_discovered": 2, 00:12:40.191 "num_base_bdevs_operational": 2, 00:12:40.191 "base_bdevs_list": [ 00:12:40.191 { 00:12:40.191 "name": null, 00:12:40.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.191 "is_configured": false, 00:12:40.191 "data_offset": 2048, 00:12:40.191 "data_size": 63488 00:12:40.191 }, 00:12:40.191 { 00:12:40.192 "name": "BaseBdev2", 00:12:40.192 "uuid": "e6a07e8c-3aab-3f5c-a554-65eb294564ad", 00:12:40.192 "is_configured": true, 00:12:40.192 "data_offset": 2048, 00:12:40.192 "data_size": 63488 00:12:40.192 }, 00:12:40.192 { 00:12:40.192 "name": "BaseBdev3", 00:12:40.192 "uuid": "d9f78567-eb52-d858-ae2c-169fdf911553", 00:12:40.192 "is_configured": true, 00:12:40.192 "data_offset": 2048, 00:12:40.192 "data_size": 63488 00:12:40.192 } 00:12:40.192 ] 00:12:40.192 }' 00:12:40.192 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.192 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:40.708 [2024-07-15 21:48:55.829294] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.708 [2024-07-15 21:48:55.829319] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.708 [2024-07-15 21:48:55.829641] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.708 [2024-07-15 21:48:55.829650] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.708 [2024-07-15 21:48:55.829661] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.708 [2024-07-15 21:48:55.829665] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3a7c5d435400 name raid_bdev1, state offline 00:12:40.708 0 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58230 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 58230 ']' 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 58230 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 58230 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:12:40.708 killing process with pid 58230 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 58230' 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 58230 00:12:40.708 [2024-07-15 21:48:55.857893] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.708 21:48:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 58230 00:12:40.708 [2024-07-15 21:48:55.874456] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.q6NjNbPGoR 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:40.966 21:48:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:40.966 00:12:40.966 real 0m6.298s 00:12:40.967 user 0m9.694s 00:12:40.967 sys 0m1.120s 00:12:40.967 21:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:40.967 21:48:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 ************************************ 00:12:40.967 END TEST raid_write_error_test 00:12:40.967 ************************************ 00:12:40.967 21:48:56 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:12:40.967 21:48:56 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:40.967 21:48:56 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:40.967 21:48:56 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:40.967 21:48:56 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:12:40.967 21:48:56 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:40.967 21:48:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 ************************************ 00:12:40.967 START TEST raid_state_function_test 00:12:40.967 ************************************ 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid0 4 false 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev3 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58359 00:12:40.967 Process raid pid: 58359 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58359' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58359 /var/tmp/spdk-raid.sock 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 58359 ']' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:40.967 21:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.967 [2024-07-15 21:48:56.124288] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
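[Editor's note] raid_state_function_test swaps bdevperf for the lightweight bdev_svc app and walks the raid state machine directly: Existed_Raid is created as a 4-disk raid0 (64k strip, no superblock) before any base bdev exists, which parks it in the configuring state until all four slots are filled. A minimal sketch of that first transition, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Create the array before any base bdev exists; each name is only
    # registered as a "doesn't exist now" placeholder.
    $rpc -s $sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # With zero base bdevs discovered, the array cannot go online:
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "configuring"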
00:12:40.967 [2024-07-15 21:48:56.124548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:41.533 EAL: TSC is not safe to use in SMP mode 00:12:41.533 EAL: TSC is not invariant 00:12:41.533 [2024-07-15 21:48:56.690460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.791 [2024-07-15 21:48:56.768272] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:41.791 [2024-07-15 21:48:56.770609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.791 [2024-07-15 21:48:56.771493] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.791 [2024-07-15 21:48:56.771507] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.049 21:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:42.049 21:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:12:42.049 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:42.307 [2024-07-15 21:48:57.368330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.307 [2024-07-15 21:48:57.368392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.307 [2024-07-15 21:48:57.368396] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.307 [2024-07-15 21:48:57.368421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.307 [2024-07-15 21:48:57.368424] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.307 [2024-07-15 21:48:57.368431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.307 [2024-07-15 21:48:57.368434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.307 [2024-07-15 21:48:57.368441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.307 21:48:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.307 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.616 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.616 "name": "Existed_Raid", 00:12:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.616 "strip_size_kb": 64, 00:12:42.616 "state": "configuring", 00:12:42.616 "raid_level": "raid0", 00:12:42.616 "superblock": false, 00:12:42.616 "num_base_bdevs": 4, 00:12:42.616 "num_base_bdevs_discovered": 0, 00:12:42.616 "num_base_bdevs_operational": 4, 00:12:42.616 "base_bdevs_list": [ 00:12:42.616 { 00:12:42.616 "name": "BaseBdev1", 00:12:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.616 "is_configured": false, 00:12:42.616 "data_offset": 0, 00:12:42.616 "data_size": 0 00:12:42.616 }, 00:12:42.616 { 00:12:42.616 "name": "BaseBdev2", 00:12:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.616 "is_configured": false, 00:12:42.616 "data_offset": 0, 00:12:42.616 "data_size": 0 00:12:42.616 }, 00:12:42.616 { 00:12:42.616 "name": "BaseBdev3", 00:12:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.616 "is_configured": false, 00:12:42.616 "data_offset": 0, 00:12:42.616 "data_size": 0 00:12:42.616 }, 00:12:42.616 { 00:12:42.616 "name": "BaseBdev4", 00:12:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.616 "is_configured": false, 00:12:42.616 "data_offset": 0, 00:12:42.616 "data_size": 0 00:12:42.616 } 00:12:42.616 ] 00:12:42.616 }' 00:12:42.616 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.616 21:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.875 21:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:43.134 [2024-07-15 21:48:58.176344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.134 [2024-07-15 21:48:58.176370] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dcbb7e34500 name Existed_Raid, state configuring 00:12:43.134 21:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:43.393 [2024-07-15 21:48:58.396352] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.393 [2024-07-15 21:48:58.396403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.393 [2024-07-15 21:48:58.396423] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.393 [2024-07-15 21:48:58.396431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.393 [2024-07-15 21:48:58.396434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.393 [2024-07-15 21:48:58.396441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.393 [2024-07-15 21:48:58.396443] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
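[Editor's note] With Existed_Raid re-created and still empty, the test now fills exactly one slot: a 32 MiB, 512-byte-block malloc bdev named BaseBdev1 is created and immediately claimed by the raid module (the bdev dump below reports claimed: true with an exclusive_write claim), while the array itself is expected to remain configuring with one of four base bdevs discovered. A minimal sketch of that step, under the same bdev_svc/socket assumptions as above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB malloc bdev with 512-byte blocks (65536 blocks total); the raid
    # module claims it as soon as it appears.
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
    # One of four slots filled; the array must still be "configuring":
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'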
00:12:43.393 [2024-07-15 21:48:58.396450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.393 21:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.653 [2024-07-15 21:48:58.657357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.653 BaseBdev1 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:43.653 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.912 21:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:44.171 [ 00:12:44.171 { 00:12:44.171 "name": "BaseBdev1", 00:12:44.171 "aliases": [ 00:12:44.171 "0a7368a3-42f4-11ef-9f7f-e9a656123a8b" 00:12:44.171 ], 00:12:44.171 "product_name": "Malloc disk", 00:12:44.171 "block_size": 512, 00:12:44.171 "num_blocks": 65536, 00:12:44.171 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:44.171 "assigned_rate_limits": { 00:12:44.171 "rw_ios_per_sec": 0, 00:12:44.171 "rw_mbytes_per_sec": 0, 00:12:44.171 "r_mbytes_per_sec": 0, 00:12:44.171 "w_mbytes_per_sec": 0 00:12:44.171 }, 00:12:44.171 "claimed": true, 00:12:44.171 "claim_type": "exclusive_write", 00:12:44.171 "zoned": false, 00:12:44.171 "supported_io_types": { 00:12:44.171 "read": true, 00:12:44.171 "write": true, 00:12:44.171 "unmap": true, 00:12:44.171 "flush": true, 00:12:44.171 "reset": true, 00:12:44.171 "nvme_admin": false, 00:12:44.171 "nvme_io": false, 00:12:44.171 "nvme_io_md": false, 00:12:44.171 "write_zeroes": true, 00:12:44.171 "zcopy": true, 00:12:44.171 "get_zone_info": false, 00:12:44.171 "zone_management": false, 00:12:44.171 "zone_append": false, 00:12:44.171 "compare": false, 00:12:44.171 "compare_and_write": false, 00:12:44.171 "abort": true, 00:12:44.171 "seek_hole": false, 00:12:44.171 "seek_data": false, 00:12:44.171 "copy": true, 00:12:44.171 "nvme_iov_md": false 00:12:44.171 }, 00:12:44.171 "memory_domains": [ 00:12:44.171 { 00:12:44.171 "dma_device_id": "system", 00:12:44.171 "dma_device_type": 1 00:12:44.171 }, 00:12:44.171 { 00:12:44.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.171 "dma_device_type": 2 00:12:44.171 } 00:12:44.171 ], 00:12:44.171 "driver_specific": {} 00:12:44.171 } 00:12:44.171 ] 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.172 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.430 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.431 "name": "Existed_Raid", 00:12:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.431 "strip_size_kb": 64, 00:12:44.431 "state": "configuring", 00:12:44.431 "raid_level": "raid0", 00:12:44.431 "superblock": false, 00:12:44.431 "num_base_bdevs": 4, 00:12:44.431 "num_base_bdevs_discovered": 1, 00:12:44.431 "num_base_bdevs_operational": 4, 00:12:44.431 "base_bdevs_list": [ 00:12:44.431 { 00:12:44.431 "name": "BaseBdev1", 00:12:44.431 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:44.431 "is_configured": true, 00:12:44.431 "data_offset": 0, 00:12:44.431 "data_size": 65536 00:12:44.431 }, 00:12:44.431 { 00:12:44.431 "name": "BaseBdev2", 00:12:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.431 "is_configured": false, 00:12:44.431 "data_offset": 0, 00:12:44.431 "data_size": 0 00:12:44.431 }, 00:12:44.431 { 00:12:44.431 "name": "BaseBdev3", 00:12:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.431 "is_configured": false, 00:12:44.431 "data_offset": 0, 00:12:44.431 "data_size": 0 00:12:44.431 }, 00:12:44.431 { 00:12:44.431 "name": "BaseBdev4", 00:12:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.431 "is_configured": false, 00:12:44.431 "data_offset": 0, 00:12:44.431 "data_size": 0 00:12:44.431 } 00:12:44.431 ] 00:12:44.431 }' 00:12:44.431 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.431 21:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.689 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:44.949 [2024-07-15 21:48:59.896378] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.949 [2024-07-15 21:48:59.896422] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dcbb7e34500 name Existed_Raid, state configuring 00:12:44.949 21:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:12:45.208 [2024-07-15 21:49:00.156395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.208 [2024-07-15 21:49:00.157351] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.208 [2024-07-15 21:49:00.157400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.208 [2024-07-15 21:49:00.157405] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.208 [2024-07-15 21:49:00.157434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.208 [2024-07-15 21:49:00.157438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.208 [2024-07-15 21:49:00.157445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.208 "name": "Existed_Raid", 00:12:45.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.208 "strip_size_kb": 64, 00:12:45.208 "state": "configuring", 00:12:45.208 "raid_level": "raid0", 00:12:45.208 "superblock": false, 00:12:45.208 "num_base_bdevs": 4, 00:12:45.208 "num_base_bdevs_discovered": 1, 00:12:45.208 "num_base_bdevs_operational": 4, 00:12:45.208 "base_bdevs_list": [ 00:12:45.208 { 00:12:45.208 "name": "BaseBdev1", 00:12:45.208 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:45.208 "is_configured": true, 00:12:45.208 "data_offset": 0, 00:12:45.208 "data_size": 65536 00:12:45.208 }, 00:12:45.208 { 00:12:45.208 "name": "BaseBdev2", 00:12:45.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.208 "is_configured": false, 00:12:45.208 "data_offset": 0, 00:12:45.208 "data_size": 
0 00:12:45.208 }, 00:12:45.208 { 00:12:45.208 "name": "BaseBdev3", 00:12:45.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.208 "is_configured": false, 00:12:45.208 "data_offset": 0, 00:12:45.208 "data_size": 0 00:12:45.208 }, 00:12:45.208 { 00:12:45.208 "name": "BaseBdev4", 00:12:45.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.208 "is_configured": false, 00:12:45.208 "data_offset": 0, 00:12:45.208 "data_size": 0 00:12:45.208 } 00:12:45.208 ] 00:12:45.208 }' 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.208 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.775 [2024-07-15 21:49:00.900535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.775 BaseBdev2 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:45.775 21:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.033 21:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.292 [ 00:12:46.292 { 00:12:46.292 "name": "BaseBdev2", 00:12:46.292 "aliases": [ 00:12:46.292 "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b" 00:12:46.292 ], 00:12:46.292 "product_name": "Malloc disk", 00:12:46.292 "block_size": 512, 00:12:46.292 "num_blocks": 65536, 00:12:46.292 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:46.292 "assigned_rate_limits": { 00:12:46.292 "rw_ios_per_sec": 0, 00:12:46.292 "rw_mbytes_per_sec": 0, 00:12:46.292 "r_mbytes_per_sec": 0, 00:12:46.292 "w_mbytes_per_sec": 0 00:12:46.292 }, 00:12:46.292 "claimed": true, 00:12:46.292 "claim_type": "exclusive_write", 00:12:46.292 "zoned": false, 00:12:46.292 "supported_io_types": { 00:12:46.292 "read": true, 00:12:46.292 "write": true, 00:12:46.292 "unmap": true, 00:12:46.292 "flush": true, 00:12:46.292 "reset": true, 00:12:46.292 "nvme_admin": false, 00:12:46.292 "nvme_io": false, 00:12:46.292 "nvme_io_md": false, 00:12:46.292 "write_zeroes": true, 00:12:46.292 "zcopy": true, 00:12:46.292 "get_zone_info": false, 00:12:46.292 "zone_management": false, 00:12:46.292 "zone_append": false, 00:12:46.292 "compare": false, 00:12:46.292 "compare_and_write": false, 00:12:46.292 "abort": true, 00:12:46.292 "seek_hole": false, 00:12:46.292 "seek_data": false, 00:12:46.292 "copy": true, 00:12:46.292 "nvme_iov_md": false 00:12:46.292 }, 00:12:46.292 "memory_domains": [ 00:12:46.292 { 00:12:46.292 "dma_device_id": "system", 00:12:46.292 "dma_device_type": 1 
00:12:46.292 }, 00:12:46.292 { 00:12:46.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.292 "dma_device_type": 2 00:12:46.292 } 00:12:46.292 ], 00:12:46.292 "driver_specific": {} 00:12:46.292 } 00:12:46.292 ] 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.292 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.551 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.551 "name": "Existed_Raid", 00:12:46.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.551 "strip_size_kb": 64, 00:12:46.551 "state": "configuring", 00:12:46.551 "raid_level": "raid0", 00:12:46.551 "superblock": false, 00:12:46.551 "num_base_bdevs": 4, 00:12:46.551 "num_base_bdevs_discovered": 2, 00:12:46.551 "num_base_bdevs_operational": 4, 00:12:46.551 "base_bdevs_list": [ 00:12:46.551 { 00:12:46.551 "name": "BaseBdev1", 00:12:46.551 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:46.551 "is_configured": true, 00:12:46.551 "data_offset": 0, 00:12:46.551 "data_size": 65536 00:12:46.551 }, 00:12:46.551 { 00:12:46.551 "name": "BaseBdev2", 00:12:46.551 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:46.551 "is_configured": true, 00:12:46.552 "data_offset": 0, 00:12:46.552 "data_size": 65536 00:12:46.552 }, 00:12:46.552 { 00:12:46.552 "name": "BaseBdev3", 00:12:46.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.552 "is_configured": false, 00:12:46.552 "data_offset": 0, 00:12:46.552 "data_size": 0 00:12:46.552 }, 00:12:46.552 { 00:12:46.552 "name": "BaseBdev4", 00:12:46.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.552 "is_configured": false, 00:12:46.552 "data_offset": 0, 00:12:46.552 "data_size": 0 00:12:46.552 } 00:12:46.552 ] 00:12:46.552 }' 00:12:46.552 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.552 21:49:01 
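
Each base bdev is a malloc disk sized so the numbers in these dumps line up: bdev_malloc_create takes the size in MiB and the block size in bytes, and 32 MiB / 512 B = 65536, the num_blocks reported for every BaseBdev. The still-configuring array claims each new member as soon as it is examined, visible as claim_type "exclusive_write" in the dumps above. A sketch of the create-and-inspect pair for BaseBdev3, the next member the test adds (the jq projection is ours):

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev3     # 32 MiB, 512-byte blocks
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b BaseBdev3 \
      | jq -c '.[0] | {claimed, claim_type}'
{"claimed":true,"claim_type":"exclusive_write"}
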
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.812 21:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:47.071 [2024-07-15 21:49:02.208591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.071 BaseBdev3 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:47.071 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:47.330 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.589 [ 00:12:47.589 { 00:12:47.589 "name": "BaseBdev3", 00:12:47.589 "aliases": [ 00:12:47.589 "0c916ad9-42f4-11ef-9f7f-e9a656123a8b" 00:12:47.589 ], 00:12:47.589 "product_name": "Malloc disk", 00:12:47.589 "block_size": 512, 00:12:47.589 "num_blocks": 65536, 00:12:47.589 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:47.590 "assigned_rate_limits": { 00:12:47.590 "rw_ios_per_sec": 0, 00:12:47.590 "rw_mbytes_per_sec": 0, 00:12:47.590 "r_mbytes_per_sec": 0, 00:12:47.590 "w_mbytes_per_sec": 0 00:12:47.590 }, 00:12:47.590 "claimed": true, 00:12:47.590 "claim_type": "exclusive_write", 00:12:47.590 "zoned": false, 00:12:47.590 "supported_io_types": { 00:12:47.590 "read": true, 00:12:47.590 "write": true, 00:12:47.590 "unmap": true, 00:12:47.590 "flush": true, 00:12:47.590 "reset": true, 00:12:47.590 "nvme_admin": false, 00:12:47.590 "nvme_io": false, 00:12:47.590 "nvme_io_md": false, 00:12:47.590 "write_zeroes": true, 00:12:47.590 "zcopy": true, 00:12:47.590 "get_zone_info": false, 00:12:47.590 "zone_management": false, 00:12:47.590 "zone_append": false, 00:12:47.590 "compare": false, 00:12:47.590 "compare_and_write": false, 00:12:47.590 "abort": true, 00:12:47.590 "seek_hole": false, 00:12:47.590 "seek_data": false, 00:12:47.590 "copy": true, 00:12:47.590 "nvme_iov_md": false 00:12:47.590 }, 00:12:47.590 "memory_domains": [ 00:12:47.590 { 00:12:47.590 "dma_device_id": "system", 00:12:47.590 "dma_device_type": 1 00:12:47.590 }, 00:12:47.590 { 00:12:47.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.590 "dma_device_type": 2 00:12:47.590 } 00:12:47.590 ], 00:12:47.590 "driver_specific": {} 00:12:47.590 } 00:12:47.590 ] 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.590 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.849 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.849 "name": "Existed_Raid", 00:12:47.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.849 "strip_size_kb": 64, 00:12:47.849 "state": "configuring", 00:12:47.849 "raid_level": "raid0", 00:12:47.849 "superblock": false, 00:12:47.849 "num_base_bdevs": 4, 00:12:47.849 "num_base_bdevs_discovered": 3, 00:12:47.849 "num_base_bdevs_operational": 4, 00:12:47.849 "base_bdevs_list": [ 00:12:47.849 { 00:12:47.849 "name": "BaseBdev1", 00:12:47.849 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:47.849 "is_configured": true, 00:12:47.849 "data_offset": 0, 00:12:47.849 "data_size": 65536 00:12:47.849 }, 00:12:47.849 { 00:12:47.849 "name": "BaseBdev2", 00:12:47.849 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:47.849 "is_configured": true, 00:12:47.849 "data_offset": 0, 00:12:47.849 "data_size": 65536 00:12:47.849 }, 00:12:47.849 { 00:12:47.849 "name": "BaseBdev3", 00:12:47.849 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:47.849 "is_configured": true, 00:12:47.849 "data_offset": 0, 00:12:47.849 "data_size": 65536 00:12:47.849 }, 00:12:47.849 { 00:12:47.849 "name": "BaseBdev4", 00:12:47.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.849 "is_configured": false, 00:12:47.849 "data_offset": 0, 00:12:47.849 "data_size": 0 00:12:47.849 } 00:12:47.849 ] 00:12:47.849 }' 00:12:47.849 21:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.849 21:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.108 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:48.365 [2024-07-15 21:49:03.440628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.365 [2024-07-15 21:49:03.440653] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2dcbb7e34a00 00:12:48.365 [2024-07-15 21:49:03.440673] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:12:48.365 [2024-07-15 21:49:03.440705] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2dcbb7e97e20 00:12:48.365 [2024-07-15 21:49:03.440787] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2dcbb7e34a00 00:12:48.365 [2024-07-15 21:49:03.440791] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2dcbb7e34a00 00:12:48.365 [2024-07-15 21:49:03.440826] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.365 BaseBdev4 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:48.365 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.623 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:48.881 [ 00:12:48.881 { 00:12:48.881 "name": "BaseBdev4", 00:12:48.881 "aliases": [ 00:12:48.881 "0d4d6928-42f4-11ef-9f7f-e9a656123a8b" 00:12:48.881 ], 00:12:48.881 "product_name": "Malloc disk", 00:12:48.881 "block_size": 512, 00:12:48.881 "num_blocks": 65536, 00:12:48.881 "uuid": "0d4d6928-42f4-11ef-9f7f-e9a656123a8b", 00:12:48.881 "assigned_rate_limits": { 00:12:48.881 "rw_ios_per_sec": 0, 00:12:48.881 "rw_mbytes_per_sec": 0, 00:12:48.881 "r_mbytes_per_sec": 0, 00:12:48.881 "w_mbytes_per_sec": 0 00:12:48.881 }, 00:12:48.881 "claimed": true, 00:12:48.881 "claim_type": "exclusive_write", 00:12:48.881 "zoned": false, 00:12:48.881 "supported_io_types": { 00:12:48.881 "read": true, 00:12:48.881 "write": true, 00:12:48.881 "unmap": true, 00:12:48.881 "flush": true, 00:12:48.881 "reset": true, 00:12:48.881 "nvme_admin": false, 00:12:48.881 "nvme_io": false, 00:12:48.881 "nvme_io_md": false, 00:12:48.881 "write_zeroes": true, 00:12:48.881 "zcopy": true, 00:12:48.881 "get_zone_info": false, 00:12:48.881 "zone_management": false, 00:12:48.881 "zone_append": false, 00:12:48.881 "compare": false, 00:12:48.881 "compare_and_write": false, 00:12:48.881 "abort": true, 00:12:48.881 "seek_hole": false, 00:12:48.881 "seek_data": false, 00:12:48.881 "copy": true, 00:12:48.881 "nvme_iov_md": false 00:12:48.881 }, 00:12:48.881 "memory_domains": [ 00:12:48.881 { 00:12:48.881 "dma_device_id": "system", 00:12:48.881 "dma_device_type": 1 00:12:48.881 }, 00:12:48.881 { 00:12:48.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.881 "dma_device_type": 2 00:12:48.881 } 00:12:48.881 ], 00:12:48.881 "driver_specific": {} 00:12:48.881 } 00:12:48.881 ] 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:48.881 21:49:03 
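
The raid_bdev_configure_cont traces above mark the moment the array comes up: with BaseBdev4 claimed, the io device is registered with blockcnt 262144 and the state moves from "configuring" to "online", which the verify_raid_bdev_state Existed_Raid online call just below asserts. The same assertion can be written so that a mismatch fails the shell, using jq -e to turn the comparison into an exit status (this pattern is ours, not the test's):

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all \
      | jq -e '.[] | select(.name == "Existed_Raid") | .state == "online"' >/dev/null \
      && echo "Existed_Raid is online"
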
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.881 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.882 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.882 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.882 21:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.141 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.141 "name": "Existed_Raid", 00:12:49.141 "uuid": "0d4d6fb8-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.141 "strip_size_kb": 64, 00:12:49.141 "state": "online", 00:12:49.141 "raid_level": "raid0", 00:12:49.141 "superblock": false, 00:12:49.141 "num_base_bdevs": 4, 00:12:49.141 "num_base_bdevs_discovered": 4, 00:12:49.141 "num_base_bdevs_operational": 4, 00:12:49.141 "base_bdevs_list": [ 00:12:49.141 { 00:12:49.141 "name": "BaseBdev1", 00:12:49.141 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.141 "is_configured": true, 00:12:49.141 "data_offset": 0, 00:12:49.141 "data_size": 65536 00:12:49.141 }, 00:12:49.141 { 00:12:49.141 "name": "BaseBdev2", 00:12:49.141 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.141 "is_configured": true, 00:12:49.141 "data_offset": 0, 00:12:49.141 "data_size": 65536 00:12:49.141 }, 00:12:49.141 { 00:12:49.141 "name": "BaseBdev3", 00:12:49.141 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.141 "is_configured": true, 00:12:49.141 "data_offset": 0, 00:12:49.141 "data_size": 65536 00:12:49.141 }, 00:12:49.141 { 00:12:49.141 "name": "BaseBdev4", 00:12:49.141 "uuid": "0d4d6928-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.141 "is_configured": true, 00:12:49.141 "data_offset": 0, 00:12:49.141 "data_size": 65536 00:12:49.141 } 00:12:49.141 ] 00:12:49.141 }' 00:12:49.141 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.141 21:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:49.399 
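
The Raid Volume dump just below reports num_blocks 262144 at block_size 512. That is the plain sum of the four 65536-block malloc disks, consistent with "superblock": false, where no base bdev capacity appears to be reserved for on-disk raid metadata. A quick sanity check of the arithmetic and of the live value:

$ echo $(( 4 * 65536 ))     # base bdevs x blocks per base bdev
262144
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b Existed_Raid | jq '.[0].num_blocks'
262144
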
21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:49.399 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:49.657 [2024-07-15 21:49:04.740610] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.657 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:49.657 "name": "Existed_Raid", 00:12:49.657 "aliases": [ 00:12:49.657 "0d4d6fb8-42f4-11ef-9f7f-e9a656123a8b" 00:12:49.657 ], 00:12:49.657 "product_name": "Raid Volume", 00:12:49.657 "block_size": 512, 00:12:49.657 "num_blocks": 262144, 00:12:49.657 "uuid": "0d4d6fb8-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.657 "assigned_rate_limits": { 00:12:49.657 "rw_ios_per_sec": 0, 00:12:49.657 "rw_mbytes_per_sec": 0, 00:12:49.657 "r_mbytes_per_sec": 0, 00:12:49.657 "w_mbytes_per_sec": 0 00:12:49.657 }, 00:12:49.657 "claimed": false, 00:12:49.657 "zoned": false, 00:12:49.657 "supported_io_types": { 00:12:49.657 "read": true, 00:12:49.657 "write": true, 00:12:49.657 "unmap": true, 00:12:49.657 "flush": true, 00:12:49.657 "reset": true, 00:12:49.657 "nvme_admin": false, 00:12:49.657 "nvme_io": false, 00:12:49.657 "nvme_io_md": false, 00:12:49.657 "write_zeroes": true, 00:12:49.657 "zcopy": false, 00:12:49.657 "get_zone_info": false, 00:12:49.657 "zone_management": false, 00:12:49.657 "zone_append": false, 00:12:49.657 "compare": false, 00:12:49.657 "compare_and_write": false, 00:12:49.657 "abort": false, 00:12:49.657 "seek_hole": false, 00:12:49.657 "seek_data": false, 00:12:49.657 "copy": false, 00:12:49.657 "nvme_iov_md": false 00:12:49.657 }, 00:12:49.657 "memory_domains": [ 00:12:49.657 { 00:12:49.657 "dma_device_id": "system", 00:12:49.658 "dma_device_type": 1 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.658 "dma_device_type": 2 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "system", 00:12:49.658 "dma_device_type": 1 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.658 "dma_device_type": 2 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "system", 00:12:49.658 "dma_device_type": 1 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.658 "dma_device_type": 2 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "system", 00:12:49.658 "dma_device_type": 1 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.658 "dma_device_type": 2 00:12:49.658 } 00:12:49.658 ], 00:12:49.658 "driver_specific": { 00:12:49.658 "raid": { 00:12:49.658 "uuid": "0d4d6fb8-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.658 "strip_size_kb": 64, 00:12:49.658 "state": "online", 00:12:49.658 "raid_level": "raid0", 00:12:49.658 "superblock": false, 00:12:49.658 "num_base_bdevs": 4, 00:12:49.658 "num_base_bdevs_discovered": 4, 00:12:49.658 "num_base_bdevs_operational": 4, 00:12:49.658 "base_bdevs_list": [ 00:12:49.658 { 00:12:49.658 "name": "BaseBdev1", 00:12:49.658 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.658 
"is_configured": true, 00:12:49.658 "data_offset": 0, 00:12:49.658 "data_size": 65536 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "BaseBdev2", 00:12:49.658 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 0, 00:12:49.658 "data_size": 65536 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "BaseBdev3", 00:12:49.658 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 0, 00:12:49.658 "data_size": 65536 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "BaseBdev4", 00:12:49.658 "uuid": "0d4d6928-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 0, 00:12:49.658 "data_size": 65536 00:12:49.658 } 00:12:49.658 ] 00:12:49.658 } 00:12:49.658 } 00:12:49.658 }' 00:12:49.658 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.658 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:49.658 BaseBdev2 00:12:49.658 BaseBdev3 00:12:49.658 BaseBdev4' 00:12:49.658 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.658 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:49.658 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:49.917 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:49.917 "name": "BaseBdev1", 00:12:49.917 "aliases": [ 00:12:49.917 "0a7368a3-42f4-11ef-9f7f-e9a656123a8b" 00:12:49.917 ], 00:12:49.917 "product_name": "Malloc disk", 00:12:49.917 "block_size": 512, 00:12:49.917 "num_blocks": 65536, 00:12:49.917 "uuid": "0a7368a3-42f4-11ef-9f7f-e9a656123a8b", 00:12:49.917 "assigned_rate_limits": { 00:12:49.917 "rw_ios_per_sec": 0, 00:12:49.917 "rw_mbytes_per_sec": 0, 00:12:49.917 "r_mbytes_per_sec": 0, 00:12:49.917 "w_mbytes_per_sec": 0 00:12:49.917 }, 00:12:49.917 "claimed": true, 00:12:49.917 "claim_type": "exclusive_write", 00:12:49.917 "zoned": false, 00:12:49.917 "supported_io_types": { 00:12:49.917 "read": true, 00:12:49.917 "write": true, 00:12:49.917 "unmap": true, 00:12:49.917 "flush": true, 00:12:49.917 "reset": true, 00:12:49.917 "nvme_admin": false, 00:12:49.917 "nvme_io": false, 00:12:49.917 "nvme_io_md": false, 00:12:49.917 "write_zeroes": true, 00:12:49.917 "zcopy": true, 00:12:49.917 "get_zone_info": false, 00:12:49.917 "zone_management": false, 00:12:49.917 "zone_append": false, 00:12:49.917 "compare": false, 00:12:49.917 "compare_and_write": false, 00:12:49.917 "abort": true, 00:12:49.917 "seek_hole": false, 00:12:49.917 "seek_data": false, 00:12:49.917 "copy": true, 00:12:49.917 "nvme_iov_md": false 00:12:49.917 }, 00:12:49.917 "memory_domains": [ 00:12:49.917 { 00:12:49.917 "dma_device_id": "system", 00:12:49.917 "dma_device_type": 1 00:12:49.917 }, 00:12:49.917 { 00:12:49.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.917 "dma_device_type": 2 00:12:49.917 } 00:12:49.917 ], 00:12:49.917 "driver_specific": {} 00:12:49.917 }' 00:12:49.917 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.917 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.917 21:49:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.917 21:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:49.917 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.175 "name": "BaseBdev2", 00:12:50.175 "aliases": [ 00:12:50.175 "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b" 00:12:50.175 ], 00:12:50.175 "product_name": "Malloc disk", 00:12:50.175 "block_size": 512, 00:12:50.175 "num_blocks": 65536, 00:12:50.175 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:50.175 "assigned_rate_limits": { 00:12:50.175 "rw_ios_per_sec": 0, 00:12:50.175 "rw_mbytes_per_sec": 0, 00:12:50.175 "r_mbytes_per_sec": 0, 00:12:50.175 "w_mbytes_per_sec": 0 00:12:50.175 }, 00:12:50.175 "claimed": true, 00:12:50.175 "claim_type": "exclusive_write", 00:12:50.175 "zoned": false, 00:12:50.175 "supported_io_types": { 00:12:50.175 "read": true, 00:12:50.175 "write": true, 00:12:50.175 "unmap": true, 00:12:50.175 "flush": true, 00:12:50.175 "reset": true, 00:12:50.175 "nvme_admin": false, 00:12:50.175 "nvme_io": false, 00:12:50.175 "nvme_io_md": false, 00:12:50.175 "write_zeroes": true, 00:12:50.175 "zcopy": true, 00:12:50.175 "get_zone_info": false, 00:12:50.175 "zone_management": false, 00:12:50.175 "zone_append": false, 00:12:50.175 "compare": false, 00:12:50.175 "compare_and_write": false, 00:12:50.175 "abort": true, 00:12:50.175 "seek_hole": false, 00:12:50.175 "seek_data": false, 00:12:50.175 "copy": true, 00:12:50.175 "nvme_iov_md": false 00:12:50.175 }, 00:12:50.175 "memory_domains": [ 00:12:50.175 { 00:12:50.175 "dma_device_id": "system", 00:12:50.175 "dma_device_type": 1 00:12:50.175 }, 00:12:50.175 { 00:12:50.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.175 "dma_device_type": 2 00:12:50.175 } 00:12:50.175 ], 00:12:50.175 "driver_specific": {} 00:12:50.175 }' 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.175 
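
The repeating jq .block_size, .md_size, .md_interleave and .dif_type probes here are verify_raid_bdev_properties walking each configured base bdev in turn and insisting on 512-byte blocks with no metadata, interleave or DIF. The whole sweep collapses into one loop (the loop form and jq projection are ours; jq reports absent keys as null, matching the [[ null == null ]] checks):

$ for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
>     /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
>         bdev_get_bdevs -b "$name" \
>         | jq -c '.[0] | {block_size, md_size, md_interleave, dif_type}'
> done
{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}

The same line repeats for each of the four base bdevs.
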
21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.175 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:50.434 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.692 "name": "BaseBdev3", 00:12:50.692 "aliases": [ 00:12:50.692 "0c916ad9-42f4-11ef-9f7f-e9a656123a8b" 00:12:50.692 ], 00:12:50.692 "product_name": "Malloc disk", 00:12:50.692 "block_size": 512, 00:12:50.692 "num_blocks": 65536, 00:12:50.692 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:50.692 "assigned_rate_limits": { 00:12:50.692 "rw_ios_per_sec": 0, 00:12:50.692 "rw_mbytes_per_sec": 0, 00:12:50.692 "r_mbytes_per_sec": 0, 00:12:50.692 "w_mbytes_per_sec": 0 00:12:50.692 }, 00:12:50.692 "claimed": true, 00:12:50.692 "claim_type": "exclusive_write", 00:12:50.692 "zoned": false, 00:12:50.692 "supported_io_types": { 00:12:50.692 "read": true, 00:12:50.692 "write": true, 00:12:50.692 "unmap": true, 00:12:50.692 "flush": true, 00:12:50.692 "reset": true, 00:12:50.692 "nvme_admin": false, 00:12:50.692 "nvme_io": false, 00:12:50.692 "nvme_io_md": false, 00:12:50.692 "write_zeroes": true, 00:12:50.692 "zcopy": true, 00:12:50.692 "get_zone_info": false, 00:12:50.692 "zone_management": false, 00:12:50.692 "zone_append": false, 00:12:50.692 "compare": false, 00:12:50.692 "compare_and_write": false, 00:12:50.692 "abort": true, 00:12:50.692 "seek_hole": false, 00:12:50.692 "seek_data": false, 00:12:50.692 "copy": true, 00:12:50.692 "nvme_iov_md": false 00:12:50.692 }, 00:12:50.692 "memory_domains": [ 00:12:50.692 { 00:12:50.692 "dma_device_id": "system", 00:12:50.692 "dma_device_type": 1 00:12:50.692 }, 00:12:50.692 { 00:12:50.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.692 "dma_device_type": 2 00:12:50.692 } 00:12:50.692 ], 00:12:50.692 "driver_specific": {} 00:12:50.692 }' 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
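
Once the property checks finish, the teardown that begins at the bdev_malloc_delete BaseBdev1 call further below hinges on the has_redundancy helper traced there (the case $1 in / return 1 lines). raid0 stripes with no redundancy, so the helper fails and the expected state after deleting any one member is "offline" with three of four base bdevs operational. A sketch consistent with that trace; the exact list of redundant levels is an assumption on our part:

has_redundancy() {
    case $1 in
    raid1 | raid5f) return 0 ;;   # levels assumed to survive a missing member
    *) return 1 ;;                # raid0 lands here, so expected_state=offline
    esac
}
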
00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.692 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.693 21:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:50.951 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.951 "name": "BaseBdev4", 00:12:50.951 "aliases": [ 00:12:50.951 "0d4d6928-42f4-11ef-9f7f-e9a656123a8b" 00:12:50.951 ], 00:12:50.951 "product_name": "Malloc disk", 00:12:50.951 "block_size": 512, 00:12:50.951 "num_blocks": 65536, 00:12:50.951 "uuid": "0d4d6928-42f4-11ef-9f7f-e9a656123a8b", 00:12:50.951 "assigned_rate_limits": { 00:12:50.951 "rw_ios_per_sec": 0, 00:12:50.951 "rw_mbytes_per_sec": 0, 00:12:50.951 "r_mbytes_per_sec": 0, 00:12:50.951 "w_mbytes_per_sec": 0 00:12:50.951 }, 00:12:50.951 "claimed": true, 00:12:50.951 "claim_type": "exclusive_write", 00:12:50.951 "zoned": false, 00:12:50.951 "supported_io_types": { 00:12:50.951 "read": true, 00:12:50.951 "write": true, 00:12:50.951 "unmap": true, 00:12:50.951 "flush": true, 00:12:50.951 "reset": true, 00:12:50.951 "nvme_admin": false, 00:12:50.951 "nvme_io": false, 00:12:50.951 "nvme_io_md": false, 00:12:50.951 "write_zeroes": true, 00:12:50.951 "zcopy": true, 00:12:50.951 "get_zone_info": false, 00:12:50.951 "zone_management": false, 00:12:50.951 "zone_append": false, 00:12:50.951 "compare": false, 00:12:50.951 "compare_and_write": false, 00:12:50.951 "abort": true, 00:12:50.951 "seek_hole": false, 00:12:50.951 "seek_data": false, 00:12:50.951 "copy": true, 00:12:50.951 "nvme_iov_md": false 00:12:50.951 }, 00:12:50.951 "memory_domains": [ 00:12:50.951 { 00:12:50.951 "dma_device_id": "system", 00:12:50.952 "dma_device_type": 1 00:12:50.952 }, 00:12:50.952 { 00:12:50.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.952 "dma_device_type": 2 00:12:50.952 } 00:12:50.952 ], 00:12:50.952 "driver_specific": {} 00:12:50.952 }' 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.952 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:51.211 [2024-07-15 21:49:06.332618] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.211 [2024-07-15 21:49:06.332658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.211 [2024-07-15 21:49:06.332670] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.211 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.470 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.470 "name": "Existed_Raid", 00:12:51.470 "uuid": "0d4d6fb8-42f4-11ef-9f7f-e9a656123a8b", 00:12:51.470 "strip_size_kb": 64, 00:12:51.470 "state": "offline", 00:12:51.470 "raid_level": "raid0", 00:12:51.470 "superblock": false, 00:12:51.470 "num_base_bdevs": 4, 00:12:51.470 "num_base_bdevs_discovered": 3, 00:12:51.470 "num_base_bdevs_operational": 3, 00:12:51.470 "base_bdevs_list": [ 00:12:51.470 { 00:12:51.470 "name": null, 00:12:51.470 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:51.470 "is_configured": false, 00:12:51.470 "data_offset": 0, 00:12:51.470 "data_size": 65536 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "name": "BaseBdev2", 00:12:51.470 "uuid": "0bc9d22a-42f4-11ef-9f7f-e9a656123a8b", 00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 0, 00:12:51.470 "data_size": 65536 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "name": "BaseBdev3", 00:12:51.470 "uuid": "0c916ad9-42f4-11ef-9f7f-e9a656123a8b", 00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 0, 00:12:51.470 "data_size": 65536 00:12:51.470 }, 00:12:51.470 { 00:12:51.470 "name": "BaseBdev4", 00:12:51.470 "uuid": "0d4d6928-42f4-11ef-9f7f-e9a656123a8b", 00:12:51.470 "is_configured": true, 00:12:51.470 "data_offset": 0, 00:12:51.470 "data_size": 65536 00:12:51.470 } 00:12:51.470 ] 00:12:51.470 }' 00:12:51.470 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.470 21:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.729 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:51.729 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:51.729 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.729 21:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:51.987 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:51.987 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.987 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:52.245 [2024-07-15 21:49:07.334925] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.245 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:52.245 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:52.245 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:52.246 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.504 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:52.504 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.504 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:52.762 [2024-07-15 21:49:07.865356] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.762 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:52.762 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:52.762 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:52.762 21:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.021 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:53.021 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:53.021 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:53.280 [2024-07-15 21:49:08.303697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:53.280 [2024-07-15 21:49:08.303754] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dcbb7e34a00 name Existed_Raid, state offline 00:12:53.280 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:53.280 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:53.280 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:53.280 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:53.538 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:53.797 BaseBdev2 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:53.798 21:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.057 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:54.316 [ 00:12:54.316 { 00:12:54.316 "name": "BaseBdev2", 00:12:54.316 "aliases": [ 00:12:54.316 "1083ce95-42f4-11ef-9f7f-e9a656123a8b" 00:12:54.316 ], 00:12:54.316 "product_name": "Malloc disk", 00:12:54.316 "block_size": 512, 00:12:54.316 "num_blocks": 65536, 00:12:54.316 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:12:54.316 "assigned_rate_limits": { 00:12:54.316 "rw_ios_per_sec": 0, 00:12:54.316 "rw_mbytes_per_sec": 0, 00:12:54.316 "r_mbytes_per_sec": 0, 00:12:54.316 "w_mbytes_per_sec": 0 
00:12:54.316 }, 00:12:54.316 "claimed": false, 00:12:54.316 "zoned": false, 00:12:54.316 "supported_io_types": { 00:12:54.316 "read": true, 00:12:54.316 "write": true, 00:12:54.316 "unmap": true, 00:12:54.316 "flush": true, 00:12:54.316 "reset": true, 00:12:54.317 "nvme_admin": false, 00:12:54.317 "nvme_io": false, 00:12:54.317 "nvme_io_md": false, 00:12:54.317 "write_zeroes": true, 00:12:54.317 "zcopy": true, 00:12:54.317 "get_zone_info": false, 00:12:54.317 "zone_management": false, 00:12:54.317 "zone_append": false, 00:12:54.317 "compare": false, 00:12:54.317 "compare_and_write": false, 00:12:54.317 "abort": true, 00:12:54.317 "seek_hole": false, 00:12:54.317 "seek_data": false, 00:12:54.317 "copy": true, 00:12:54.317 "nvme_iov_md": false 00:12:54.317 }, 00:12:54.317 "memory_domains": [ 00:12:54.317 { 00:12:54.317 "dma_device_id": "system", 00:12:54.317 "dma_device_type": 1 00:12:54.317 }, 00:12:54.317 { 00:12:54.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.317 "dma_device_type": 2 00:12:54.317 } 00:12:54.317 ], 00:12:54.317 "driver_specific": {} 00:12:54.317 } 00:12:54.317 ] 00:12:54.317 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:54.317 21:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:54.317 21:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:54.317 21:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:54.575 BaseBdev3 00:12:54.575 21:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:54.575 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:12:54.576 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:54.576 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:54.576 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:54.576 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:54.576 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.833 21:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.113 [ 00:12:55.113 { 00:12:55.113 "name": "BaseBdev3", 00:12:55.113 "aliases": [ 00:12:55.113 "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b" 00:12:55.113 ], 00:12:55.113 "product_name": "Malloc disk", 00:12:55.113 "block_size": 512, 00:12:55.113 "num_blocks": 65536, 00:12:55.113 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:12:55.113 "assigned_rate_limits": { 00:12:55.113 "rw_ios_per_sec": 0, 00:12:55.113 "rw_mbytes_per_sec": 0, 00:12:55.113 "r_mbytes_per_sec": 0, 00:12:55.113 "w_mbytes_per_sec": 0 00:12:55.113 }, 00:12:55.113 "claimed": false, 00:12:55.113 "zoned": false, 00:12:55.113 "supported_io_types": { 00:12:55.113 "read": true, 00:12:55.113 "write": true, 00:12:55.113 "unmap": true, 00:12:55.113 "flush": true, 00:12:55.113 "reset": true, 00:12:55.113 "nvme_admin": false, 00:12:55.113 "nvme_io": false, 00:12:55.113 "nvme_io_md": 
false, 00:12:55.113 "write_zeroes": true, 00:12:55.113 "zcopy": true, 00:12:55.113 "get_zone_info": false, 00:12:55.113 "zone_management": false, 00:12:55.113 "zone_append": false, 00:12:55.113 "compare": false, 00:12:55.113 "compare_and_write": false, 00:12:55.113 "abort": true, 00:12:55.113 "seek_hole": false, 00:12:55.113 "seek_data": false, 00:12:55.113 "copy": true, 00:12:55.113 "nvme_iov_md": false 00:12:55.113 }, 00:12:55.113 "memory_domains": [ 00:12:55.113 { 00:12:55.113 "dma_device_id": "system", 00:12:55.113 "dma_device_type": 1 00:12:55.113 }, 00:12:55.113 { 00:12:55.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.113 "dma_device_type": 2 00:12:55.113 } 00:12:55.113 ], 00:12:55.113 "driver_specific": {} 00:12:55.113 } 00:12:55.113 ] 00:12:55.113 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:55.113 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:55.113 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:55.113 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:55.374 BaseBdev4 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:55.374 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:55.633 [ 00:12:55.633 { 00:12:55.633 "name": "BaseBdev4", 00:12:55.633 "aliases": [ 00:12:55.633 "1165a563-42f4-11ef-9f7f-e9a656123a8b" 00:12:55.633 ], 00:12:55.633 "product_name": "Malloc disk", 00:12:55.633 "block_size": 512, 00:12:55.633 "num_blocks": 65536, 00:12:55.633 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:12:55.633 "assigned_rate_limits": { 00:12:55.633 "rw_ios_per_sec": 0, 00:12:55.633 "rw_mbytes_per_sec": 0, 00:12:55.633 "r_mbytes_per_sec": 0, 00:12:55.633 "w_mbytes_per_sec": 0 00:12:55.633 }, 00:12:55.633 "claimed": false, 00:12:55.633 "zoned": false, 00:12:55.633 "supported_io_types": { 00:12:55.633 "read": true, 00:12:55.633 "write": true, 00:12:55.633 "unmap": true, 00:12:55.633 "flush": true, 00:12:55.633 "reset": true, 00:12:55.633 "nvme_admin": false, 00:12:55.633 "nvme_io": false, 00:12:55.633 "nvme_io_md": false, 00:12:55.633 "write_zeroes": true, 00:12:55.633 "zcopy": true, 00:12:55.633 "get_zone_info": false, 00:12:55.633 "zone_management": false, 00:12:55.633 "zone_append": false, 00:12:55.633 "compare": false, 00:12:55.633 "compare_and_write": false, 00:12:55.633 "abort": true, 00:12:55.633 "seek_hole": false, 00:12:55.633 "seek_data": false, 
00:12:55.633 "copy": true, 00:12:55.633 "nvme_iov_md": false 00:12:55.633 }, 00:12:55.633 "memory_domains": [ 00:12:55.633 { 00:12:55.633 "dma_device_id": "system", 00:12:55.633 "dma_device_type": 1 00:12:55.633 }, 00:12:55.633 { 00:12:55.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.633 "dma_device_type": 2 00:12:55.633 } 00:12:55.633 ], 00:12:55.633 "driver_specific": {} 00:12:55.633 } 00:12:55.633 ] 00:12:55.633 21:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:55.633 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:55.633 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:55.633 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:55.892 [2024-07-15 21:49:10.958257] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.892 [2024-07-15 21:49:10.958326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.892 [2024-07-15 21:49:10.958350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.892 [2024-07-15 21:49:10.959069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.892 [2024-07-15 21:49:10.959087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.892 21:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.150 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.151 "name": "Existed_Raid", 00:12:56.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.151 "strip_size_kb": 64, 00:12:56.151 "state": "configuring", 00:12:56.151 "raid_level": "raid0", 00:12:56.151 "superblock": false, 00:12:56.151 "num_base_bdevs": 4, 00:12:56.151 "num_base_bdevs_discovered": 3, 00:12:56.151 "num_base_bdevs_operational": 
4, 00:12:56.151 "base_bdevs_list": [ 00:12:56.151 { 00:12:56.151 "name": "BaseBdev1", 00:12:56.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.151 "is_configured": false, 00:12:56.151 "data_offset": 0, 00:12:56.151 "data_size": 0 00:12:56.151 }, 00:12:56.151 { 00:12:56.151 "name": "BaseBdev2", 00:12:56.151 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.151 "is_configured": true, 00:12:56.151 "data_offset": 0, 00:12:56.151 "data_size": 65536 00:12:56.151 }, 00:12:56.151 { 00:12:56.151 "name": "BaseBdev3", 00:12:56.151 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.151 "is_configured": true, 00:12:56.151 "data_offset": 0, 00:12:56.151 "data_size": 65536 00:12:56.151 }, 00:12:56.151 { 00:12:56.151 "name": "BaseBdev4", 00:12:56.151 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.151 "is_configured": true, 00:12:56.151 "data_offset": 0, 00:12:56.151 "data_size": 65536 00:12:56.151 } 00:12:56.151 ] 00:12:56.151 }' 00:12:56.151 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.151 21:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.409 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:56.667 [2024-07-15 21:49:11.726274] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.667 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.668 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.668 21:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.926 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.926 "name": "Existed_Raid", 00:12:56.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.926 "strip_size_kb": 64, 00:12:56.926 "state": "configuring", 00:12:56.926 "raid_level": "raid0", 00:12:56.926 "superblock": false, 00:12:56.926 "num_base_bdevs": 4, 00:12:56.926 "num_base_bdevs_discovered": 2, 00:12:56.926 "num_base_bdevs_operational": 4, 00:12:56.926 "base_bdevs_list": [ 00:12:56.926 { 00:12:56.926 "name": "BaseBdev1", 00:12:56.926 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:56.926 "is_configured": false, 00:12:56.926 "data_offset": 0, 00:12:56.926 "data_size": 0 00:12:56.926 }, 00:12:56.926 { 00:12:56.926 "name": null, 00:12:56.926 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.926 "is_configured": false, 00:12:56.926 "data_offset": 0, 00:12:56.926 "data_size": 65536 00:12:56.926 }, 00:12:56.926 { 00:12:56.926 "name": "BaseBdev3", 00:12:56.926 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.926 "is_configured": true, 00:12:56.926 "data_offset": 0, 00:12:56.926 "data_size": 65536 00:12:56.926 }, 00:12:56.926 { 00:12:56.926 "name": "BaseBdev4", 00:12:56.926 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:12:56.926 "is_configured": true, 00:12:56.926 "data_offset": 0, 00:12:56.926 "data_size": 65536 00:12:56.926 } 00:12:56.926 ] 00:12:56.926 }' 00:12:56.926 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.926 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.185 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.185 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:57.444 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:57.444 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:57.702 [2024-07-15 21:49:12.778464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.702 BaseBdev1 00:12:57.702 21:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:57.702 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:12:57.702 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:57.702 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:12:57.702 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:57.703 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:57.703 21:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.961 21:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:58.224 [ 00:12:58.224 { 00:12:58.225 "name": "BaseBdev1", 00:12:58.225 "aliases": [ 00:12:58.225 "12de3fcf-42f4-11ef-9f7f-e9a656123a8b" 00:12:58.225 ], 00:12:58.225 "product_name": "Malloc disk", 00:12:58.225 "block_size": 512, 00:12:58.225 "num_blocks": 65536, 00:12:58.225 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:12:58.225 "assigned_rate_limits": { 00:12:58.225 "rw_ios_per_sec": 0, 00:12:58.225 "rw_mbytes_per_sec": 0, 00:12:58.225 "r_mbytes_per_sec": 0, 00:12:58.225 "w_mbytes_per_sec": 0 00:12:58.225 }, 00:12:58.225 "claimed": true, 00:12:58.225 "claim_type": "exclusive_write", 00:12:58.225 "zoned": false, 00:12:58.225 "supported_io_types": { 00:12:58.225 "read": true, 00:12:58.225 
"write": true, 00:12:58.225 "unmap": true, 00:12:58.225 "flush": true, 00:12:58.225 "reset": true, 00:12:58.225 "nvme_admin": false, 00:12:58.225 "nvme_io": false, 00:12:58.225 "nvme_io_md": false, 00:12:58.225 "write_zeroes": true, 00:12:58.225 "zcopy": true, 00:12:58.225 "get_zone_info": false, 00:12:58.225 "zone_management": false, 00:12:58.225 "zone_append": false, 00:12:58.225 "compare": false, 00:12:58.225 "compare_and_write": false, 00:12:58.225 "abort": true, 00:12:58.225 "seek_hole": false, 00:12:58.225 "seek_data": false, 00:12:58.225 "copy": true, 00:12:58.225 "nvme_iov_md": false 00:12:58.225 }, 00:12:58.225 "memory_domains": [ 00:12:58.225 { 00:12:58.225 "dma_device_id": "system", 00:12:58.225 "dma_device_type": 1 00:12:58.225 }, 00:12:58.225 { 00:12:58.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.225 "dma_device_type": 2 00:12:58.225 } 00:12:58.225 ], 00:12:58.225 "driver_specific": {} 00:12:58.225 } 00:12:58.225 ] 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.225 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.493 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.493 "name": "Existed_Raid", 00:12:58.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.493 "strip_size_kb": 64, 00:12:58.493 "state": "configuring", 00:12:58.493 "raid_level": "raid0", 00:12:58.493 "superblock": false, 00:12:58.493 "num_base_bdevs": 4, 00:12:58.493 "num_base_bdevs_discovered": 3, 00:12:58.493 "num_base_bdevs_operational": 4, 00:12:58.493 "base_bdevs_list": [ 00:12:58.493 { 00:12:58.493 "name": "BaseBdev1", 00:12:58.493 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:12:58.493 "is_configured": true, 00:12:58.493 "data_offset": 0, 00:12:58.493 "data_size": 65536 00:12:58.493 }, 00:12:58.493 { 00:12:58.493 "name": null, 00:12:58.493 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:12:58.493 "is_configured": false, 00:12:58.493 "data_offset": 0, 00:12:58.493 "data_size": 65536 00:12:58.493 }, 00:12:58.493 { 00:12:58.493 "name": "BaseBdev3", 00:12:58.493 "uuid": 
"10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:12:58.493 "is_configured": true, 00:12:58.493 "data_offset": 0, 00:12:58.493 "data_size": 65536 00:12:58.493 }, 00:12:58.493 { 00:12:58.493 "name": "BaseBdev4", 00:12:58.493 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:12:58.493 "is_configured": true, 00:12:58.493 "data_offset": 0, 00:12:58.493 "data_size": 65536 00:12:58.493 } 00:12:58.493 ] 00:12:58.493 }' 00:12:58.493 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.493 21:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.751 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.751 21:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:59.008 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:59.009 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:59.267 [2024-07-15 21:49:14.302407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.267 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.526 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.526 "name": "Existed_Raid", 00:12:59.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.526 "strip_size_kb": 64, 00:12:59.526 "state": "configuring", 00:12:59.526 "raid_level": "raid0", 00:12:59.526 "superblock": false, 00:12:59.526 "num_base_bdevs": 4, 00:12:59.526 "num_base_bdevs_discovered": 2, 00:12:59.526 "num_base_bdevs_operational": 4, 00:12:59.526 "base_bdevs_list": [ 00:12:59.526 { 00:12:59.526 "name": "BaseBdev1", 00:12:59.526 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:12:59.526 "is_configured": true, 00:12:59.526 "data_offset": 0, 00:12:59.526 "data_size": 65536 00:12:59.526 }, 00:12:59.526 { 
00:12:59.526 "name": null, 00:12:59.526 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:12:59.526 "is_configured": false, 00:12:59.526 "data_offset": 0, 00:12:59.526 "data_size": 65536 00:12:59.526 }, 00:12:59.526 { 00:12:59.526 "name": null, 00:12:59.526 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:12:59.526 "is_configured": false, 00:12:59.526 "data_offset": 0, 00:12:59.526 "data_size": 65536 00:12:59.526 }, 00:12:59.526 { 00:12:59.526 "name": "BaseBdev4", 00:12:59.526 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:12:59.526 "is_configured": true, 00:12:59.526 "data_offset": 0, 00:12:59.526 "data_size": 65536 00:12:59.526 } 00:12:59.526 ] 00:12:59.526 }' 00:12:59.526 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.526 21:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.784 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:59.785 21:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.090 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:00.091 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:00.349 [2024-07-15 21:49:15.322472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.349 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.607 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.607 "name": "Existed_Raid", 00:13:00.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.607 "strip_size_kb": 64, 00:13:00.607 "state": "configuring", 00:13:00.607 "raid_level": "raid0", 00:13:00.607 "superblock": false, 00:13:00.607 "num_base_bdevs": 4, 00:13:00.607 "num_base_bdevs_discovered": 3, 00:13:00.607 
"num_base_bdevs_operational": 4, 00:13:00.607 "base_bdevs_list": [ 00:13:00.607 { 00:13:00.607 "name": "BaseBdev1", 00:13:00.607 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:00.607 "is_configured": true, 00:13:00.607 "data_offset": 0, 00:13:00.607 "data_size": 65536 00:13:00.607 }, 00:13:00.607 { 00:13:00.607 "name": null, 00:13:00.607 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:00.607 "is_configured": false, 00:13:00.607 "data_offset": 0, 00:13:00.607 "data_size": 65536 00:13:00.607 }, 00:13:00.607 { 00:13:00.607 "name": "BaseBdev3", 00:13:00.607 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:00.607 "is_configured": true, 00:13:00.607 "data_offset": 0, 00:13:00.607 "data_size": 65536 00:13:00.607 }, 00:13:00.607 { 00:13:00.607 "name": "BaseBdev4", 00:13:00.607 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:00.607 "is_configured": true, 00:13:00.607 "data_offset": 0, 00:13:00.607 "data_size": 65536 00:13:00.607 } 00:13:00.607 ] 00:13:00.607 }' 00:13:00.607 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.607 21:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.865 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.866 21:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.124 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:01.124 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:01.383 [2024-07-15 21:49:16.370543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.383 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.641 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.641 "name": "Existed_Raid", 00:13:01.641 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:01.641 "strip_size_kb": 64, 00:13:01.641 "state": "configuring", 00:13:01.641 "raid_level": "raid0", 00:13:01.641 "superblock": false, 00:13:01.641 "num_base_bdevs": 4, 00:13:01.641 "num_base_bdevs_discovered": 2, 00:13:01.641 "num_base_bdevs_operational": 4, 00:13:01.641 "base_bdevs_list": [ 00:13:01.641 { 00:13:01.641 "name": null, 00:13:01.641 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:01.641 "is_configured": false, 00:13:01.641 "data_offset": 0, 00:13:01.641 "data_size": 65536 00:13:01.641 }, 00:13:01.641 { 00:13:01.641 "name": null, 00:13:01.641 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:01.641 "is_configured": false, 00:13:01.641 "data_offset": 0, 00:13:01.641 "data_size": 65536 00:13:01.641 }, 00:13:01.641 { 00:13:01.641 "name": "BaseBdev3", 00:13:01.641 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:01.641 "is_configured": true, 00:13:01.641 "data_offset": 0, 00:13:01.641 "data_size": 65536 00:13:01.641 }, 00:13:01.641 { 00:13:01.641 "name": "BaseBdev4", 00:13:01.641 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:01.641 "is_configured": true, 00:13:01.641 "data_offset": 0, 00:13:01.642 "data_size": 65536 00:13:01.642 } 00:13:01.642 ] 00:13:01.642 }' 00:13:01.642 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.642 21:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.900 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.900 21:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.159 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:02.159 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.159 [2024-07-15 21:49:17.337202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.418 21:49:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.418 "name": "Existed_Raid", 00:13:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.418 "strip_size_kb": 64, 00:13:02.418 "state": "configuring", 00:13:02.418 "raid_level": "raid0", 00:13:02.418 "superblock": false, 00:13:02.418 "num_base_bdevs": 4, 00:13:02.418 "num_base_bdevs_discovered": 3, 00:13:02.418 "num_base_bdevs_operational": 4, 00:13:02.418 "base_bdevs_list": [ 00:13:02.418 { 00:13:02.418 "name": null, 00:13:02.418 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:02.418 "is_configured": false, 00:13:02.418 "data_offset": 0, 00:13:02.418 "data_size": 65536 00:13:02.418 }, 00:13:02.418 { 00:13:02.418 "name": "BaseBdev2", 00:13:02.418 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:02.418 "is_configured": true, 00:13:02.418 "data_offset": 0, 00:13:02.418 "data_size": 65536 00:13:02.418 }, 00:13:02.418 { 00:13:02.418 "name": "BaseBdev3", 00:13:02.418 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:02.418 "is_configured": true, 00:13:02.418 "data_offset": 0, 00:13:02.418 "data_size": 65536 00:13:02.418 }, 00:13:02.418 { 00:13:02.418 "name": "BaseBdev4", 00:13:02.418 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:02.418 "is_configured": true, 00:13:02.418 "data_offset": 0, 00:13:02.418 "data_size": 65536 00:13:02.418 } 00:13:02.418 ] 00:13:02.418 }' 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.418 21:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.985 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.985 21:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.985 21:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:02.985 21:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.985 21:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:03.244 21:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 12de3fcf-42f4-11ef-9f7f-e9a656123a8b 00:13:03.503 [2024-07-15 21:49:18.569415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:03.503 [2024-07-15 21:49:18.569441] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2dcbb7e34f00 00:13:03.503 [2024-07-15 21:49:18.569460] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:03.503 [2024-07-15 21:49:18.569481] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2dcbb7e97e20 00:13:03.503 [2024-07-15 21:49:18.569545] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2dcbb7e34f00 00:13:03.503 [2024-07-15 21:49:18.569550] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2dcbb7e34f00 
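
  The step just logged is the one the rest of this block verifies: bdev_raid.sh@333 reads the UUID of the unconfigured slot back out of bdev_raid_get_bdevs, then recreates the missing base bdev under a new name (NewBaseBdev) but with that same UUID, which is what lets bdev_raid claim it and move Existed_Raid from "configuring" to "online". A minimal by-hand sketch of the same sequence, assuming the same RPC socket path and the 32 MB / 512-byte-block malloc geometry used throughout this run:

      rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
      # UUID of the slot left unconfigured after BaseBdev1 was deleted
      uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
      # Recreate the base bdev under a new name but with the expected UUID
      $rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
      $rpc bdev_wait_for_examine
      # The raid should now report all four slots configured and state "online"
      $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
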
00:13:03.503 [2024-07-15 21:49:18.569585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.503 NewBaseBdev 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:03.503 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:03.504 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:03.762 21:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:04.021 [ 00:13:04.021 { 00:13:04.021 "name": "NewBaseBdev", 00:13:04.021 "aliases": [ 00:13:04.021 "12de3fcf-42f4-11ef-9f7f-e9a656123a8b" 00:13:04.021 ], 00:13:04.021 "product_name": "Malloc disk", 00:13:04.021 "block_size": 512, 00:13:04.021 "num_blocks": 65536, 00:13:04.021 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.021 "assigned_rate_limits": { 00:13:04.021 "rw_ios_per_sec": 0, 00:13:04.021 "rw_mbytes_per_sec": 0, 00:13:04.021 "r_mbytes_per_sec": 0, 00:13:04.021 "w_mbytes_per_sec": 0 00:13:04.021 }, 00:13:04.021 "claimed": true, 00:13:04.021 "claim_type": "exclusive_write", 00:13:04.021 "zoned": false, 00:13:04.021 "supported_io_types": { 00:13:04.022 "read": true, 00:13:04.022 "write": true, 00:13:04.022 "unmap": true, 00:13:04.022 "flush": true, 00:13:04.022 "reset": true, 00:13:04.022 "nvme_admin": false, 00:13:04.022 "nvme_io": false, 00:13:04.022 "nvme_io_md": false, 00:13:04.022 "write_zeroes": true, 00:13:04.022 "zcopy": true, 00:13:04.022 "get_zone_info": false, 00:13:04.022 "zone_management": false, 00:13:04.022 "zone_append": false, 00:13:04.022 "compare": false, 00:13:04.022 "compare_and_write": false, 00:13:04.022 "abort": true, 00:13:04.022 "seek_hole": false, 00:13:04.022 "seek_data": false, 00:13:04.022 "copy": true, 00:13:04.022 "nvme_iov_md": false 00:13:04.022 }, 00:13:04.022 "memory_domains": [ 00:13:04.022 { 00:13:04.022 "dma_device_id": "system", 00:13:04.022 "dma_device_type": 1 00:13:04.022 }, 00:13:04.022 { 00:13:04.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.022 "dma_device_type": 2 00:13:04.022 } 00:13:04.022 ], 00:13:04.022 "driver_specific": {} 00:13:04.022 } 00:13:04.022 ] 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.022 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.281 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.281 "name": "Existed_Raid", 00:13:04.281 "uuid": "1651e7c5-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.281 "strip_size_kb": 64, 00:13:04.281 "state": "online", 00:13:04.281 "raid_level": "raid0", 00:13:04.281 "superblock": false, 00:13:04.281 "num_base_bdevs": 4, 00:13:04.281 "num_base_bdevs_discovered": 4, 00:13:04.281 "num_base_bdevs_operational": 4, 00:13:04.281 "base_bdevs_list": [ 00:13:04.281 { 00:13:04.281 "name": "NewBaseBdev", 00:13:04.281 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.281 "is_configured": true, 00:13:04.281 "data_offset": 0, 00:13:04.281 "data_size": 65536 00:13:04.281 }, 00:13:04.281 { 00:13:04.281 "name": "BaseBdev2", 00:13:04.281 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.281 "is_configured": true, 00:13:04.281 "data_offset": 0, 00:13:04.281 "data_size": 65536 00:13:04.281 }, 00:13:04.281 { 00:13:04.281 "name": "BaseBdev3", 00:13:04.281 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.281 "is_configured": true, 00:13:04.281 "data_offset": 0, 00:13:04.281 "data_size": 65536 00:13:04.281 }, 00:13:04.281 { 00:13:04.281 "name": "BaseBdev4", 00:13:04.281 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.281 "is_configured": true, 00:13:04.281 "data_offset": 0, 00:13:04.281 "data_size": 65536 00:13:04.281 } 00:13:04.281 ] 00:13:04.281 }' 00:13:04.281 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.281 21:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:04.539 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:04.798 [2024-07-15 21:49:19.849375] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:04.798 "name": "Existed_Raid", 00:13:04.798 "aliases": [ 00:13:04.798 "1651e7c5-42f4-11ef-9f7f-e9a656123a8b" 00:13:04.798 ], 00:13:04.798 "product_name": "Raid Volume", 00:13:04.798 "block_size": 512, 00:13:04.798 "num_blocks": 262144, 00:13:04.798 "uuid": "1651e7c5-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "assigned_rate_limits": { 00:13:04.798 "rw_ios_per_sec": 0, 00:13:04.798 "rw_mbytes_per_sec": 0, 00:13:04.798 "r_mbytes_per_sec": 0, 00:13:04.798 "w_mbytes_per_sec": 0 00:13:04.798 }, 00:13:04.798 "claimed": false, 00:13:04.798 "zoned": false, 00:13:04.798 "supported_io_types": { 00:13:04.798 "read": true, 00:13:04.798 "write": true, 00:13:04.798 "unmap": true, 00:13:04.798 "flush": true, 00:13:04.798 "reset": true, 00:13:04.798 "nvme_admin": false, 00:13:04.798 "nvme_io": false, 00:13:04.798 "nvme_io_md": false, 00:13:04.798 "write_zeroes": true, 00:13:04.798 "zcopy": false, 00:13:04.798 "get_zone_info": false, 00:13:04.798 "zone_management": false, 00:13:04.798 "zone_append": false, 00:13:04.798 "compare": false, 00:13:04.798 "compare_and_write": false, 00:13:04.798 "abort": false, 00:13:04.798 "seek_hole": false, 00:13:04.798 "seek_data": false, 00:13:04.798 "copy": false, 00:13:04.798 "nvme_iov_md": false 00:13:04.798 }, 00:13:04.798 "memory_domains": [ 00:13:04.798 { 00:13:04.798 "dma_device_id": "system", 00:13:04.798 "dma_device_type": 1 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.798 "dma_device_type": 2 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "system", 00:13:04.798 "dma_device_type": 1 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.798 "dma_device_type": 2 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "system", 00:13:04.798 "dma_device_type": 1 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.798 "dma_device_type": 2 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "system", 00:13:04.798 "dma_device_type": 1 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.798 "dma_device_type": 2 00:13:04.798 } 00:13:04.798 ], 00:13:04.798 "driver_specific": { 00:13:04.798 "raid": { 00:13:04.798 "uuid": "1651e7c5-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "strip_size_kb": 64, 00:13:04.798 "state": "online", 00:13:04.798 "raid_level": "raid0", 00:13:04.798 "superblock": false, 00:13:04.798 "num_base_bdevs": 4, 00:13:04.798 "num_base_bdevs_discovered": 4, 00:13:04.798 "num_base_bdevs_operational": 4, 00:13:04.798 "base_bdevs_list": [ 00:13:04.798 { 00:13:04.798 "name": "NewBaseBdev", 00:13:04.798 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 65536 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": "BaseBdev2", 00:13:04.798 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 65536 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": "BaseBdev3", 00:13:04.798 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 65536 00:13:04.798 }, 00:13:04.798 { 00:13:04.798 "name": "BaseBdev4", 00:13:04.798 "uuid": 
"1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:04.798 "is_configured": true, 00:13:04.798 "data_offset": 0, 00:13:04.798 "data_size": 65536 00:13:04.798 } 00:13:04.798 ] 00:13:04.798 } 00:13:04.798 } 00:13:04.798 }' 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:04.798 BaseBdev2 00:13:04.798 BaseBdev3 00:13:04.798 BaseBdev4' 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:04.798 21:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:05.057 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:05.057 "name": "NewBaseBdev", 00:13:05.057 "aliases": [ 00:13:05.057 "12de3fcf-42f4-11ef-9f7f-e9a656123a8b" 00:13:05.057 ], 00:13:05.057 "product_name": "Malloc disk", 00:13:05.057 "block_size": 512, 00:13:05.057 "num_blocks": 65536, 00:13:05.057 "uuid": "12de3fcf-42f4-11ef-9f7f-e9a656123a8b", 00:13:05.057 "assigned_rate_limits": { 00:13:05.057 "rw_ios_per_sec": 0, 00:13:05.057 "rw_mbytes_per_sec": 0, 00:13:05.057 "r_mbytes_per_sec": 0, 00:13:05.057 "w_mbytes_per_sec": 0 00:13:05.057 }, 00:13:05.057 "claimed": true, 00:13:05.057 "claim_type": "exclusive_write", 00:13:05.057 "zoned": false, 00:13:05.057 "supported_io_types": { 00:13:05.057 "read": true, 00:13:05.057 "write": true, 00:13:05.057 "unmap": true, 00:13:05.057 "flush": true, 00:13:05.057 "reset": true, 00:13:05.057 "nvme_admin": false, 00:13:05.057 "nvme_io": false, 00:13:05.057 "nvme_io_md": false, 00:13:05.057 "write_zeroes": true, 00:13:05.057 "zcopy": true, 00:13:05.057 "get_zone_info": false, 00:13:05.057 "zone_management": false, 00:13:05.057 "zone_append": false, 00:13:05.057 "compare": false, 00:13:05.057 "compare_and_write": false, 00:13:05.057 "abort": true, 00:13:05.057 "seek_hole": false, 00:13:05.057 "seek_data": false, 00:13:05.057 "copy": true, 00:13:05.057 "nvme_iov_md": false 00:13:05.057 }, 00:13:05.057 "memory_domains": [ 00:13:05.057 { 00:13:05.057 "dma_device_id": "system", 00:13:05.057 "dma_device_type": 1 00:13:05.057 }, 00:13:05.057 { 00:13:05.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.058 "dma_device_type": 2 00:13:05.058 } 00:13:05.058 ], 00:13:05.058 "driver_specific": {} 00:13:05.058 }' 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.058 21:49:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:05.058 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:05.316 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:05.316 "name": "BaseBdev2", 00:13:05.316 "aliases": [ 00:13:05.316 "1083ce95-42f4-11ef-9f7f-e9a656123a8b" 00:13:05.316 ], 00:13:05.316 "product_name": "Malloc disk", 00:13:05.316 "block_size": 512, 00:13:05.316 "num_blocks": 65536, 00:13:05.316 "uuid": "1083ce95-42f4-11ef-9f7f-e9a656123a8b", 00:13:05.316 "assigned_rate_limits": { 00:13:05.316 "rw_ios_per_sec": 0, 00:13:05.316 "rw_mbytes_per_sec": 0, 00:13:05.316 "r_mbytes_per_sec": 0, 00:13:05.317 "w_mbytes_per_sec": 0 00:13:05.317 }, 00:13:05.317 "claimed": true, 00:13:05.317 "claim_type": "exclusive_write", 00:13:05.317 "zoned": false, 00:13:05.317 "supported_io_types": { 00:13:05.317 "read": true, 00:13:05.317 "write": true, 00:13:05.317 "unmap": true, 00:13:05.317 "flush": true, 00:13:05.317 "reset": true, 00:13:05.317 "nvme_admin": false, 00:13:05.317 "nvme_io": false, 00:13:05.317 "nvme_io_md": false, 00:13:05.317 "write_zeroes": true, 00:13:05.317 "zcopy": true, 00:13:05.317 "get_zone_info": false, 00:13:05.317 "zone_management": false, 00:13:05.317 "zone_append": false, 00:13:05.317 "compare": false, 00:13:05.317 "compare_and_write": false, 00:13:05.317 "abort": true, 00:13:05.317 "seek_hole": false, 00:13:05.317 "seek_data": false, 00:13:05.317 "copy": true, 00:13:05.317 "nvme_iov_md": false 00:13:05.317 }, 00:13:05.317 "memory_domains": [ 00:13:05.317 { 00:13:05.317 "dma_device_id": "system", 00:13:05.317 "dma_device_type": 1 00:13:05.317 }, 00:13:05.317 { 00:13:05.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.317 "dma_device_type": 2 00:13:05.317 } 00:13:05.317 ], 00:13:05.317 "driver_specific": {} 00:13:05.317 }' 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.317 
21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:05.317 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:05.885 "name": "BaseBdev3", 00:13:05.885 "aliases": [ 00:13:05.885 "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b" 00:13:05.885 ], 00:13:05.885 "product_name": "Malloc disk", 00:13:05.885 "block_size": 512, 00:13:05.885 "num_blocks": 65536, 00:13:05.885 "uuid": "10f1ad7d-42f4-11ef-9f7f-e9a656123a8b", 00:13:05.885 "assigned_rate_limits": { 00:13:05.885 "rw_ios_per_sec": 0, 00:13:05.885 "rw_mbytes_per_sec": 0, 00:13:05.885 "r_mbytes_per_sec": 0, 00:13:05.885 "w_mbytes_per_sec": 0 00:13:05.885 }, 00:13:05.885 "claimed": true, 00:13:05.885 "claim_type": "exclusive_write", 00:13:05.885 "zoned": false, 00:13:05.885 "supported_io_types": { 00:13:05.885 "read": true, 00:13:05.885 "write": true, 00:13:05.885 "unmap": true, 00:13:05.885 "flush": true, 00:13:05.885 "reset": true, 00:13:05.885 "nvme_admin": false, 00:13:05.885 "nvme_io": false, 00:13:05.885 "nvme_io_md": false, 00:13:05.885 "write_zeroes": true, 00:13:05.885 "zcopy": true, 00:13:05.885 "get_zone_info": false, 00:13:05.885 "zone_management": false, 00:13:05.885 "zone_append": false, 00:13:05.885 "compare": false, 00:13:05.885 "compare_and_write": false, 00:13:05.885 "abort": true, 00:13:05.885 "seek_hole": false, 00:13:05.885 "seek_data": false, 00:13:05.885 "copy": true, 00:13:05.885 "nvme_iov_md": false 00:13:05.885 }, 00:13:05.885 "memory_domains": [ 00:13:05.885 { 00:13:05.885 "dma_device_id": "system", 00:13:05.885 "dma_device_type": 1 00:13:05.885 }, 00:13:05.885 { 00:13:05.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.885 "dma_device_type": 2 00:13:05.885 } 00:13:05.885 ], 00:13:05.885 "driver_specific": {} 00:13:05.885 }' 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:05.885 21:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.144 "name": "BaseBdev4", 00:13:06.144 "aliases": [ 00:13:06.144 "1165a563-42f4-11ef-9f7f-e9a656123a8b" 00:13:06.144 ], 00:13:06.144 "product_name": "Malloc disk", 00:13:06.144 "block_size": 512, 00:13:06.144 "num_blocks": 65536, 00:13:06.144 "uuid": "1165a563-42f4-11ef-9f7f-e9a656123a8b", 00:13:06.144 "assigned_rate_limits": { 00:13:06.144 "rw_ios_per_sec": 0, 00:13:06.144 "rw_mbytes_per_sec": 0, 00:13:06.144 "r_mbytes_per_sec": 0, 00:13:06.144 "w_mbytes_per_sec": 0 00:13:06.144 }, 00:13:06.144 "claimed": true, 00:13:06.144 "claim_type": "exclusive_write", 00:13:06.144 "zoned": false, 00:13:06.144 "supported_io_types": { 00:13:06.144 "read": true, 00:13:06.144 "write": true, 00:13:06.144 "unmap": true, 00:13:06.144 "flush": true, 00:13:06.144 "reset": true, 00:13:06.144 "nvme_admin": false, 00:13:06.144 "nvme_io": false, 00:13:06.144 "nvme_io_md": false, 00:13:06.144 "write_zeroes": true, 00:13:06.144 "zcopy": true, 00:13:06.144 "get_zone_info": false, 00:13:06.144 "zone_management": false, 00:13:06.144 "zone_append": false, 00:13:06.144 "compare": false, 00:13:06.144 "compare_and_write": false, 00:13:06.144 "abort": true, 00:13:06.144 "seek_hole": false, 00:13:06.144 "seek_data": false, 00:13:06.144 "copy": true, 00:13:06.144 "nvme_iov_md": false 00:13:06.144 }, 00:13:06.144 "memory_domains": [ 00:13:06.144 { 00:13:06.144 "dma_device_id": "system", 00:13:06.144 "dma_device_type": 1 00:13:06.144 }, 00:13:06.144 { 00:13:06.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.144 "dma_device_type": 2 00:13:06.144 } 00:13:06.144 ], 00:13:06.144 "driver_specific": {} 00:13:06.144 }' 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.144 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.145 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:06.404 
[2024-07-15 21:49:21.361405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:06.404 [2024-07-15 21:49:21.361439] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.404 [2024-07-15 21:49:21.361491] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.404 [2024-07-15 21:49:21.361504] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.404 [2024-07-15 21:49:21.361508] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dcbb7e34f00 name Existed_Raid, state offline 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58359 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 58359 ']' 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 58359 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 58359 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:13:06.404 killing process with pid 58359 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 58359' 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 58359 00:13:06.404 [2024-07-15 21:49:21.387850] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 58359 00:13:06.404 [2024-07-15 21:49:21.412523] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:06.404 00:13:06.404 real 0m25.476s 00:13:06.404 user 0m46.281s 00:13:06.404 sys 0m3.842s 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:06.404 21:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.404 ************************************ 00:13:06.404 END TEST raid_state_function_test 00:13:06.404 ************************************ 00:13:06.664 21:49:21 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:13:06.664 21:49:21 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:06.664 21:49:21 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:13:06.664 21:49:21 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:06.664 21:49:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.664 ************************************ 00:13:06.664 START TEST raid_state_function_test_sb 00:13:06.664 ************************************ 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # 
raid_state_function_test raid0 4 true 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59170 00:13:06.664 Process raid pid: 59170 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59170' 00:13:06.664 
21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59170 /var/tmp/spdk-raid.sock 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 59170 ']' 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:06.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:06.664 21:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.664 [2024-07-15 21:49:21.656813] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:06.664 [2024-07-15 21:49:21.657157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:07.232 EAL: TSC is not safe to use in SMP mode 00:13:07.232 EAL: TSC is not invariant 00:13:07.232 [2024-07-15 21:49:22.214086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.232 [2024-07-15 21:49:22.295642] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
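(The EAL and app-start notices around here are the bdev_svc RPC target booting. In outline, the launch traced at bdev_raid.sh@243-246 looks like the sketch below; the paths and flags are exactly as logged, the `$!` backgrounding is an assumption about the script's plumbing, and waitforlisten is the autotest_common.sh helper seen in the trace.)

    spdk=/home/vagrant/spdk_repo/spdk
    rpc_server=/var/tmp/spdk-raid.sock
    # Start the minimal bdev application with raid debug logging (-L bdev_raid),
    # listening for JSON-RPC on a dedicated socket.
    "$spdk"/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 -L bdev_raid &
    raid_pid=$!                                  # 59170 in this run
    waitforlisten "$raid_pid" "$rpc_server"      # block until the socket accepts RPCs
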
00:13:07.232 [2024-07-15 21:49:22.298261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.232 [2024-07-15 21:49:22.299140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.232 [2024-07-15 21:49:22.299153] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.491 21:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:07.491 21:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:13:07.491 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:07.750 [2024-07-15 21:49:22.878712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.750 [2024-07-15 21:49:22.878789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.750 [2024-07-15 21:49:22.878809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.750 [2024-07-15 21:49:22.878817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.750 [2024-07-15 21:49:22.878819] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.750 [2024-07-15 21:49:22.878826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.750 [2024-07-15 21:49:22.878829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:07.750 [2024-07-15 21:49:22.878835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.750 21:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.009 21:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.009 "name": "Existed_Raid", 00:13:08.009 "uuid": 
"18e37177-42f4-11ef-9f7f-e9a656123a8b", 00:13:08.009 "strip_size_kb": 64, 00:13:08.009 "state": "configuring", 00:13:08.009 "raid_level": "raid0", 00:13:08.009 "superblock": true, 00:13:08.009 "num_base_bdevs": 4, 00:13:08.009 "num_base_bdevs_discovered": 0, 00:13:08.009 "num_base_bdevs_operational": 4, 00:13:08.009 "base_bdevs_list": [ 00:13:08.009 { 00:13:08.009 "name": "BaseBdev1", 00:13:08.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.010 "is_configured": false, 00:13:08.010 "data_offset": 0, 00:13:08.010 "data_size": 0 00:13:08.010 }, 00:13:08.010 { 00:13:08.010 "name": "BaseBdev2", 00:13:08.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.010 "is_configured": false, 00:13:08.010 "data_offset": 0, 00:13:08.010 "data_size": 0 00:13:08.010 }, 00:13:08.010 { 00:13:08.010 "name": "BaseBdev3", 00:13:08.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.010 "is_configured": false, 00:13:08.010 "data_offset": 0, 00:13:08.010 "data_size": 0 00:13:08.010 }, 00:13:08.010 { 00:13:08.010 "name": "BaseBdev4", 00:13:08.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.010 "is_configured": false, 00:13:08.010 "data_offset": 0, 00:13:08.010 "data_size": 0 00:13:08.010 } 00:13:08.010 ] 00:13:08.010 }' 00:13:08.010 21:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.010 21:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.267 21:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:08.525 [2024-07-15 21:49:23.634742] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.525 [2024-07-15 21:49:23.634765] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30535ea34500 name Existed_Raid, state configuring 00:13:08.525 21:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:08.783 [2024-07-15 21:49:23.842753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:08.783 [2024-07-15 21:49:23.842815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:08.783 [2024-07-15 21:49:23.842819] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:08.783 [2024-07-15 21:49:23.842849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:08.783 [2024-07-15 21:49:23.842852] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:08.783 [2024-07-15 21:49:23.842859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:08.783 [2024-07-15 21:49:23.842861] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:08.783 [2024-07-15 21:49:23.842868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:08.783 21:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.042 [2024-07-15 21:49:24.099737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:13:09.042 BaseBdev1 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:09.042 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.301 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.562 [ 00:13:09.562 { 00:13:09.562 "name": "BaseBdev1", 00:13:09.562 "aliases": [ 00:13:09.562 "199d9d4c-42f4-11ef-9f7f-e9a656123a8b" 00:13:09.562 ], 00:13:09.562 "product_name": "Malloc disk", 00:13:09.562 "block_size": 512, 00:13:09.562 "num_blocks": 65536, 00:13:09.562 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:09.562 "assigned_rate_limits": { 00:13:09.562 "rw_ios_per_sec": 0, 00:13:09.562 "rw_mbytes_per_sec": 0, 00:13:09.562 "r_mbytes_per_sec": 0, 00:13:09.562 "w_mbytes_per_sec": 0 00:13:09.562 }, 00:13:09.562 "claimed": true, 00:13:09.562 "claim_type": "exclusive_write", 00:13:09.562 "zoned": false, 00:13:09.562 "supported_io_types": { 00:13:09.562 "read": true, 00:13:09.562 "write": true, 00:13:09.562 "unmap": true, 00:13:09.562 "flush": true, 00:13:09.562 "reset": true, 00:13:09.562 "nvme_admin": false, 00:13:09.562 "nvme_io": false, 00:13:09.562 "nvme_io_md": false, 00:13:09.562 "write_zeroes": true, 00:13:09.562 "zcopy": true, 00:13:09.562 "get_zone_info": false, 00:13:09.562 "zone_management": false, 00:13:09.562 "zone_append": false, 00:13:09.562 "compare": false, 00:13:09.562 "compare_and_write": false, 00:13:09.562 "abort": true, 00:13:09.562 "seek_hole": false, 00:13:09.562 "seek_data": false, 00:13:09.562 "copy": true, 00:13:09.562 "nvme_iov_md": false 00:13:09.562 }, 00:13:09.562 "memory_domains": [ 00:13:09.562 { 00:13:09.562 "dma_device_id": "system", 00:13:09.562 "dma_device_type": 1 00:13:09.562 }, 00:13:09.562 { 00:13:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.562 "dma_device_type": 2 00:13:09.562 } 00:13:09.562 ], 00:13:09.562 "driver_specific": {} 00:13:09.562 } 00:13:09.562 ] 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.562 21:49:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.562 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.822 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.822 "name": "Existed_Raid", 00:13:09.822 "uuid": "19768b59-42f4-11ef-9f7f-e9a656123a8b", 00:13:09.822 "strip_size_kb": 64, 00:13:09.822 "state": "configuring", 00:13:09.822 "raid_level": "raid0", 00:13:09.822 "superblock": true, 00:13:09.822 "num_base_bdevs": 4, 00:13:09.822 "num_base_bdevs_discovered": 1, 00:13:09.822 "num_base_bdevs_operational": 4, 00:13:09.822 "base_bdevs_list": [ 00:13:09.822 { 00:13:09.822 "name": "BaseBdev1", 00:13:09.822 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:09.822 "is_configured": true, 00:13:09.822 "data_offset": 2048, 00:13:09.822 "data_size": 63488 00:13:09.822 }, 00:13:09.822 { 00:13:09.822 "name": "BaseBdev2", 00:13:09.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.822 "is_configured": false, 00:13:09.822 "data_offset": 0, 00:13:09.822 "data_size": 0 00:13:09.822 }, 00:13:09.822 { 00:13:09.822 "name": "BaseBdev3", 00:13:09.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.822 "is_configured": false, 00:13:09.822 "data_offset": 0, 00:13:09.822 "data_size": 0 00:13:09.822 }, 00:13:09.822 { 00:13:09.822 "name": "BaseBdev4", 00:13:09.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.822 "is_configured": false, 00:13:09.822 "data_offset": 0, 00:13:09.822 "data_size": 0 00:13:09.822 } 00:13:09.822 ] 00:13:09.822 }' 00:13:09.822 21:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.822 21:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.081 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:10.339 [2024-07-15 21:49:25.398844] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.340 [2024-07-15 21:49:25.398897] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30535ea34500 name Existed_Raid, state configuring 00:13:10.340 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:10.598 [2024-07-15 21:49:25.622866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.598 [2024-07-15 21:49:25.623798] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.598 [2024-07-15 21:49:25.623847] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.598 [2024-07-15 21:49:25.623852] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.598 [2024-07-15 21:49:25.623860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.598 [2024-07-15 21:49:25.623864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.598 [2024-07-15 21:49:25.623871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.598 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.856 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.857 "name": "Existed_Raid", 00:13:10.857 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:10.857 "strip_size_kb": 64, 00:13:10.857 "state": "configuring", 00:13:10.857 "raid_level": "raid0", 00:13:10.857 "superblock": true, 00:13:10.857 "num_base_bdevs": 4, 00:13:10.857 "num_base_bdevs_discovered": 1, 00:13:10.857 "num_base_bdevs_operational": 4, 00:13:10.857 "base_bdevs_list": [ 00:13:10.857 { 00:13:10.857 "name": "BaseBdev1", 00:13:10.857 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:10.857 "is_configured": true, 00:13:10.857 "data_offset": 2048, 00:13:10.857 "data_size": 63488 00:13:10.857 }, 00:13:10.857 { 00:13:10.857 "name": "BaseBdev2", 00:13:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.857 "is_configured": false, 00:13:10.857 "data_offset": 0, 00:13:10.857 "data_size": 0 00:13:10.857 }, 00:13:10.857 { 00:13:10.857 "name": "BaseBdev3", 00:13:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.857 "is_configured": false, 00:13:10.857 "data_offset": 0, 00:13:10.857 "data_size": 0 00:13:10.857 }, 00:13:10.857 { 00:13:10.857 "name": "BaseBdev4", 
00:13:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.857 "is_configured": false, 00:13:10.857 "data_offset": 0, 00:13:10.857 "data_size": 0 00:13:10.857 } 00:13:10.857 ] 00:13:10.857 }' 00:13:10.857 21:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.857 21:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.115 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.373 [2024-07-15 21:49:26.391053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.373 BaseBdev2 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:11.373 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.632 21:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.891 [ 00:13:11.891 { 00:13:11.891 "name": "BaseBdev2", 00:13:11.891 "aliases": [ 00:13:11.891 "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b" 00:13:11.891 ], 00:13:11.891 "product_name": "Malloc disk", 00:13:11.891 "block_size": 512, 00:13:11.891 "num_blocks": 65536, 00:13:11.891 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:11.891 "assigned_rate_limits": { 00:13:11.891 "rw_ios_per_sec": 0, 00:13:11.891 "rw_mbytes_per_sec": 0, 00:13:11.891 "r_mbytes_per_sec": 0, 00:13:11.891 "w_mbytes_per_sec": 0 00:13:11.891 }, 00:13:11.891 "claimed": true, 00:13:11.891 "claim_type": "exclusive_write", 00:13:11.891 "zoned": false, 00:13:11.891 "supported_io_types": { 00:13:11.891 "read": true, 00:13:11.891 "write": true, 00:13:11.891 "unmap": true, 00:13:11.891 "flush": true, 00:13:11.891 "reset": true, 00:13:11.891 "nvme_admin": false, 00:13:11.891 "nvme_io": false, 00:13:11.891 "nvme_io_md": false, 00:13:11.891 "write_zeroes": true, 00:13:11.891 "zcopy": true, 00:13:11.891 "get_zone_info": false, 00:13:11.891 "zone_management": false, 00:13:11.891 "zone_append": false, 00:13:11.891 "compare": false, 00:13:11.891 "compare_and_write": false, 00:13:11.891 "abort": true, 00:13:11.891 "seek_hole": false, 00:13:11.891 "seek_data": false, 00:13:11.891 "copy": true, 00:13:11.891 "nvme_iov_md": false 00:13:11.891 }, 00:13:11.891 "memory_domains": [ 00:13:11.891 { 00:13:11.891 "dma_device_id": "system", 00:13:11.891 "dma_device_type": 1 00:13:11.891 }, 00:13:11.891 { 00:13:11.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.891 "dma_device_type": 2 00:13:11.891 } 00:13:11.891 ], 00:13:11.891 "driver_specific": {} 00:13:11.891 } 00:13:11.891 ] 00:13:11.891 21:49:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.891 21:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.149 21:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.149 "name": "Existed_Raid", 00:13:12.149 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:12.149 "strip_size_kb": 64, 00:13:12.149 "state": "configuring", 00:13:12.149 "raid_level": "raid0", 00:13:12.149 "superblock": true, 00:13:12.149 "num_base_bdevs": 4, 00:13:12.149 "num_base_bdevs_discovered": 2, 00:13:12.149 "num_base_bdevs_operational": 4, 00:13:12.149 "base_bdevs_list": [ 00:13:12.149 { 00:13:12.149 "name": "BaseBdev1", 00:13:12.149 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:12.149 "is_configured": true, 00:13:12.149 "data_offset": 2048, 00:13:12.149 "data_size": 63488 00:13:12.149 }, 00:13:12.149 { 00:13:12.149 "name": "BaseBdev2", 00:13:12.149 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:12.149 "is_configured": true, 00:13:12.149 "data_offset": 2048, 00:13:12.149 "data_size": 63488 00:13:12.149 }, 00:13:12.149 { 00:13:12.149 "name": "BaseBdev3", 00:13:12.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.149 "is_configured": false, 00:13:12.149 "data_offset": 0, 00:13:12.149 "data_size": 0 00:13:12.149 }, 00:13:12.149 { 00:13:12.149 "name": "BaseBdev4", 00:13:12.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.149 "is_configured": false, 00:13:12.149 "data_offset": 0, 00:13:12.149 "data_size": 0 00:13:12.149 } 00:13:12.149 ] 00:13:12.149 }' 00:13:12.149 21:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.149 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.408 21:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:12.666 [2024-07-15 21:49:27.707257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.666 BaseBdev3 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:12.666 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.924 21:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:13.183 [ 00:13:13.183 { 00:13:13.183 "name": "BaseBdev3", 00:13:13.183 "aliases": [ 00:13:13.183 "1bc43396-42f4-11ef-9f7f-e9a656123a8b" 00:13:13.183 ], 00:13:13.183 "product_name": "Malloc disk", 00:13:13.183 "block_size": 512, 00:13:13.183 "num_blocks": 65536, 00:13:13.183 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:13.183 "assigned_rate_limits": { 00:13:13.183 "rw_ios_per_sec": 0, 00:13:13.183 "rw_mbytes_per_sec": 0, 00:13:13.183 "r_mbytes_per_sec": 0, 00:13:13.183 "w_mbytes_per_sec": 0 00:13:13.183 }, 00:13:13.183 "claimed": true, 00:13:13.183 "claim_type": "exclusive_write", 00:13:13.183 "zoned": false, 00:13:13.183 "supported_io_types": { 00:13:13.183 "read": true, 00:13:13.183 "write": true, 00:13:13.183 "unmap": true, 00:13:13.183 "flush": true, 00:13:13.183 "reset": true, 00:13:13.183 "nvme_admin": false, 00:13:13.183 "nvme_io": false, 00:13:13.183 "nvme_io_md": false, 00:13:13.183 "write_zeroes": true, 00:13:13.183 "zcopy": true, 00:13:13.183 "get_zone_info": false, 00:13:13.183 "zone_management": false, 00:13:13.183 "zone_append": false, 00:13:13.183 "compare": false, 00:13:13.183 "compare_and_write": false, 00:13:13.183 "abort": true, 00:13:13.183 "seek_hole": false, 00:13:13.183 "seek_data": false, 00:13:13.183 "copy": true, 00:13:13.183 "nvme_iov_md": false 00:13:13.183 }, 00:13:13.183 "memory_domains": [ 00:13:13.183 { 00:13:13.183 "dma_device_id": "system", 00:13:13.183 "dma_device_type": 1 00:13:13.183 }, 00:13:13.183 { 00:13:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.183 "dma_device_type": 2 00:13:13.183 } 00:13:13.183 ], 00:13:13.183 "driver_specific": {} 00:13:13.183 } 00:13:13.183 ] 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.183 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.442 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.442 "name": "Existed_Raid", 00:13:13.442 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:13.442 "strip_size_kb": 64, 00:13:13.442 "state": "configuring", 00:13:13.442 "raid_level": "raid0", 00:13:13.442 "superblock": true, 00:13:13.442 "num_base_bdevs": 4, 00:13:13.442 "num_base_bdevs_discovered": 3, 00:13:13.442 "num_base_bdevs_operational": 4, 00:13:13.442 "base_bdevs_list": [ 00:13:13.442 { 00:13:13.442 "name": "BaseBdev1", 00:13:13.442 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:13.442 "is_configured": true, 00:13:13.442 "data_offset": 2048, 00:13:13.442 "data_size": 63488 00:13:13.442 }, 00:13:13.442 { 00:13:13.442 "name": "BaseBdev2", 00:13:13.442 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:13.442 "is_configured": true, 00:13:13.442 "data_offset": 2048, 00:13:13.442 "data_size": 63488 00:13:13.442 }, 00:13:13.442 { 00:13:13.442 "name": "BaseBdev3", 00:13:13.442 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:13.442 "is_configured": true, 00:13:13.442 "data_offset": 2048, 00:13:13.442 "data_size": 63488 00:13:13.442 }, 00:13:13.442 { 00:13:13.442 "name": "BaseBdev4", 00:13:13.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.442 "is_configured": false, 00:13:13.442 "data_offset": 0, 00:13:13.442 "data_size": 0 00:13:13.442 } 00:13:13.442 ] 00:13:13.442 }' 00:13:13.442 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.442 21:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.701 21:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:13.960 [2024-07-15 21:49:29.019374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.960 [2024-07-15 21:49:29.019457] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30535ea34a00 00:13:13.960 [2024-07-15 21:49:29.019463] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.960 [2024-07-15 
21:49:29.019482] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30535ea97e20 00:13:13.960 [2024-07-15 21:49:29.019536] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30535ea34a00 00:13:13.960 [2024-07-15 21:49:29.019540] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x30535ea34a00 00:13:13.960 [2024-07-15 21:49:29.019578] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.960 BaseBdev4 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:13.960 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:14.219 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:14.478 [ 00:13:14.478 { 00:13:14.478 "name": "BaseBdev4", 00:13:14.478 "aliases": [ 00:13:14.478 "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b" 00:13:14.478 ], 00:13:14.478 "product_name": "Malloc disk", 00:13:14.478 "block_size": 512, 00:13:14.478 "num_blocks": 65536, 00:13:14.478 "uuid": "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.478 "assigned_rate_limits": { 00:13:14.478 "rw_ios_per_sec": 0, 00:13:14.478 "rw_mbytes_per_sec": 0, 00:13:14.478 "r_mbytes_per_sec": 0, 00:13:14.478 "w_mbytes_per_sec": 0 00:13:14.478 }, 00:13:14.478 "claimed": true, 00:13:14.478 "claim_type": "exclusive_write", 00:13:14.478 "zoned": false, 00:13:14.478 "supported_io_types": { 00:13:14.478 "read": true, 00:13:14.478 "write": true, 00:13:14.478 "unmap": true, 00:13:14.478 "flush": true, 00:13:14.478 "reset": true, 00:13:14.478 "nvme_admin": false, 00:13:14.478 "nvme_io": false, 00:13:14.478 "nvme_io_md": false, 00:13:14.478 "write_zeroes": true, 00:13:14.478 "zcopy": true, 00:13:14.478 "get_zone_info": false, 00:13:14.478 "zone_management": false, 00:13:14.478 "zone_append": false, 00:13:14.478 "compare": false, 00:13:14.478 "compare_and_write": false, 00:13:14.478 "abort": true, 00:13:14.478 "seek_hole": false, 00:13:14.478 "seek_data": false, 00:13:14.478 "copy": true, 00:13:14.478 "nvme_iov_md": false 00:13:14.478 }, 00:13:14.478 "memory_domains": [ 00:13:14.478 { 00:13:14.478 "dma_device_id": "system", 00:13:14.478 "dma_device_type": 1 00:13:14.478 }, 00:13:14.478 { 00:13:14.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.478 "dma_device_type": 2 00:13:14.478 } 00:13:14.478 ], 00:13:14.478 "driver_specific": {} 00:13:14.478 } 00:13:14.478 ] 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.478 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.737 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.737 "name": "Existed_Raid", 00:13:14.737 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.737 "strip_size_kb": 64, 00:13:14.737 "state": "online", 00:13:14.737 "raid_level": "raid0", 00:13:14.737 "superblock": true, 00:13:14.737 "num_base_bdevs": 4, 00:13:14.737 "num_base_bdevs_discovered": 4, 00:13:14.737 "num_base_bdevs_operational": 4, 00:13:14.737 "base_bdevs_list": [ 00:13:14.737 { 00:13:14.737 "name": "BaseBdev1", 00:13:14.737 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.737 "is_configured": true, 00:13:14.737 "data_offset": 2048, 00:13:14.737 "data_size": 63488 00:13:14.737 }, 00:13:14.737 { 00:13:14.737 "name": "BaseBdev2", 00:13:14.737 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.737 "is_configured": true, 00:13:14.737 "data_offset": 2048, 00:13:14.737 "data_size": 63488 00:13:14.737 }, 00:13:14.737 { 00:13:14.737 "name": "BaseBdev3", 00:13:14.737 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.737 "is_configured": true, 00:13:14.737 "data_offset": 2048, 00:13:14.737 "data_size": 63488 00:13:14.737 }, 00:13:14.737 { 00:13:14.737 "name": "BaseBdev4", 00:13:14.737 "uuid": "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b", 00:13:14.737 "is_configured": true, 00:13:14.737 "data_offset": 2048, 00:13:14.737 "data_size": 63488 00:13:14.737 } 00:13:14.737 ] 00:13:14.737 }' 00:13:14.737 21:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.737 21:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:14.996 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:15.254 [2024-07-15 21:49:30.371478] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.254 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:15.254 "name": "Existed_Raid", 00:13:15.254 "aliases": [ 00:13:15.254 "1a862ae9-42f4-11ef-9f7f-e9a656123a8b" 00:13:15.254 ], 00:13:15.254 "product_name": "Raid Volume", 00:13:15.254 "block_size": 512, 00:13:15.254 "num_blocks": 253952, 00:13:15.254 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.254 "assigned_rate_limits": { 00:13:15.254 "rw_ios_per_sec": 0, 00:13:15.254 "rw_mbytes_per_sec": 0, 00:13:15.254 "r_mbytes_per_sec": 0, 00:13:15.254 "w_mbytes_per_sec": 0 00:13:15.254 }, 00:13:15.254 "claimed": false, 00:13:15.254 "zoned": false, 00:13:15.254 "supported_io_types": { 00:13:15.254 "read": true, 00:13:15.254 "write": true, 00:13:15.254 "unmap": true, 00:13:15.254 "flush": true, 00:13:15.254 "reset": true, 00:13:15.254 "nvme_admin": false, 00:13:15.254 "nvme_io": false, 00:13:15.254 "nvme_io_md": false, 00:13:15.254 "write_zeroes": true, 00:13:15.254 "zcopy": false, 00:13:15.254 "get_zone_info": false, 00:13:15.254 "zone_management": false, 00:13:15.254 "zone_append": false, 00:13:15.254 "compare": false, 00:13:15.254 "compare_and_write": false, 00:13:15.254 "abort": false, 00:13:15.254 "seek_hole": false, 00:13:15.254 "seek_data": false, 00:13:15.254 "copy": false, 00:13:15.254 "nvme_iov_md": false 00:13:15.254 }, 00:13:15.254 "memory_domains": [ 00:13:15.254 { 00:13:15.254 "dma_device_id": "system", 00:13:15.254 "dma_device_type": 1 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.254 "dma_device_type": 2 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "system", 00:13:15.254 "dma_device_type": 1 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.254 "dma_device_type": 2 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "system", 00:13:15.254 "dma_device_type": 1 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.254 "dma_device_type": 2 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "system", 00:13:15.254 "dma_device_type": 1 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.254 "dma_device_type": 2 00:13:15.254 } 00:13:15.254 ], 00:13:15.254 "driver_specific": { 00:13:15.254 "raid": { 00:13:15.254 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.254 "strip_size_kb": 64, 00:13:15.254 "state": "online", 00:13:15.254 "raid_level": "raid0", 00:13:15.254 "superblock": true, 00:13:15.254 "num_base_bdevs": 4, 00:13:15.254 "num_base_bdevs_discovered": 4, 00:13:15.254 "num_base_bdevs_operational": 4, 00:13:15.254 "base_bdevs_list": [ 00:13:15.254 { 00:13:15.254 "name": "BaseBdev1", 00:13:15.254 "uuid": 
"199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.254 "is_configured": true, 00:13:15.254 "data_offset": 2048, 00:13:15.254 "data_size": 63488 00:13:15.254 }, 00:13:15.254 { 00:13:15.254 "name": "BaseBdev2", 00:13:15.254 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.254 "is_configured": true, 00:13:15.254 "data_offset": 2048, 00:13:15.254 "data_size": 63488 00:13:15.254 }, 00:13:15.255 { 00:13:15.255 "name": "BaseBdev3", 00:13:15.255 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.255 "is_configured": true, 00:13:15.255 "data_offset": 2048, 00:13:15.255 "data_size": 63488 00:13:15.255 }, 00:13:15.255 { 00:13:15.255 "name": "BaseBdev4", 00:13:15.255 "uuid": "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.255 "is_configured": true, 00:13:15.255 "data_offset": 2048, 00:13:15.255 "data_size": 63488 00:13:15.255 } 00:13:15.255 ] 00:13:15.255 } 00:13:15.255 } 00:13:15.255 }' 00:13:15.255 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.255 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:15.255 BaseBdev2 00:13:15.255 BaseBdev3 00:13:15.255 BaseBdev4' 00:13:15.255 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:15.255 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:15.255 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.540 "name": "BaseBdev1", 00:13:15.540 "aliases": [ 00:13:15.540 "199d9d4c-42f4-11ef-9f7f-e9a656123a8b" 00:13:15.540 ], 00:13:15.540 "product_name": "Malloc disk", 00:13:15.540 "block_size": 512, 00:13:15.540 "num_blocks": 65536, 00:13:15.540 "uuid": "199d9d4c-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.540 "assigned_rate_limits": { 00:13:15.540 "rw_ios_per_sec": 0, 00:13:15.540 "rw_mbytes_per_sec": 0, 00:13:15.540 "r_mbytes_per_sec": 0, 00:13:15.540 "w_mbytes_per_sec": 0 00:13:15.540 }, 00:13:15.540 "claimed": true, 00:13:15.540 "claim_type": "exclusive_write", 00:13:15.540 "zoned": false, 00:13:15.540 "supported_io_types": { 00:13:15.540 "read": true, 00:13:15.540 "write": true, 00:13:15.540 "unmap": true, 00:13:15.540 "flush": true, 00:13:15.540 "reset": true, 00:13:15.540 "nvme_admin": false, 00:13:15.540 "nvme_io": false, 00:13:15.540 "nvme_io_md": false, 00:13:15.540 "write_zeroes": true, 00:13:15.540 "zcopy": true, 00:13:15.540 "get_zone_info": false, 00:13:15.540 "zone_management": false, 00:13:15.540 "zone_append": false, 00:13:15.540 "compare": false, 00:13:15.540 "compare_and_write": false, 00:13:15.540 "abort": true, 00:13:15.540 "seek_hole": false, 00:13:15.540 "seek_data": false, 00:13:15.540 "copy": true, 00:13:15.540 "nvme_iov_md": false 00:13:15.540 }, 00:13:15.540 "memory_domains": [ 00:13:15.540 { 00:13:15.540 "dma_device_id": "system", 00:13:15.540 "dma_device_type": 1 00:13:15.540 }, 00:13:15.540 { 00:13:15.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.540 "dma_device_type": 2 00:13:15.540 } 00:13:15.540 ], 00:13:15.540 "driver_specific": {} 00:13:15.540 }' 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.540 21:49:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:15.540 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.799 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.799 "name": "BaseBdev2", 00:13:15.799 "aliases": [ 00:13:15.799 "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b" 00:13:15.799 ], 00:13:15.799 "product_name": "Malloc disk", 00:13:15.799 "block_size": 512, 00:13:15.799 "num_blocks": 65536, 00:13:15.799 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:15.799 "assigned_rate_limits": { 00:13:15.799 "rw_ios_per_sec": 0, 00:13:15.799 "rw_mbytes_per_sec": 0, 00:13:15.799 "r_mbytes_per_sec": 0, 00:13:15.799 "w_mbytes_per_sec": 0 00:13:15.799 }, 00:13:15.799 "claimed": true, 00:13:15.799 "claim_type": "exclusive_write", 00:13:15.799 "zoned": false, 00:13:15.799 "supported_io_types": { 00:13:15.799 "read": true, 00:13:15.799 "write": true, 00:13:15.799 "unmap": true, 00:13:15.799 "flush": true, 00:13:15.799 "reset": true, 00:13:15.799 "nvme_admin": false, 00:13:15.799 "nvme_io": false, 00:13:15.799 "nvme_io_md": false, 00:13:15.799 "write_zeroes": true, 00:13:15.799 "zcopy": true, 00:13:15.799 "get_zone_info": false, 00:13:15.799 "zone_management": false, 00:13:15.799 "zone_append": false, 00:13:15.799 "compare": false, 00:13:15.799 "compare_and_write": false, 00:13:15.799 "abort": true, 00:13:15.799 "seek_hole": false, 00:13:15.799 "seek_data": false, 00:13:15.799 "copy": true, 00:13:15.799 "nvme_iov_md": false 00:13:15.799 }, 00:13:15.799 "memory_domains": [ 00:13:15.799 { 00:13:15.799 "dma_device_id": "system", 00:13:15.799 "dma_device_type": 1 00:13:15.799 }, 00:13:15.799 { 00:13:15.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.799 "dma_device_type": 2 00:13:15.799 } 00:13:15.799 ], 00:13:15.799 "driver_specific": {} 00:13:15.799 }' 00:13:15.799 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.799 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.057 21:49:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.057 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.057 21:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.057 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.058 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:16.058 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:16.058 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:16.316 "name": "BaseBdev3", 00:13:16.316 "aliases": [ 00:13:16.316 "1bc43396-42f4-11ef-9f7f-e9a656123a8b" 00:13:16.316 ], 00:13:16.316 "product_name": "Malloc disk", 00:13:16.316 "block_size": 512, 00:13:16.316 "num_blocks": 65536, 00:13:16.316 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:16.316 "assigned_rate_limits": { 00:13:16.316 "rw_ios_per_sec": 0, 00:13:16.316 "rw_mbytes_per_sec": 0, 00:13:16.316 "r_mbytes_per_sec": 0, 00:13:16.316 "w_mbytes_per_sec": 0 00:13:16.316 }, 00:13:16.316 "claimed": true, 00:13:16.316 "claim_type": "exclusive_write", 00:13:16.316 "zoned": false, 00:13:16.316 "supported_io_types": { 00:13:16.316 "read": true, 00:13:16.316 "write": true, 00:13:16.316 "unmap": true, 00:13:16.316 "flush": true, 00:13:16.316 "reset": true, 00:13:16.316 "nvme_admin": false, 00:13:16.316 "nvme_io": false, 00:13:16.316 "nvme_io_md": false, 00:13:16.316 "write_zeroes": true, 00:13:16.316 "zcopy": true, 00:13:16.316 "get_zone_info": false, 00:13:16.316 "zone_management": false, 00:13:16.316 "zone_append": false, 00:13:16.316 "compare": false, 00:13:16.316 "compare_and_write": false, 00:13:16.316 "abort": true, 00:13:16.316 "seek_hole": false, 00:13:16.316 "seek_data": false, 00:13:16.316 "copy": true, 00:13:16.316 "nvme_iov_md": false 00:13:16.316 }, 00:13:16.316 "memory_domains": [ 00:13:16.316 { 00:13:16.316 "dma_device_id": "system", 00:13:16.316 "dma_device_type": 1 00:13:16.316 }, 00:13:16.316 { 00:13:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.316 "dma_device_type": 2 00:13:16.316 } 00:13:16.316 ], 00:13:16.316 "driver_specific": {} 00:13:16.316 }' 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
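[Editor's note: the trace above (bdev_raid.sh lines 196-208) loops over every configured base bdev of the raid volume and checks its geometry: block size 512, no metadata, no interleave, no DIF. A minimal sketch of that loop, reconstructed from the commands in this trace; the socket path and jq filters are taken verbatim from the log, while the exact variable layout is ours, not the script's:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Collect the names of all configured base bdevs of the raid volume.
    base_bdev_names=$($rpc_py bdev_get_bdevs -b Existed_Raid | jq -r \
        '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

    for name in $base_bdev_names; do
        base_bdev_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
        # Each base bdev is expected to be a 512-byte-block malloc disk
        # with no metadata and no DIF, matching the [[ ... ]] checks traced above.
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done

Under the harness's error handling, any failed [[ ]] comparison fails the test, which is why each jq probe appears twice in the trace: once to print, once inside the comparison.]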
00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:16.316 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:16.575 "name": "BaseBdev4", 00:13:16.575 "aliases": [ 00:13:16.575 "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b" 00:13:16.575 ], 00:13:16.575 "product_name": "Malloc disk", 00:13:16.575 "block_size": 512, 00:13:16.575 "num_blocks": 65536, 00:13:16.575 "uuid": "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b", 00:13:16.575 "assigned_rate_limits": { 00:13:16.575 "rw_ios_per_sec": 0, 00:13:16.575 "rw_mbytes_per_sec": 0, 00:13:16.575 "r_mbytes_per_sec": 0, 00:13:16.575 "w_mbytes_per_sec": 0 00:13:16.575 }, 00:13:16.575 "claimed": true, 00:13:16.575 "claim_type": "exclusive_write", 00:13:16.575 "zoned": false, 00:13:16.575 "supported_io_types": { 00:13:16.575 "read": true, 00:13:16.575 "write": true, 00:13:16.575 "unmap": true, 00:13:16.575 "flush": true, 00:13:16.575 "reset": true, 00:13:16.575 "nvme_admin": false, 00:13:16.575 "nvme_io": false, 00:13:16.575 "nvme_io_md": false, 00:13:16.575 "write_zeroes": true, 00:13:16.575 "zcopy": true, 00:13:16.575 "get_zone_info": false, 00:13:16.575 "zone_management": false, 00:13:16.575 "zone_append": false, 00:13:16.575 "compare": false, 00:13:16.575 "compare_and_write": false, 00:13:16.575 "abort": true, 00:13:16.575 "seek_hole": false, 00:13:16.575 "seek_data": false, 00:13:16.575 "copy": true, 00:13:16.575 "nvme_iov_md": false 00:13:16.575 }, 00:13:16.575 "memory_domains": [ 00:13:16.575 { 00:13:16.575 "dma_device_id": "system", 00:13:16.575 "dma_device_type": 1 00:13:16.575 }, 00:13:16.575 { 00:13:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.575 "dma_device_type": 2 00:13:16.575 } 00:13:16.575 ], 00:13:16.575 "driver_specific": {} 00:13:16.575 }' 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.575 21:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.575 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.576 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.576 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.576 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.576 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:16.834 [2024-07-15 21:49:31.939672] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.834 [2024-07-15 21:49:31.939698] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.834 [2024-07-15 21:49:31.939727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.834 21:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.094 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.094 "name": "Existed_Raid", 00:13:17.094 "uuid": "1a862ae9-42f4-11ef-9f7f-e9a656123a8b", 00:13:17.094 "strip_size_kb": 64, 
00:13:17.094 "state": "offline", 00:13:17.094 "raid_level": "raid0", 00:13:17.094 "superblock": true, 00:13:17.094 "num_base_bdevs": 4, 00:13:17.094 "num_base_bdevs_discovered": 3, 00:13:17.094 "num_base_bdevs_operational": 3, 00:13:17.094 "base_bdevs_list": [ 00:13:17.094 { 00:13:17.094 "name": null, 00:13:17.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.094 "is_configured": false, 00:13:17.094 "data_offset": 2048, 00:13:17.094 "data_size": 63488 00:13:17.094 }, 00:13:17.094 { 00:13:17.094 "name": "BaseBdev2", 00:13:17.094 "uuid": "1afb5cd0-42f4-11ef-9f7f-e9a656123a8b", 00:13:17.094 "is_configured": true, 00:13:17.094 "data_offset": 2048, 00:13:17.094 "data_size": 63488 00:13:17.094 }, 00:13:17.094 { 00:13:17.094 "name": "BaseBdev3", 00:13:17.094 "uuid": "1bc43396-42f4-11ef-9f7f-e9a656123a8b", 00:13:17.094 "is_configured": true, 00:13:17.094 "data_offset": 2048, 00:13:17.094 "data_size": 63488 00:13:17.094 }, 00:13:17.094 { 00:13:17.094 "name": "BaseBdev4", 00:13:17.094 "uuid": "1c8c6b1f-42f4-11ef-9f7f-e9a656123a8b", 00:13:17.094 "is_configured": true, 00:13:17.094 "data_offset": 2048, 00:13:17.094 "data_size": 63488 00:13:17.094 } 00:13:17.094 ] 00:13:17.094 }' 00:13:17.094 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.094 21:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.353 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:17.353 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:17.353 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.353 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:17.612 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:17.612 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.612 21:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:17.870 [2024-07-15 21:49:32.998276] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.870 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:17.870 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:17.870 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.870 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:18.128 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:18.128 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.128 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:18.386 [2024-07-15 21:49:33.508752] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.386 21:49:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:18.386 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:18.386 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.386 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:18.644 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:18.644 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.644 21:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:18.902 [2024-07-15 21:49:33.991230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:18.902 [2024-07-15 21:49:33.991271] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30535ea34a00 name Existed_Raid, state offline 00:13:18.902 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:18.902 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:18.902 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.902 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:19.161 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.420 BaseBdev2 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:19.420 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.679 21:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.938 [ 
00:13:19.938 { 00:13:19.938 "name": "BaseBdev2", 00:13:19.938 "aliases": [ 00:13:19.938 "1fd997bf-42f4-11ef-9f7f-e9a656123a8b" 00:13:19.938 ], 00:13:19.938 "product_name": "Malloc disk", 00:13:19.938 "block_size": 512, 00:13:19.938 "num_blocks": 65536, 00:13:19.938 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:19.938 "assigned_rate_limits": { 00:13:19.938 "rw_ios_per_sec": 0, 00:13:19.938 "rw_mbytes_per_sec": 0, 00:13:19.938 "r_mbytes_per_sec": 0, 00:13:19.938 "w_mbytes_per_sec": 0 00:13:19.938 }, 00:13:19.938 "claimed": false, 00:13:19.938 "zoned": false, 00:13:19.938 "supported_io_types": { 00:13:19.938 "read": true, 00:13:19.938 "write": true, 00:13:19.938 "unmap": true, 00:13:19.938 "flush": true, 00:13:19.938 "reset": true, 00:13:19.938 "nvme_admin": false, 00:13:19.938 "nvme_io": false, 00:13:19.938 "nvme_io_md": false, 00:13:19.938 "write_zeroes": true, 00:13:19.938 "zcopy": true, 00:13:19.938 "get_zone_info": false, 00:13:19.938 "zone_management": false, 00:13:19.938 "zone_append": false, 00:13:19.938 "compare": false, 00:13:19.938 "compare_and_write": false, 00:13:19.938 "abort": true, 00:13:19.938 "seek_hole": false, 00:13:19.938 "seek_data": false, 00:13:19.938 "copy": true, 00:13:19.938 "nvme_iov_md": false 00:13:19.938 }, 00:13:19.938 "memory_domains": [ 00:13:19.938 { 00:13:19.938 "dma_device_id": "system", 00:13:19.938 "dma_device_type": 1 00:13:19.938 }, 00:13:19.938 { 00:13:19.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.938 "dma_device_type": 2 00:13:19.938 } 00:13:19.938 ], 00:13:19.938 "driver_specific": {} 00:13:19.938 } 00:13:19.938 ] 00:13:19.938 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:19.938 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:19.938 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:19.938 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:20.197 BaseBdev3 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:20.197 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.455 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.713 [ 00:13:20.713 { 00:13:20.713 "name": "BaseBdev3", 00:13:20.713 "aliases": [ 00:13:20.713 "204b215e-42f4-11ef-9f7f-e9a656123a8b" 00:13:20.713 ], 00:13:20.713 "product_name": "Malloc disk", 00:13:20.713 "block_size": 512, 00:13:20.713 "num_blocks": 65536, 00:13:20.713 "uuid": 
"204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:20.713 "assigned_rate_limits": { 00:13:20.713 "rw_ios_per_sec": 0, 00:13:20.713 "rw_mbytes_per_sec": 0, 00:13:20.713 "r_mbytes_per_sec": 0, 00:13:20.713 "w_mbytes_per_sec": 0 00:13:20.713 }, 00:13:20.713 "claimed": false, 00:13:20.713 "zoned": false, 00:13:20.713 "supported_io_types": { 00:13:20.713 "read": true, 00:13:20.713 "write": true, 00:13:20.713 "unmap": true, 00:13:20.713 "flush": true, 00:13:20.713 "reset": true, 00:13:20.713 "nvme_admin": false, 00:13:20.713 "nvme_io": false, 00:13:20.713 "nvme_io_md": false, 00:13:20.713 "write_zeroes": true, 00:13:20.713 "zcopy": true, 00:13:20.713 "get_zone_info": false, 00:13:20.713 "zone_management": false, 00:13:20.713 "zone_append": false, 00:13:20.713 "compare": false, 00:13:20.713 "compare_and_write": false, 00:13:20.713 "abort": true, 00:13:20.713 "seek_hole": false, 00:13:20.713 "seek_data": false, 00:13:20.713 "copy": true, 00:13:20.713 "nvme_iov_md": false 00:13:20.713 }, 00:13:20.713 "memory_domains": [ 00:13:20.713 { 00:13:20.714 "dma_device_id": "system", 00:13:20.714 "dma_device_type": 1 00:13:20.714 }, 00:13:20.714 { 00:13:20.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.714 "dma_device_type": 2 00:13:20.714 } 00:13:20.714 ], 00:13:20.714 "driver_specific": {} 00:13:20.714 } 00:13:20.714 ] 00:13:20.714 21:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:20.714 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:20.714 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:20.714 21:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:20.973 BaseBdev4 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:20.973 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.232 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:21.490 [ 00:13:21.490 { 00:13:21.490 "name": "BaseBdev4", 00:13:21.490 "aliases": [ 00:13:21.490 "20b72a0d-42f4-11ef-9f7f-e9a656123a8b" 00:13:21.490 ], 00:13:21.490 "product_name": "Malloc disk", 00:13:21.490 "block_size": 512, 00:13:21.490 "num_blocks": 65536, 00:13:21.490 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:21.490 "assigned_rate_limits": { 00:13:21.490 "rw_ios_per_sec": 0, 00:13:21.490 "rw_mbytes_per_sec": 0, 00:13:21.490 "r_mbytes_per_sec": 0, 00:13:21.490 "w_mbytes_per_sec": 0 00:13:21.490 }, 00:13:21.490 "claimed": false, 00:13:21.490 "zoned": false, 00:13:21.490 
"supported_io_types": { 00:13:21.490 "read": true, 00:13:21.490 "write": true, 00:13:21.490 "unmap": true, 00:13:21.490 "flush": true, 00:13:21.490 "reset": true, 00:13:21.490 "nvme_admin": false, 00:13:21.490 "nvme_io": false, 00:13:21.490 "nvme_io_md": false, 00:13:21.490 "write_zeroes": true, 00:13:21.490 "zcopy": true, 00:13:21.490 "get_zone_info": false, 00:13:21.490 "zone_management": false, 00:13:21.490 "zone_append": false, 00:13:21.490 "compare": false, 00:13:21.490 "compare_and_write": false, 00:13:21.490 "abort": true, 00:13:21.490 "seek_hole": false, 00:13:21.490 "seek_data": false, 00:13:21.490 "copy": true, 00:13:21.490 "nvme_iov_md": false 00:13:21.490 }, 00:13:21.490 "memory_domains": [ 00:13:21.490 { 00:13:21.490 "dma_device_id": "system", 00:13:21.491 "dma_device_type": 1 00:13:21.491 }, 00:13:21.491 { 00:13:21.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.491 "dma_device_type": 2 00:13:21.491 } 00:13:21.491 ], 00:13:21.491 "driver_specific": {} 00:13:21.491 } 00:13:21.491 ] 00:13:21.491 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:21.491 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:21.491 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:21.491 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:21.781 [2024-07-15 21:49:36.718391] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.781 [2024-07-15 21:49:36.718446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.781 [2024-07-15 21:49:36.718472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.781 [2024-07-15 21:49:36.719085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.781 [2024-07-15 21:49:36.719103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.781 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.039 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.039 "name": "Existed_Raid", 00:13:22.039 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.039 "strip_size_kb": 64, 00:13:22.039 "state": "configuring", 00:13:22.039 "raid_level": "raid0", 00:13:22.039 "superblock": true, 00:13:22.039 "num_base_bdevs": 4, 00:13:22.039 "num_base_bdevs_discovered": 3, 00:13:22.039 "num_base_bdevs_operational": 4, 00:13:22.039 "base_bdevs_list": [ 00:13:22.039 { 00:13:22.039 "name": "BaseBdev1", 00:13:22.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.039 "is_configured": false, 00:13:22.039 "data_offset": 0, 00:13:22.039 "data_size": 0 00:13:22.039 }, 00:13:22.039 { 00:13:22.039 "name": "BaseBdev2", 00:13:22.039 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.039 "is_configured": true, 00:13:22.039 "data_offset": 2048, 00:13:22.039 "data_size": 63488 00:13:22.039 }, 00:13:22.039 { 00:13:22.039 "name": "BaseBdev3", 00:13:22.039 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.039 "is_configured": true, 00:13:22.040 "data_offset": 2048, 00:13:22.040 "data_size": 63488 00:13:22.040 }, 00:13:22.040 { 00:13:22.040 "name": "BaseBdev4", 00:13:22.040 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.040 "is_configured": true, 00:13:22.040 "data_offset": 2048, 00:13:22.040 "data_size": 63488 00:13:22.040 } 00:13:22.040 ] 00:13:22.040 }' 00:13:22.040 21:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.040 21:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:22.297 [2024-07-15 21:49:37.462519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.297 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.297 21:49:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.554 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.554 "name": "Existed_Raid", 00:13:22.554 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.554 "strip_size_kb": 64, 00:13:22.554 "state": "configuring", 00:13:22.554 "raid_level": "raid0", 00:13:22.554 "superblock": true, 00:13:22.554 "num_base_bdevs": 4, 00:13:22.554 "num_base_bdevs_discovered": 2, 00:13:22.554 "num_base_bdevs_operational": 4, 00:13:22.554 "base_bdevs_list": [ 00:13:22.554 { 00:13:22.554 "name": "BaseBdev1", 00:13:22.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.554 "is_configured": false, 00:13:22.554 "data_offset": 0, 00:13:22.554 "data_size": 0 00:13:22.554 }, 00:13:22.554 { 00:13:22.554 "name": null, 00:13:22.554 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.554 "is_configured": false, 00:13:22.554 "data_offset": 2048, 00:13:22.554 "data_size": 63488 00:13:22.554 }, 00:13:22.554 { 00:13:22.554 "name": "BaseBdev3", 00:13:22.554 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.554 "is_configured": true, 00:13:22.554 "data_offset": 2048, 00:13:22.554 "data_size": 63488 00:13:22.554 }, 00:13:22.554 { 00:13:22.554 "name": "BaseBdev4", 00:13:22.554 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:22.554 "is_configured": true, 00:13:22.554 "data_offset": 2048, 00:13:22.554 "data_size": 63488 00:13:22.554 } 00:13:22.554 ] 00:13:22.554 }' 00:13:22.554 21:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.555 21:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.120 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.120 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:23.120 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:23.120 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.377 [2024-07-15 21:49:38.506692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.377 BaseBdev1 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:23.377 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.633 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.890 [ 00:13:23.890 { 00:13:23.890 "name": "BaseBdev1", 00:13:23.890 "aliases": [ 00:13:23.890 "22341140-42f4-11ef-9f7f-e9a656123a8b" 00:13:23.890 ], 00:13:23.890 "product_name": "Malloc disk", 00:13:23.890 "block_size": 512, 00:13:23.890 "num_blocks": 65536, 00:13:23.890 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:23.890 "assigned_rate_limits": { 00:13:23.890 "rw_ios_per_sec": 0, 00:13:23.890 "rw_mbytes_per_sec": 0, 00:13:23.890 "r_mbytes_per_sec": 0, 00:13:23.890 "w_mbytes_per_sec": 0 00:13:23.890 }, 00:13:23.890 "claimed": true, 00:13:23.890 "claim_type": "exclusive_write", 00:13:23.890 "zoned": false, 00:13:23.890 "supported_io_types": { 00:13:23.890 "read": true, 00:13:23.890 "write": true, 00:13:23.890 "unmap": true, 00:13:23.890 "flush": true, 00:13:23.890 "reset": true, 00:13:23.890 "nvme_admin": false, 00:13:23.890 "nvme_io": false, 00:13:23.890 "nvme_io_md": false, 00:13:23.890 "write_zeroes": true, 00:13:23.890 "zcopy": true, 00:13:23.890 "get_zone_info": false, 00:13:23.890 "zone_management": false, 00:13:23.890 "zone_append": false, 00:13:23.890 "compare": false, 00:13:23.890 "compare_and_write": false, 00:13:23.890 "abort": true, 00:13:23.890 "seek_hole": false, 00:13:23.890 "seek_data": false, 00:13:23.890 "copy": true, 00:13:23.890 "nvme_iov_md": false 00:13:23.890 }, 00:13:23.890 "memory_domains": [ 00:13:23.890 { 00:13:23.890 "dma_device_id": "system", 00:13:23.890 "dma_device_type": 1 00:13:23.890 }, 00:13:23.890 { 00:13:23.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.890 "dma_device_type": 2 00:13:23.890 } 00:13:23.891 ], 00:13:23.891 "driver_specific": {} 00:13:23.891 } 00:13:23.891 ] 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.891 21:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.148 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.148 "name": "Existed_Raid", 00:13:24.148 "uuid": 
"212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:24.148 "strip_size_kb": 64, 00:13:24.148 "state": "configuring", 00:13:24.148 "raid_level": "raid0", 00:13:24.148 "superblock": true, 00:13:24.148 "num_base_bdevs": 4, 00:13:24.148 "num_base_bdevs_discovered": 3, 00:13:24.148 "num_base_bdevs_operational": 4, 00:13:24.148 "base_bdevs_list": [ 00:13:24.148 { 00:13:24.148 "name": "BaseBdev1", 00:13:24.148 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:24.148 "is_configured": true, 00:13:24.148 "data_offset": 2048, 00:13:24.148 "data_size": 63488 00:13:24.148 }, 00:13:24.148 { 00:13:24.148 "name": null, 00:13:24.148 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:24.148 "is_configured": false, 00:13:24.148 "data_offset": 2048, 00:13:24.148 "data_size": 63488 00:13:24.148 }, 00:13:24.148 { 00:13:24.148 "name": "BaseBdev3", 00:13:24.148 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:24.148 "is_configured": true, 00:13:24.148 "data_offset": 2048, 00:13:24.148 "data_size": 63488 00:13:24.148 }, 00:13:24.148 { 00:13:24.148 "name": "BaseBdev4", 00:13:24.148 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:24.148 "is_configured": true, 00:13:24.148 "data_offset": 2048, 00:13:24.148 "data_size": 63488 00:13:24.148 } 00:13:24.148 ] 00:13:24.148 }' 00:13:24.148 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.148 21:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.406 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.406 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.682 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:24.682 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:24.940 [2024-07-15 21:49:39.938662] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:24.940 21:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.199 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.199 "name": "Existed_Raid", 00:13:25.199 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.199 "strip_size_kb": 64, 00:13:25.199 "state": "configuring", 00:13:25.199 "raid_level": "raid0", 00:13:25.199 "superblock": true, 00:13:25.199 "num_base_bdevs": 4, 00:13:25.199 "num_base_bdevs_discovered": 2, 00:13:25.199 "num_base_bdevs_operational": 4, 00:13:25.199 "base_bdevs_list": [ 00:13:25.199 { 00:13:25.199 "name": "BaseBdev1", 00:13:25.200 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.200 "is_configured": true, 00:13:25.200 "data_offset": 2048, 00:13:25.200 "data_size": 63488 00:13:25.200 }, 00:13:25.200 { 00:13:25.200 "name": null, 00:13:25.200 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.200 "is_configured": false, 00:13:25.200 "data_offset": 2048, 00:13:25.200 "data_size": 63488 00:13:25.200 }, 00:13:25.200 { 00:13:25.200 "name": null, 00:13:25.200 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.200 "is_configured": false, 00:13:25.200 "data_offset": 2048, 00:13:25.200 "data_size": 63488 00:13:25.200 }, 00:13:25.200 { 00:13:25.200 "name": "BaseBdev4", 00:13:25.200 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.200 "is_configured": true, 00:13:25.200 "data_offset": 2048, 00:13:25.200 "data_size": 63488 00:13:25.200 } 00:13:25.200 ] 00:13:25.200 }' 00:13:25.200 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.200 21:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.458 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.458 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:25.717 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:25.717 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:25.976 [2024-07-15 21:49:40.910727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.976 21:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.976 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.976 "name": "Existed_Raid", 00:13:25.976 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.976 "strip_size_kb": 64, 00:13:25.976 "state": "configuring", 00:13:25.976 "raid_level": "raid0", 00:13:25.976 "superblock": true, 00:13:25.976 "num_base_bdevs": 4, 00:13:25.976 "num_base_bdevs_discovered": 3, 00:13:25.976 "num_base_bdevs_operational": 4, 00:13:25.976 "base_bdevs_list": [ 00:13:25.976 { 00:13:25.976 "name": "BaseBdev1", 00:13:25.976 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.976 "is_configured": true, 00:13:25.976 "data_offset": 2048, 00:13:25.976 "data_size": 63488 00:13:25.976 }, 00:13:25.976 { 00:13:25.976 "name": null, 00:13:25.976 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.976 "is_configured": false, 00:13:25.976 "data_offset": 2048, 00:13:25.976 "data_size": 63488 00:13:25.976 }, 00:13:25.976 { 00:13:25.976 "name": "BaseBdev3", 00:13:25.976 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.976 "is_configured": true, 00:13:25.976 "data_offset": 2048, 00:13:25.976 "data_size": 63488 00:13:25.976 }, 00:13:25.976 { 00:13:25.976 "name": "BaseBdev4", 00:13:25.976 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:25.976 "is_configured": true, 00:13:25.976 "data_offset": 2048, 00:13:25.976 "data_size": 63488 00:13:25.976 } 00:13:25.976 ] 00:13:25.976 }' 00:13:25.976 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.976 21:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.543 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:26.543 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.543 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:26.543 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:26.802 [2024-07-15 21:49:41.950786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
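[Editor's note: the sequence just traced (bdev_raid.sh lines 317-323) is a remove/re-add round trip on one base bdev. A minimal sketch of that round trip, using the bdev names and jq filters from this run:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Detach a healthy base bdev; its slot shows is_configured == false and
    # the raid stays in "configuring" rather than failing.
    $rpc_py bdev_raid_remove_base_bdev BaseBdev3
    [[ $($rpc_py bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[2].is_configured') == false ]]

    # Re-attach it to the named raid; the slot is claimed and configured again.
    $rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    [[ $($rpc_py bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[2].is_configured') == true ]]
]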
00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.802 21:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.060 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.060 "name": "Existed_Raid", 00:13:27.060 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:27.060 "strip_size_kb": 64, 00:13:27.060 "state": "configuring", 00:13:27.060 "raid_level": "raid0", 00:13:27.060 "superblock": true, 00:13:27.060 "num_base_bdevs": 4, 00:13:27.060 "num_base_bdevs_discovered": 2, 00:13:27.060 "num_base_bdevs_operational": 4, 00:13:27.060 "base_bdevs_list": [ 00:13:27.060 { 00:13:27.060 "name": null, 00:13:27.060 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:27.060 "is_configured": false, 00:13:27.060 "data_offset": 2048, 00:13:27.060 "data_size": 63488 00:13:27.060 }, 00:13:27.060 { 00:13:27.060 "name": null, 00:13:27.060 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:27.060 "is_configured": false, 00:13:27.060 "data_offset": 2048, 00:13:27.060 "data_size": 63488 00:13:27.060 }, 00:13:27.060 { 00:13:27.060 "name": "BaseBdev3", 00:13:27.060 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:27.060 "is_configured": true, 00:13:27.060 "data_offset": 2048, 00:13:27.060 "data_size": 63488 00:13:27.060 }, 00:13:27.060 { 00:13:27.060 "name": "BaseBdev4", 00:13:27.060 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:27.060 "is_configured": true, 00:13:27.060 "data_offset": 2048, 00:13:27.060 "data_size": 63488 00:13:27.060 } 00:13:27.060 ] 00:13:27.060 }' 00:13:27.060 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.060 21:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.319 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.319 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.603 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:27.603 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:27.885 [2024-07-15 21:49:42.881486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.885 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.886 21:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.144 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.144 "name": "Existed_Raid", 00:13:28.144 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:28.144 "strip_size_kb": 64, 00:13:28.144 "state": "configuring", 00:13:28.144 "raid_level": "raid0", 00:13:28.144 "superblock": true, 00:13:28.144 "num_base_bdevs": 4, 00:13:28.144 "num_base_bdevs_discovered": 3, 00:13:28.144 "num_base_bdevs_operational": 4, 00:13:28.144 "base_bdevs_list": [ 00:13:28.144 { 00:13:28.144 "name": null, 00:13:28.144 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:28.144 "is_configured": false, 00:13:28.144 "data_offset": 2048, 00:13:28.144 "data_size": 63488 00:13:28.144 }, 00:13:28.144 { 00:13:28.144 "name": "BaseBdev2", 00:13:28.144 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:28.144 "is_configured": true, 00:13:28.144 "data_offset": 2048, 00:13:28.144 "data_size": 63488 00:13:28.144 }, 00:13:28.144 { 00:13:28.145 "name": "BaseBdev3", 00:13:28.145 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:28.145 "is_configured": true, 00:13:28.145 "data_offset": 2048, 00:13:28.145 "data_size": 63488 00:13:28.145 }, 00:13:28.145 { 00:13:28.145 "name": "BaseBdev4", 00:13:28.145 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:28.145 "is_configured": true, 00:13:28.145 "data_offset": 2048, 00:13:28.145 "data_size": 63488 00:13:28.145 } 00:13:28.145 ] 00:13:28.145 }' 00:13:28.145 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.145 21:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.402 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.402 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:28.660 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:28.660 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:28.660 21:49:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.918 21:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 22341140-42f4-11ef-9f7f-e9a656123a8b 00:13:29.177 [2024-07-15 21:49:44.185789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.177 [2024-07-15 21:49:44.185854] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30535ea34f00 00:13:29.177 [2024-07-15 21:49:44.185859] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:29.177 [2024-07-15 21:49:44.185880] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30535ea97e20 00:13:29.177 [2024-07-15 21:49:44.185932] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30535ea34f00 00:13:29.177 [2024-07-15 21:49:44.185951] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x30535ea34f00 00:13:29.177 [2024-07-15 21:49:44.185986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.177 NewBaseBdev 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:29.177 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:29.436 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.695 [ 00:13:29.695 { 00:13:29.695 "name": "NewBaseBdev", 00:13:29.695 "aliases": [ 00:13:29.695 "22341140-42f4-11ef-9f7f-e9a656123a8b" 00:13:29.695 ], 00:13:29.695 "product_name": "Malloc disk", 00:13:29.695 "block_size": 512, 00:13:29.695 "num_blocks": 65536, 00:13:29.695 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.695 "assigned_rate_limits": { 00:13:29.695 "rw_ios_per_sec": 0, 00:13:29.695 "rw_mbytes_per_sec": 0, 00:13:29.695 "r_mbytes_per_sec": 0, 00:13:29.695 "w_mbytes_per_sec": 0 00:13:29.695 }, 00:13:29.695 "claimed": true, 00:13:29.695 "claim_type": "exclusive_write", 00:13:29.695 "zoned": false, 00:13:29.695 "supported_io_types": { 00:13:29.695 "read": true, 00:13:29.695 "write": true, 00:13:29.695 "unmap": true, 00:13:29.695 "flush": true, 00:13:29.695 "reset": true, 00:13:29.695 "nvme_admin": false, 00:13:29.695 "nvme_io": false, 00:13:29.695 "nvme_io_md": false, 00:13:29.695 "write_zeroes": true, 00:13:29.695 "zcopy": true, 00:13:29.695 "get_zone_info": false, 00:13:29.695 "zone_management": false, 00:13:29.695 "zone_append": false, 00:13:29.695 "compare": false, 00:13:29.695 "compare_and_write": false, 00:13:29.695 
"abort": true, 00:13:29.695 "seek_hole": false, 00:13:29.695 "seek_data": false, 00:13:29.695 "copy": true, 00:13:29.695 "nvme_iov_md": false 00:13:29.695 }, 00:13:29.695 "memory_domains": [ 00:13:29.695 { 00:13:29.695 "dma_device_id": "system", 00:13:29.695 "dma_device_type": 1 00:13:29.695 }, 00:13:29.695 { 00:13:29.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.695 "dma_device_type": 2 00:13:29.695 } 00:13:29.695 ], 00:13:29.695 "driver_specific": {} 00:13:29.695 } 00:13:29.695 ] 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.695 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.954 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.954 "name": "Existed_Raid", 00:13:29.954 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.954 "strip_size_kb": 64, 00:13:29.954 "state": "online", 00:13:29.954 "raid_level": "raid0", 00:13:29.954 "superblock": true, 00:13:29.954 "num_base_bdevs": 4, 00:13:29.954 "num_base_bdevs_discovered": 4, 00:13:29.954 "num_base_bdevs_operational": 4, 00:13:29.954 "base_bdevs_list": [ 00:13:29.954 { 00:13:29.954 "name": "NewBaseBdev", 00:13:29.954 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.954 "is_configured": true, 00:13:29.954 "data_offset": 2048, 00:13:29.954 "data_size": 63488 00:13:29.954 }, 00:13:29.954 { 00:13:29.954 "name": "BaseBdev2", 00:13:29.954 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.954 "is_configured": true, 00:13:29.954 "data_offset": 2048, 00:13:29.954 "data_size": 63488 00:13:29.954 }, 00:13:29.954 { 00:13:29.954 "name": "BaseBdev3", 00:13:29.954 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.954 "is_configured": true, 00:13:29.954 "data_offset": 2048, 00:13:29.954 "data_size": 63488 00:13:29.954 }, 00:13:29.954 { 00:13:29.954 "name": "BaseBdev4", 00:13:29.954 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:29.954 "is_configured": true, 00:13:29.954 "data_offset": 2048, 00:13:29.954 "data_size": 63488 00:13:29.954 } 00:13:29.954 ] 00:13:29.954 }' 
00:13:29.954 21:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.954 21:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:30.214 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:30.473 [2024-07-15 21:49:45.449771] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.473 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:30.473 "name": "Existed_Raid", 00:13:30.473 "aliases": [ 00:13:30.473 "212335f8-42f4-11ef-9f7f-e9a656123a8b" 00:13:30.473 ], 00:13:30.473 "product_name": "Raid Volume", 00:13:30.473 "block_size": 512, 00:13:30.473 "num_blocks": 253952, 00:13:30.473 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "assigned_rate_limits": { 00:13:30.473 "rw_ios_per_sec": 0, 00:13:30.473 "rw_mbytes_per_sec": 0, 00:13:30.473 "r_mbytes_per_sec": 0, 00:13:30.473 "w_mbytes_per_sec": 0 00:13:30.473 }, 00:13:30.473 "claimed": false, 00:13:30.473 "zoned": false, 00:13:30.473 "supported_io_types": { 00:13:30.473 "read": true, 00:13:30.473 "write": true, 00:13:30.473 "unmap": true, 00:13:30.473 "flush": true, 00:13:30.473 "reset": true, 00:13:30.473 "nvme_admin": false, 00:13:30.473 "nvme_io": false, 00:13:30.473 "nvme_io_md": false, 00:13:30.473 "write_zeroes": true, 00:13:30.473 "zcopy": false, 00:13:30.473 "get_zone_info": false, 00:13:30.473 "zone_management": false, 00:13:30.473 "zone_append": false, 00:13:30.473 "compare": false, 00:13:30.473 "compare_and_write": false, 00:13:30.473 "abort": false, 00:13:30.473 "seek_hole": false, 00:13:30.473 "seek_data": false, 00:13:30.473 "copy": false, 00:13:30.473 "nvme_iov_md": false 00:13:30.473 }, 00:13:30.473 "memory_domains": [ 00:13:30.473 { 00:13:30.473 "dma_device_id": "system", 00:13:30.473 "dma_device_type": 1 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.473 "dma_device_type": 2 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "system", 00:13:30.473 "dma_device_type": 1 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.473 "dma_device_type": 2 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "system", 00:13:30.473 "dma_device_type": 1 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.473 "dma_device_type": 2 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": "system", 00:13:30.473 "dma_device_type": 1 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:30.473 "dma_device_type": 2 00:13:30.473 } 00:13:30.473 ], 00:13:30.473 "driver_specific": { 00:13:30.473 "raid": { 00:13:30.473 "uuid": "212335f8-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "strip_size_kb": 64, 00:13:30.473 "state": "online", 00:13:30.473 "raid_level": "raid0", 00:13:30.473 "superblock": true, 00:13:30.473 "num_base_bdevs": 4, 00:13:30.473 "num_base_bdevs_discovered": 4, 00:13:30.473 "num_base_bdevs_operational": 4, 00:13:30.473 "base_bdevs_list": [ 00:13:30.473 { 00:13:30.473 "name": "NewBaseBdev", 00:13:30.473 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "is_configured": true, 00:13:30.473 "data_offset": 2048, 00:13:30.473 "data_size": 63488 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "name": "BaseBdev2", 00:13:30.473 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "is_configured": true, 00:13:30.473 "data_offset": 2048, 00:13:30.473 "data_size": 63488 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "name": "BaseBdev3", 00:13:30.473 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "is_configured": true, 00:13:30.473 "data_offset": 2048, 00:13:30.473 "data_size": 63488 00:13:30.473 }, 00:13:30.473 { 00:13:30.473 "name": "BaseBdev4", 00:13:30.473 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.473 "is_configured": true, 00:13:30.473 "data_offset": 2048, 00:13:30.473 "data_size": 63488 00:13:30.473 } 00:13:30.473 ] 00:13:30.473 } 00:13:30.473 } 00:13:30.473 }' 00:13:30.473 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.474 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:30.474 BaseBdev2 00:13:30.474 BaseBdev3 00:13:30.474 BaseBdev4' 00:13:30.474 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.474 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:30.474 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:30.733 "name": "NewBaseBdev", 00:13:30.733 "aliases": [ 00:13:30.733 "22341140-42f4-11ef-9f7f-e9a656123a8b" 00:13:30.733 ], 00:13:30.733 "product_name": "Malloc disk", 00:13:30.733 "block_size": 512, 00:13:30.733 "num_blocks": 65536, 00:13:30.733 "uuid": "22341140-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.733 "assigned_rate_limits": { 00:13:30.733 "rw_ios_per_sec": 0, 00:13:30.733 "rw_mbytes_per_sec": 0, 00:13:30.733 "r_mbytes_per_sec": 0, 00:13:30.733 "w_mbytes_per_sec": 0 00:13:30.733 }, 00:13:30.733 "claimed": true, 00:13:30.733 "claim_type": "exclusive_write", 00:13:30.733 "zoned": false, 00:13:30.733 "supported_io_types": { 00:13:30.733 "read": true, 00:13:30.733 "write": true, 00:13:30.733 "unmap": true, 00:13:30.733 "flush": true, 00:13:30.733 "reset": true, 00:13:30.733 "nvme_admin": false, 00:13:30.733 "nvme_io": false, 00:13:30.733 "nvme_io_md": false, 00:13:30.733 "write_zeroes": true, 00:13:30.733 "zcopy": true, 00:13:30.733 "get_zone_info": false, 00:13:30.733 "zone_management": false, 00:13:30.733 "zone_append": false, 00:13:30.733 "compare": false, 00:13:30.733 "compare_and_write": false, 00:13:30.733 "abort": true, 00:13:30.733 "seek_hole": false, 
00:13:30.733 "seek_data": false, 00:13:30.733 "copy": true, 00:13:30.733 "nvme_iov_md": false 00:13:30.733 }, 00:13:30.733 "memory_domains": [ 00:13:30.733 { 00:13:30.733 "dma_device_id": "system", 00:13:30.733 "dma_device_type": 1 00:13:30.733 }, 00:13:30.733 { 00:13:30.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.733 "dma_device_type": 2 00:13:30.733 } 00:13:30.733 ], 00:13:30.733 "driver_specific": {} 00:13:30.733 }' 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:30.733 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:30.992 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:30.992 "name": "BaseBdev2", 00:13:30.992 "aliases": [ 00:13:30.992 "1fd997bf-42f4-11ef-9f7f-e9a656123a8b" 00:13:30.992 ], 00:13:30.992 "product_name": "Malloc disk", 00:13:30.992 "block_size": 512, 00:13:30.992 "num_blocks": 65536, 00:13:30.992 "uuid": "1fd997bf-42f4-11ef-9f7f-e9a656123a8b", 00:13:30.992 "assigned_rate_limits": { 00:13:30.992 "rw_ios_per_sec": 0, 00:13:30.992 "rw_mbytes_per_sec": 0, 00:13:30.992 "r_mbytes_per_sec": 0, 00:13:30.992 "w_mbytes_per_sec": 0 00:13:30.992 }, 00:13:30.992 "claimed": true, 00:13:30.992 "claim_type": "exclusive_write", 00:13:30.992 "zoned": false, 00:13:30.992 "supported_io_types": { 00:13:30.993 "read": true, 00:13:30.993 "write": true, 00:13:30.993 "unmap": true, 00:13:30.993 "flush": true, 00:13:30.993 "reset": true, 00:13:30.993 "nvme_admin": false, 00:13:30.993 "nvme_io": false, 00:13:30.993 "nvme_io_md": false, 00:13:30.993 "write_zeroes": true, 00:13:30.993 "zcopy": true, 00:13:30.993 "get_zone_info": false, 00:13:30.993 "zone_management": false, 00:13:30.993 "zone_append": false, 00:13:30.993 "compare": false, 00:13:30.993 "compare_and_write": false, 00:13:30.993 "abort": true, 00:13:30.993 "seek_hole": false, 00:13:30.993 "seek_data": false, 00:13:30.993 "copy": true, 00:13:30.993 "nvme_iov_md": false 00:13:30.993 }, 00:13:30.993 "memory_domains": [ 
00:13:30.993 { 00:13:30.993 "dma_device_id": "system", 00:13:30.993 "dma_device_type": 1 00:13:30.993 }, 00:13:30.993 { 00:13:30.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.993 "dma_device_type": 2 00:13:30.993 } 00:13:30.993 ], 00:13:30.993 "driver_specific": {} 00:13:30.993 }' 00:13:30.993 21:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:30.993 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.252 "name": "BaseBdev3", 00:13:31.252 "aliases": [ 00:13:31.252 "204b215e-42f4-11ef-9f7f-e9a656123a8b" 00:13:31.252 ], 00:13:31.252 "product_name": "Malloc disk", 00:13:31.252 "block_size": 512, 00:13:31.252 "num_blocks": 65536, 00:13:31.252 "uuid": "204b215e-42f4-11ef-9f7f-e9a656123a8b", 00:13:31.252 "assigned_rate_limits": { 00:13:31.252 "rw_ios_per_sec": 0, 00:13:31.252 "rw_mbytes_per_sec": 0, 00:13:31.252 "r_mbytes_per_sec": 0, 00:13:31.252 "w_mbytes_per_sec": 0 00:13:31.252 }, 00:13:31.252 "claimed": true, 00:13:31.252 "claim_type": "exclusive_write", 00:13:31.252 "zoned": false, 00:13:31.252 "supported_io_types": { 00:13:31.252 "read": true, 00:13:31.252 "write": true, 00:13:31.252 "unmap": true, 00:13:31.252 "flush": true, 00:13:31.252 "reset": true, 00:13:31.252 "nvme_admin": false, 00:13:31.252 "nvme_io": false, 00:13:31.252 "nvme_io_md": false, 00:13:31.252 "write_zeroes": true, 00:13:31.252 "zcopy": true, 00:13:31.252 "get_zone_info": false, 00:13:31.252 "zone_management": false, 00:13:31.252 "zone_append": false, 00:13:31.252 "compare": false, 00:13:31.252 "compare_and_write": false, 00:13:31.252 "abort": true, 00:13:31.252 "seek_hole": false, 00:13:31.252 "seek_data": false, 00:13:31.252 "copy": true, 00:13:31.252 "nvme_iov_md": false 00:13:31.252 }, 00:13:31.252 "memory_domains": [ 00:13:31.252 { 00:13:31.252 "dma_device_id": "system", 00:13:31.252 "dma_device_type": 1 00:13:31.252 }, 00:13:31.252 { 00:13:31.252 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:31.252 "dma_device_type": 2 00:13:31.252 } 00:13:31.252 ], 00:13:31.252 "driver_specific": {} 00:13:31.252 }' 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:31.252 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.512 "name": "BaseBdev4", 00:13:31.512 "aliases": [ 00:13:31.512 "20b72a0d-42f4-11ef-9f7f-e9a656123a8b" 00:13:31.512 ], 00:13:31.512 "product_name": "Malloc disk", 00:13:31.512 "block_size": 512, 00:13:31.512 "num_blocks": 65536, 00:13:31.512 "uuid": "20b72a0d-42f4-11ef-9f7f-e9a656123a8b", 00:13:31.512 "assigned_rate_limits": { 00:13:31.512 "rw_ios_per_sec": 0, 00:13:31.512 "rw_mbytes_per_sec": 0, 00:13:31.512 "r_mbytes_per_sec": 0, 00:13:31.512 "w_mbytes_per_sec": 0 00:13:31.512 }, 00:13:31.512 "claimed": true, 00:13:31.512 "claim_type": "exclusive_write", 00:13:31.512 "zoned": false, 00:13:31.512 "supported_io_types": { 00:13:31.512 "read": true, 00:13:31.512 "write": true, 00:13:31.512 "unmap": true, 00:13:31.512 "flush": true, 00:13:31.512 "reset": true, 00:13:31.512 "nvme_admin": false, 00:13:31.512 "nvme_io": false, 00:13:31.512 "nvme_io_md": false, 00:13:31.512 "write_zeroes": true, 00:13:31.512 "zcopy": true, 00:13:31.512 "get_zone_info": false, 00:13:31.512 "zone_management": false, 00:13:31.512 "zone_append": false, 00:13:31.512 "compare": false, 00:13:31.512 "compare_and_write": false, 00:13:31.512 "abort": true, 00:13:31.512 "seek_hole": false, 00:13:31.512 "seek_data": false, 00:13:31.512 "copy": true, 00:13:31.512 "nvme_iov_md": false 00:13:31.512 }, 00:13:31.512 "memory_domains": [ 00:13:31.512 { 00:13:31.512 "dma_device_id": "system", 00:13:31.512 "dma_device_type": 1 00:13:31.512 }, 00:13:31.512 { 00:13:31.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.512 "dma_device_type": 2 00:13:31.512 } 00:13:31.512 ], 00:13:31.512 "driver_specific": {} 00:13:31.512 }' 
00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.512 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.770 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:31.770 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:31.771 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:32.030 [2024-07-15 21:49:46.981773] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.030 [2024-07-15 21:49:46.981796] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.030 [2024-07-15 21:49:46.981833] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.030 [2024-07-15 21:49:46.981847] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.030 [2024-07-15 21:49:46.981851] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30535ea34f00 name Existed_Raid, state offline 00:13:32.030 21:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59170 00:13:32.030 21:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 59170 ']' 00:13:32.030 21:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 59170 00:13:32.030 21:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 59170 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:13:32.030 killing process with pid 59170 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 59170' 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 59170 00:13:32.030 [2024-07-15 21:49:47.006066] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 59170 00:13:32.030 [2024-07-15 21:49:47.030706] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:32.030 00:13:32.030 real 0m25.568s 00:13:32.030 user 0m46.412s 00:13:32.030 sys 0m3.864s 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:32.030 21:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.030 ************************************ 00:13:32.030 END TEST raid_state_function_test_sb 00:13:32.030 ************************************ 00:13:32.289 21:49:47 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:13:32.289 21:49:47 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:32.289 21:49:47 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:13:32.289 21:49:47 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:32.289 21:49:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 ************************************ 00:13:32.289 START TEST raid_superblock_test 00:13:32.289 ************************************ 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid0 4 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=59984 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 59984 /var/tmp/spdk-raid.sock 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 59984 ']' 00:13:32.289 21:49:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:32.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:32.289 21:49:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 [2024-07-15 21:49:47.269556] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:32.289 [2024-07-15 21:49:47.269777] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:32.856 EAL: TSC is not safe to use in SMP mode 00:13:32.856 EAL: TSC is not invariant 00:13:32.856 [2024-07-15 21:49:47.815875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.856 [2024-07-15 21:49:47.891622] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:32.856 [2024-07-15 21:49:47.894168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.856 [2024-07-15 21:49:47.895060] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.856 [2024-07-15 21:49:47.895075] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:33.424 malloc1 00:13:33.424 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:33.682 [2024-07-15 21:49:48.775978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
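The malloc1/pt1 registration beginning above (and continuing below) is the first of four identical setup rounds in raid_superblock_test: each base bdev is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev with a fixed UUID, and the four passthru bdevs are then assembled into a superblock-enabled raid0. A condensed sketch of that setup (the loop is an assumption; each RPC invocation, including the bdev_raid_create call that follows in the trace, is copied from it):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the @424 trace lines
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # wrap it in a passthru bdev with a deterministic UUID, as in @425
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
# -z 64: strip size in KiB, -s: store a superblock on the base bdevs (@429)
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s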
00:13:33.682 [2024-07-15 21:49:48.776042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.682 [2024-07-15 21:49:48.776069] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc034780 00:13:33.682 [2024-07-15 21:49:48.776077] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.682 [2024-07-15 21:49:48.777075] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.682 [2024-07-15 21:49:48.777114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:33.682 pt1 00:13:33.682 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:33.682 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.683 21:49:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:34.000 malloc2 00:13:34.000 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.284 [2024-07-15 21:49:49.216007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.284 [2024-07-15 21:49:49.216068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.284 [2024-07-15 21:49:49.216096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc034c80 00:13:34.284 [2024-07-15 21:49:49.216103] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.284 [2024-07-15 21:49:49.216845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.284 [2024-07-15 21:49:49.216868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.284 pt2 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.284 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:34.542 malloc3 00:13:34.542 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.542 [2024-07-15 21:49:49.728026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.542 [2024-07-15 21:49:49.728092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.542 [2024-07-15 21:49:49.728127] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc035180 00:13:34.542 [2024-07-15 21:49:49.728134] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.542 [2024-07-15 21:49:49.728903] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.542 [2024-07-15 21:49:49.728927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.801 pt3 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:34.801 malloc4 00:13:34.801 21:49:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:35.060 [2024-07-15 21:49:50.196026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:35.060 [2024-07-15 21:49:50.196082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.060 [2024-07-15 21:49:50.196110] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc035680 00:13:35.060 [2024-07-15 21:49:50.196117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.060 [2024-07-15 21:49:50.196742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.060 [2024-07-15 21:49:50.196766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:35.060 pt4 00:13:35.060 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:35.060 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:35.060 21:49:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:35.320 [2024-07-15 21:49:50.412045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.320 [2024-07-15 21:49:50.412647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.320 [2024-07-15 21:49:50.412669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.320 [2024-07-15 21:49:50.412680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:35.320 [2024-07-15 21:49:50.412731] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1baabc035900 00:13:35.320 [2024-07-15 21:49:50.412737] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:35.320 [2024-07-15 21:49:50.412769] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1baabc097e20 00:13:35.320 [2024-07-15 21:49:50.412850] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1baabc035900 00:13:35.320 [2024-07-15 21:49:50.412855] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1baabc035900 00:13:35.320 [2024-07-15 21:49:50.412882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.320 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.579 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.579 "name": "raid_bdev1", 00:13:35.579 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:35.579 "strip_size_kb": 64, 00:13:35.579 "state": "online", 00:13:35.579 "raid_level": "raid0", 00:13:35.579 "superblock": true, 00:13:35.579 "num_base_bdevs": 4, 00:13:35.579 "num_base_bdevs_discovered": 4, 00:13:35.579 "num_base_bdevs_operational": 4, 00:13:35.579 "base_bdevs_list": [ 00:13:35.579 { 00:13:35.579 "name": "pt1", 00:13:35.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.579 "is_configured": true, 00:13:35.579 "data_offset": 2048, 00:13:35.579 "data_size": 
63488 00:13:35.579 }, 00:13:35.579 { 00:13:35.579 "name": "pt2", 00:13:35.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.579 "is_configured": true, 00:13:35.579 "data_offset": 2048, 00:13:35.579 "data_size": 63488 00:13:35.579 }, 00:13:35.579 { 00:13:35.579 "name": "pt3", 00:13:35.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.579 "is_configured": true, 00:13:35.579 "data_offset": 2048, 00:13:35.579 "data_size": 63488 00:13:35.579 }, 00:13:35.579 { 00:13:35.579 "name": "pt4", 00:13:35.579 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.579 "is_configured": true, 00:13:35.579 "data_offset": 2048, 00:13:35.579 "data_size": 63488 00:13:35.579 } 00:13:35.579 ] 00:13:35.579 }' 00:13:35.579 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.579 21:49:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:35.837 21:49:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:36.096 [2024-07-15 21:49:51.228088] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:36.096 "name": "raid_bdev1", 00:13:36.096 "aliases": [ 00:13:36.096 "294cb225-42f4-11ef-9f7f-e9a656123a8b" 00:13:36.096 ], 00:13:36.096 "product_name": "Raid Volume", 00:13:36.096 "block_size": 512, 00:13:36.096 "num_blocks": 253952, 00:13:36.096 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:36.096 "assigned_rate_limits": { 00:13:36.096 "rw_ios_per_sec": 0, 00:13:36.096 "rw_mbytes_per_sec": 0, 00:13:36.096 "r_mbytes_per_sec": 0, 00:13:36.096 "w_mbytes_per_sec": 0 00:13:36.096 }, 00:13:36.096 "claimed": false, 00:13:36.096 "zoned": false, 00:13:36.096 "supported_io_types": { 00:13:36.096 "read": true, 00:13:36.096 "write": true, 00:13:36.096 "unmap": true, 00:13:36.096 "flush": true, 00:13:36.096 "reset": true, 00:13:36.096 "nvme_admin": false, 00:13:36.096 "nvme_io": false, 00:13:36.096 "nvme_io_md": false, 00:13:36.096 "write_zeroes": true, 00:13:36.096 "zcopy": false, 00:13:36.096 "get_zone_info": false, 00:13:36.096 "zone_management": false, 00:13:36.096 "zone_append": false, 00:13:36.096 "compare": false, 00:13:36.096 "compare_and_write": false, 00:13:36.096 "abort": false, 00:13:36.096 "seek_hole": false, 00:13:36.096 "seek_data": false, 00:13:36.096 "copy": false, 00:13:36.096 "nvme_iov_md": false 00:13:36.096 }, 00:13:36.096 "memory_domains": [ 00:13:36.096 { 00:13:36.096 "dma_device_id": "system", 00:13:36.096 "dma_device_type": 1 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.096 "dma_device_type": 2 
00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "system", 00:13:36.096 "dma_device_type": 1 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.096 "dma_device_type": 2 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "system", 00:13:36.096 "dma_device_type": 1 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.096 "dma_device_type": 2 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "system", 00:13:36.096 "dma_device_type": 1 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.096 "dma_device_type": 2 00:13:36.096 } 00:13:36.096 ], 00:13:36.096 "driver_specific": { 00:13:36.096 "raid": { 00:13:36.096 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:36.096 "strip_size_kb": 64, 00:13:36.096 "state": "online", 00:13:36.096 "raid_level": "raid0", 00:13:36.096 "superblock": true, 00:13:36.096 "num_base_bdevs": 4, 00:13:36.096 "num_base_bdevs_discovered": 4, 00:13:36.096 "num_base_bdevs_operational": 4, 00:13:36.096 "base_bdevs_list": [ 00:13:36.096 { 00:13:36.096 "name": "pt1", 00:13:36.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.096 "is_configured": true, 00:13:36.096 "data_offset": 2048, 00:13:36.096 "data_size": 63488 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "name": "pt2", 00:13:36.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.096 "is_configured": true, 00:13:36.096 "data_offset": 2048, 00:13:36.096 "data_size": 63488 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "name": "pt3", 00:13:36.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.096 "is_configured": true, 00:13:36.096 "data_offset": 2048, 00:13:36.096 "data_size": 63488 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "name": "pt4", 00:13:36.096 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.096 "is_configured": true, 00:13:36.096 "data_offset": 2048, 00:13:36.096 "data_size": 63488 00:13:36.096 } 00:13:36.096 ] 00:13:36.096 } 00:13:36.096 } 00:13:36.096 }' 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:36.096 pt2 00:13:36.096 pt3 00:13:36.096 pt4' 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:36.096 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.356 "name": "pt1", 00:13:36.356 "aliases": [ 00:13:36.356 "00000000-0000-0000-0000-000000000001" 00:13:36.356 ], 00:13:36.356 "product_name": "passthru", 00:13:36.356 "block_size": 512, 00:13:36.356 "num_blocks": 65536, 00:13:36.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.356 "assigned_rate_limits": { 00:13:36.356 "rw_ios_per_sec": 0, 00:13:36.356 "rw_mbytes_per_sec": 0, 00:13:36.356 "r_mbytes_per_sec": 0, 00:13:36.356 "w_mbytes_per_sec": 0 00:13:36.356 }, 00:13:36.356 "claimed": true, 00:13:36.356 "claim_type": "exclusive_write", 00:13:36.356 "zoned": false, 00:13:36.356 "supported_io_types": { 00:13:36.356 "read": true, 00:13:36.356 "write": 
true, 00:13:36.356 "unmap": true, 00:13:36.356 "flush": true, 00:13:36.356 "reset": true, 00:13:36.356 "nvme_admin": false, 00:13:36.356 "nvme_io": false, 00:13:36.356 "nvme_io_md": false, 00:13:36.356 "write_zeroes": true, 00:13:36.356 "zcopy": true, 00:13:36.356 "get_zone_info": false, 00:13:36.356 "zone_management": false, 00:13:36.356 "zone_append": false, 00:13:36.356 "compare": false, 00:13:36.356 "compare_and_write": false, 00:13:36.356 "abort": true, 00:13:36.356 "seek_hole": false, 00:13:36.356 "seek_data": false, 00:13:36.356 "copy": true, 00:13:36.356 "nvme_iov_md": false 00:13:36.356 }, 00:13:36.356 "memory_domains": [ 00:13:36.356 { 00:13:36.356 "dma_device_id": "system", 00:13:36.356 "dma_device_type": 1 00:13:36.356 }, 00:13:36.356 { 00:13:36.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.356 "dma_device_type": 2 00:13:36.356 } 00:13:36.356 ], 00:13:36.356 "driver_specific": { 00:13:36.356 "passthru": { 00:13:36.356 "name": "pt1", 00:13:36.356 "base_bdev_name": "malloc1" 00:13:36.356 } 00:13:36.356 } 00:13:36.356 }' 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:36.356 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.615 "name": "pt2", 00:13:36.615 "aliases": [ 00:13:36.615 "00000000-0000-0000-0000-000000000002" 00:13:36.615 ], 00:13:36.615 "product_name": "passthru", 00:13:36.615 "block_size": 512, 00:13:36.615 "num_blocks": 65536, 00:13:36.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.615 "assigned_rate_limits": { 00:13:36.615 "rw_ios_per_sec": 0, 00:13:36.615 "rw_mbytes_per_sec": 0, 00:13:36.615 "r_mbytes_per_sec": 0, 00:13:36.615 "w_mbytes_per_sec": 0 00:13:36.615 }, 00:13:36.615 "claimed": true, 00:13:36.615 "claim_type": "exclusive_write", 00:13:36.615 "zoned": false, 00:13:36.615 "supported_io_types": { 00:13:36.615 "read": true, 00:13:36.615 "write": true, 00:13:36.615 "unmap": true, 00:13:36.615 "flush": true, 00:13:36.615 "reset": true, 00:13:36.615 "nvme_admin": false, 00:13:36.615 "nvme_io": false, 
00:13:36.615 "nvme_io_md": false, 00:13:36.615 "write_zeroes": true, 00:13:36.615 "zcopy": true, 00:13:36.615 "get_zone_info": false, 00:13:36.615 "zone_management": false, 00:13:36.615 "zone_append": false, 00:13:36.615 "compare": false, 00:13:36.615 "compare_and_write": false, 00:13:36.615 "abort": true, 00:13:36.615 "seek_hole": false, 00:13:36.615 "seek_data": false, 00:13:36.615 "copy": true, 00:13:36.615 "nvme_iov_md": false 00:13:36.615 }, 00:13:36.615 "memory_domains": [ 00:13:36.615 { 00:13:36.615 "dma_device_id": "system", 00:13:36.615 "dma_device_type": 1 00:13:36.615 }, 00:13:36.615 { 00:13:36.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.615 "dma_device_type": 2 00:13:36.615 } 00:13:36.615 ], 00:13:36.615 "driver_specific": { 00:13:36.615 "passthru": { 00:13:36.615 "name": "pt2", 00:13:36.615 "base_bdev_name": "malloc2" 00:13:36.615 } 00:13:36.615 } 00:13:36.615 }' 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.615 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.874 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.874 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.874 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.874 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:36.874 21:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.874 "name": "pt3", 00:13:36.874 "aliases": [ 00:13:36.874 "00000000-0000-0000-0000-000000000003" 00:13:36.874 ], 00:13:36.874 "product_name": "passthru", 00:13:36.874 "block_size": 512, 00:13:36.874 "num_blocks": 65536, 00:13:36.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.874 "assigned_rate_limits": { 00:13:36.874 "rw_ios_per_sec": 0, 00:13:36.874 "rw_mbytes_per_sec": 0, 00:13:36.874 "r_mbytes_per_sec": 0, 00:13:36.874 "w_mbytes_per_sec": 0 00:13:36.874 }, 00:13:36.874 "claimed": true, 00:13:36.874 "claim_type": "exclusive_write", 00:13:36.874 "zoned": false, 00:13:36.874 "supported_io_types": { 00:13:36.874 "read": true, 00:13:36.874 "write": true, 00:13:36.874 "unmap": true, 00:13:36.874 "flush": true, 00:13:36.874 "reset": true, 00:13:36.874 "nvme_admin": false, 00:13:36.874 "nvme_io": false, 00:13:36.874 "nvme_io_md": false, 00:13:36.874 "write_zeroes": true, 00:13:36.874 "zcopy": true, 00:13:36.874 "get_zone_info": false, 00:13:36.874 
"zone_management": false, 00:13:36.874 "zone_append": false, 00:13:36.874 "compare": false, 00:13:36.874 "compare_and_write": false, 00:13:36.874 "abort": true, 00:13:36.874 "seek_hole": false, 00:13:36.874 "seek_data": false, 00:13:36.874 "copy": true, 00:13:36.874 "nvme_iov_md": false 00:13:36.874 }, 00:13:36.874 "memory_domains": [ 00:13:36.874 { 00:13:36.874 "dma_device_id": "system", 00:13:36.874 "dma_device_type": 1 00:13:36.874 }, 00:13:36.874 { 00:13:36.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.874 "dma_device_type": 2 00:13:36.874 } 00:13:36.874 ], 00:13:36.874 "driver_specific": { 00:13:36.874 "passthru": { 00:13:36.874 "name": "pt3", 00:13:36.874 "base_bdev_name": "malloc3" 00:13:36.874 } 00:13:36.874 } 00:13:36.874 }' 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.874 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:37.134 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.393 "name": "pt4", 00:13:37.393 "aliases": [ 00:13:37.393 "00000000-0000-0000-0000-000000000004" 00:13:37.393 ], 00:13:37.393 "product_name": "passthru", 00:13:37.393 "block_size": 512, 00:13:37.393 "num_blocks": 65536, 00:13:37.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.393 "assigned_rate_limits": { 00:13:37.393 "rw_ios_per_sec": 0, 00:13:37.393 "rw_mbytes_per_sec": 0, 00:13:37.393 "r_mbytes_per_sec": 0, 00:13:37.393 "w_mbytes_per_sec": 0 00:13:37.393 }, 00:13:37.393 "claimed": true, 00:13:37.393 "claim_type": "exclusive_write", 00:13:37.393 "zoned": false, 00:13:37.393 "supported_io_types": { 00:13:37.393 "read": true, 00:13:37.393 "write": true, 00:13:37.393 "unmap": true, 00:13:37.393 "flush": true, 00:13:37.393 "reset": true, 00:13:37.393 "nvme_admin": false, 00:13:37.393 "nvme_io": false, 00:13:37.393 "nvme_io_md": false, 00:13:37.393 "write_zeroes": true, 00:13:37.393 "zcopy": true, 00:13:37.393 "get_zone_info": false, 00:13:37.393 "zone_management": false, 00:13:37.393 "zone_append": false, 00:13:37.393 "compare": false, 00:13:37.393 "compare_and_write": false, 00:13:37.393 "abort": 
true, 00:13:37.393 "seek_hole": false, 00:13:37.393 "seek_data": false, 00:13:37.393 "copy": true, 00:13:37.393 "nvme_iov_md": false 00:13:37.393 }, 00:13:37.393 "memory_domains": [ 00:13:37.393 { 00:13:37.393 "dma_device_id": "system", 00:13:37.393 "dma_device_type": 1 00:13:37.393 }, 00:13:37.393 { 00:13:37.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.393 "dma_device_type": 2 00:13:37.393 } 00:13:37.393 ], 00:13:37.393 "driver_specific": { 00:13:37.393 "passthru": { 00:13:37.393 "name": "pt4", 00:13:37.393 "base_bdev_name": "malloc4" 00:13:37.393 } 00:13:37.393 } 00:13:37.393 }' 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:37.393 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:37.652 [2024-07-15 21:49:52.684165] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.652 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=294cb225-42f4-11ef-9f7f-e9a656123a8b 00:13:37.652 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 294cb225-42f4-11ef-9f7f-e9a656123a8b ']' 00:13:37.652 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:37.912 [2024-07-15 21:49:52.900125] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.912 [2024-07-15 21:49:52.900144] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.912 [2024-07-15 21:49:52.900180] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.912 [2024-07-15 21:49:52.900194] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.912 [2024-07-15 21:49:52.900198] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1baabc035900 name raid_bdev1, state offline 00:13:37.912 21:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.912 21:49:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:38.177 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:38.177 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:38.177 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.177 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:38.441 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.441 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:38.698 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.698 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:38.955 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.955 21:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:38.955 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:38.955 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:39.213 21:49:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:39.472 [2024-07-15 21:49:54.588195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:39.472 [2024-07-15 21:49:54.588900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:39.472 [2024-07-15 21:49:54.588956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:39.472 [2024-07-15 21:49:54.588964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:39.472 [2024-07-15 21:49:54.589000] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:39.472 [2024-07-15 21:49:54.589033] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:39.472 [2024-07-15 21:49:54.589060] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:39.472 [2024-07-15 21:49:54.589083] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:39.472 [2024-07-15 21:49:54.589091] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.472 [2024-07-15 21:49:54.589095] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1baabc035680 name raid_bdev1, state configuring 00:13:39.472 request: 00:13:39.472 { 00:13:39.472 "name": "raid_bdev1", 00:13:39.472 "raid_level": "raid0", 00:13:39.472 "base_bdevs": [ 00:13:39.472 "malloc1", 00:13:39.472 "malloc2", 00:13:39.472 "malloc3", 00:13:39.472 "malloc4" 00:13:39.472 ], 00:13:39.472 "strip_size_kb": 64, 00:13:39.472 "superblock": false, 00:13:39.472 "method": "bdev_raid_create", 00:13:39.472 "req_id": 1 00:13:39.472 } 00:13:39.472 Got JSON-RPC error response 00:13:39.472 response: 00:13:39.472 { 00:13:39.472 "code": -17, 00:13:39.472 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:39.472 } 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:39.472 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.730 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:39.730 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:39.730 21:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.015 [2024-07-15 21:49:55.020214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.015 [2024-07-15 21:49:55.020277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:40.015 [2024-07-15 21:49:55.020304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc035180 00:13:40.015 [2024-07-15 21:49:55.020311] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.015 [2024-07-15 21:49:55.021125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.015 [2024-07-15 21:49:55.021168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.015 [2024-07-15 21:49:55.021193] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:40.015 [2024-07-15 21:49:55.021204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:40.015 pt1 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.015 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.287 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.287 "name": "raid_bdev1", 00:13:40.287 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:40.287 "strip_size_kb": 64, 00:13:40.287 "state": "configuring", 00:13:40.287 "raid_level": "raid0", 00:13:40.287 "superblock": true, 00:13:40.287 "num_base_bdevs": 4, 00:13:40.287 "num_base_bdevs_discovered": 1, 00:13:40.287 "num_base_bdevs_operational": 4, 00:13:40.287 "base_bdevs_list": [ 00:13:40.287 { 00:13:40.287 "name": "pt1", 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.287 "is_configured": true, 00:13:40.287 "data_offset": 2048, 00:13:40.287 "data_size": 63488 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": null, 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.287 "is_configured": false, 00:13:40.287 "data_offset": 2048, 00:13:40.287 "data_size": 63488 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": null, 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.287 "is_configured": false, 00:13:40.287 "data_offset": 2048, 00:13:40.287 "data_size": 63488 00:13:40.287 }, 00:13:40.287 { 00:13:40.287 "name": null, 00:13:40.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.287 "is_configured": false, 00:13:40.287 "data_offset": 2048, 00:13:40.287 "data_size": 63488 
00:13:40.287 } 00:13:40.287 ] 00:13:40.287 }' 00:13:40.287 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.287 21:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.546 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:13:40.546 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.804 [2024-07-15 21:49:55.816254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.804 [2024-07-15 21:49:55.816359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.804 [2024-07-15 21:49:55.816370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc034780 00:13:40.804 [2024-07-15 21:49:55.816378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.804 [2024-07-15 21:49:55.816524] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.804 [2024-07-15 21:49:55.816534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.804 [2024-07-15 21:49:55.816565] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.804 [2024-07-15 21:49:55.816574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.804 pt2 00:13:40.804 21:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:41.062 [2024-07-15 21:49:56.068269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.062 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.320 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.320 "name": "raid_bdev1", 00:13:41.320 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:41.320 "strip_size_kb": 64, 00:13:41.320 "state": "configuring", 00:13:41.320 "raid_level": 
"raid0", 00:13:41.320 "superblock": true, 00:13:41.320 "num_base_bdevs": 4, 00:13:41.320 "num_base_bdevs_discovered": 1, 00:13:41.320 "num_base_bdevs_operational": 4, 00:13:41.320 "base_bdevs_list": [ 00:13:41.320 { 00:13:41.320 "name": "pt1", 00:13:41.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.320 "is_configured": true, 00:13:41.320 "data_offset": 2048, 00:13:41.320 "data_size": 63488 00:13:41.320 }, 00:13:41.320 { 00:13:41.320 "name": null, 00:13:41.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.320 "is_configured": false, 00:13:41.320 "data_offset": 2048, 00:13:41.320 "data_size": 63488 00:13:41.320 }, 00:13:41.320 { 00:13:41.320 "name": null, 00:13:41.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.320 "is_configured": false, 00:13:41.320 "data_offset": 2048, 00:13:41.320 "data_size": 63488 00:13:41.320 }, 00:13:41.320 { 00:13:41.320 "name": null, 00:13:41.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.320 "is_configured": false, 00:13:41.320 "data_offset": 2048, 00:13:41.320 "data_size": 63488 00:13:41.320 } 00:13:41.320 ] 00:13:41.320 }' 00:13:41.320 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.320 21:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.578 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:41.578 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:41.578 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.578 [2024-07-15 21:49:56.756277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.578 [2024-07-15 21:49:56.756338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.578 [2024-07-15 21:49:56.756364] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc034780 00:13:41.578 [2024-07-15 21:49:56.756371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.578 [2024-07-15 21:49:56.756523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.578 [2024-07-15 21:49:56.756533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.578 [2024-07-15 21:49:56.756561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:41.578 [2024-07-15 21:49:56.756569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.578 pt2 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.836 [2024-07-15 21:49:56.972293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.836 [2024-07-15 21:49:56.972377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.836 [2024-07-15 21:49:56.972402] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc035b80 00:13:41.836 
[2024-07-15 21:49:56.972410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.836 [2024-07-15 21:49:56.972539] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.836 [2024-07-15 21:49:56.972550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.836 [2024-07-15 21:49:56.972572] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:41.836 [2024-07-15 21:49:56.972581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.836 pt3 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:41.836 21:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.093 [2024-07-15 21:49:57.184292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.093 [2024-07-15 21:49:57.184352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.093 [2024-07-15 21:49:57.184377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1baabc035900 00:13:42.093 [2024-07-15 21:49:57.184384] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.093 [2024-07-15 21:49:57.184507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.093 [2024-07-15 21:49:57.184516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.093 [2024-07-15 21:49:57.184538] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:42.093 [2024-07-15 21:49:57.184546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.093 [2024-07-15 21:49:57.184607] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1baabc034c80 00:13:42.093 [2024-07-15 21:49:57.184611] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.093 [2024-07-15 21:49:57.184635] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1baabc097e20 00:13:42.093 [2024-07-15 21:49:57.184716] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1baabc034c80 00:13:42.093 [2024-07-15 21:49:57.184736] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1baabc034c80 00:13:42.093 [2024-07-15 21:49:57.184761] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.093 pt4 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.093 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.350 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.350 "name": "raid_bdev1", 00:13:42.350 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:42.350 "strip_size_kb": 64, 00:13:42.350 "state": "online", 00:13:42.350 "raid_level": "raid0", 00:13:42.350 "superblock": true, 00:13:42.350 "num_base_bdevs": 4, 00:13:42.350 "num_base_bdevs_discovered": 4, 00:13:42.350 "num_base_bdevs_operational": 4, 00:13:42.350 "base_bdevs_list": [ 00:13:42.350 { 00:13:42.350 "name": "pt1", 00:13:42.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.350 "is_configured": true, 00:13:42.350 "data_offset": 2048, 00:13:42.350 "data_size": 63488 00:13:42.350 }, 00:13:42.350 { 00:13:42.350 "name": "pt2", 00:13:42.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.350 "is_configured": true, 00:13:42.350 "data_offset": 2048, 00:13:42.350 "data_size": 63488 00:13:42.350 }, 00:13:42.350 { 00:13:42.350 "name": "pt3", 00:13:42.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.350 "is_configured": true, 00:13:42.350 "data_offset": 2048, 00:13:42.350 "data_size": 63488 00:13:42.350 }, 00:13:42.350 { 00:13:42.350 "name": "pt4", 00:13:42.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.350 "is_configured": true, 00:13:42.350 "data_offset": 2048, 00:13:42.350 "data_size": 63488 00:13:42.350 } 00:13:42.350 ] 00:13:42.350 }' 00:13:42.350 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.350 21:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:42.608 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:42.866 [2024-07-15 21:49:57.972386] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.866 21:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:42.866 "name": "raid_bdev1", 00:13:42.866 "aliases": [ 00:13:42.866 "294cb225-42f4-11ef-9f7f-e9a656123a8b" 00:13:42.866 ], 00:13:42.866 "product_name": "Raid Volume", 00:13:42.866 "block_size": 512, 00:13:42.866 "num_blocks": 253952, 00:13:42.866 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:42.866 "assigned_rate_limits": { 00:13:42.866 "rw_ios_per_sec": 0, 00:13:42.866 "rw_mbytes_per_sec": 0, 00:13:42.866 "r_mbytes_per_sec": 0, 00:13:42.866 "w_mbytes_per_sec": 0 00:13:42.866 }, 00:13:42.866 "claimed": false, 00:13:42.866 "zoned": false, 00:13:42.866 "supported_io_types": { 00:13:42.866 "read": true, 00:13:42.866 "write": true, 00:13:42.866 "unmap": true, 00:13:42.866 "flush": true, 00:13:42.866 "reset": true, 00:13:42.866 "nvme_admin": false, 00:13:42.866 "nvme_io": false, 00:13:42.866 "nvme_io_md": false, 00:13:42.866 "write_zeroes": true, 00:13:42.866 "zcopy": false, 00:13:42.866 "get_zone_info": false, 00:13:42.866 "zone_management": false, 00:13:42.866 "zone_append": false, 00:13:42.866 "compare": false, 00:13:42.866 "compare_and_write": false, 00:13:42.866 "abort": false, 00:13:42.866 "seek_hole": false, 00:13:42.866 "seek_data": false, 00:13:42.866 "copy": false, 00:13:42.866 "nvme_iov_md": false 00:13:42.866 }, 00:13:42.866 "memory_domains": [ 00:13:42.866 { 00:13:42.866 "dma_device_id": "system", 00:13:42.866 "dma_device_type": 1 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.866 "dma_device_type": 2 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "system", 00:13:42.866 "dma_device_type": 1 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.866 "dma_device_type": 2 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "system", 00:13:42.866 "dma_device_type": 1 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.866 "dma_device_type": 2 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "system", 00:13:42.866 "dma_device_type": 1 00:13:42.866 }, 00:13:42.866 { 00:13:42.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.866 "dma_device_type": 2 00:13:42.866 } 00:13:42.866 ], 00:13:42.866 "driver_specific": { 00:13:42.866 "raid": { 00:13:42.866 "uuid": "294cb225-42f4-11ef-9f7f-e9a656123a8b", 00:13:42.866 "strip_size_kb": 64, 00:13:42.866 "state": "online", 00:13:42.867 "raid_level": "raid0", 00:13:42.867 "superblock": true, 00:13:42.867 "num_base_bdevs": 4, 00:13:42.867 "num_base_bdevs_discovered": 4, 00:13:42.867 "num_base_bdevs_operational": 4, 00:13:42.867 "base_bdevs_list": [ 00:13:42.867 { 00:13:42.867 "name": "pt1", 00:13:42.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.867 "is_configured": true, 00:13:42.867 "data_offset": 2048, 00:13:42.867 "data_size": 63488 00:13:42.867 }, 00:13:42.867 { 00:13:42.867 "name": "pt2", 00:13:42.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.867 "is_configured": true, 00:13:42.867 "data_offset": 2048, 00:13:42.867 "data_size": 63488 00:13:42.867 }, 00:13:42.867 { 00:13:42.867 "name": "pt3", 00:13:42.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.867 "is_configured": true, 00:13:42.867 "data_offset": 2048, 00:13:42.867 "data_size": 63488 00:13:42.867 }, 00:13:42.867 { 00:13:42.867 "name": "pt4", 00:13:42.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.867 "is_configured": true, 00:13:42.867 "data_offset": 2048, 00:13:42.867 
"data_size": 63488 00:13:42.867 } 00:13:42.867 ] 00:13:42.867 } 00:13:42.867 } 00:13:42.867 }' 00:13:42.867 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.867 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:42.867 pt2 00:13:42.867 pt3 00:13:42.867 pt4' 00:13:42.867 21:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.867 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:42.867 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.125 "name": "pt1", 00:13:43.125 "aliases": [ 00:13:43.125 "00000000-0000-0000-0000-000000000001" 00:13:43.125 ], 00:13:43.125 "product_name": "passthru", 00:13:43.125 "block_size": 512, 00:13:43.125 "num_blocks": 65536, 00:13:43.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.125 "assigned_rate_limits": { 00:13:43.125 "rw_ios_per_sec": 0, 00:13:43.125 "rw_mbytes_per_sec": 0, 00:13:43.125 "r_mbytes_per_sec": 0, 00:13:43.125 "w_mbytes_per_sec": 0 00:13:43.125 }, 00:13:43.125 "claimed": true, 00:13:43.125 "claim_type": "exclusive_write", 00:13:43.125 "zoned": false, 00:13:43.125 "supported_io_types": { 00:13:43.125 "read": true, 00:13:43.125 "write": true, 00:13:43.125 "unmap": true, 00:13:43.125 "flush": true, 00:13:43.125 "reset": true, 00:13:43.125 "nvme_admin": false, 00:13:43.125 "nvme_io": false, 00:13:43.125 "nvme_io_md": false, 00:13:43.125 "write_zeroes": true, 00:13:43.125 "zcopy": true, 00:13:43.125 "get_zone_info": false, 00:13:43.125 "zone_management": false, 00:13:43.125 "zone_append": false, 00:13:43.125 "compare": false, 00:13:43.125 "compare_and_write": false, 00:13:43.125 "abort": true, 00:13:43.125 "seek_hole": false, 00:13:43.125 "seek_data": false, 00:13:43.125 "copy": true, 00:13:43.125 "nvme_iov_md": false 00:13:43.125 }, 00:13:43.125 "memory_domains": [ 00:13:43.125 { 00:13:43.125 "dma_device_id": "system", 00:13:43.125 "dma_device_type": 1 00:13:43.125 }, 00:13:43.125 { 00:13:43.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.125 "dma_device_type": 2 00:13:43.125 } 00:13:43.125 ], 00:13:43.125 "driver_specific": { 00:13:43.125 "passthru": { 00:13:43.125 "name": "pt1", 00:13:43.125 "base_bdev_name": "malloc1" 00:13:43.125 } 00:13:43.125 } 00:13:43.125 }' 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.125 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.125 21:49:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.383 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.383 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.383 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.383 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:43.383 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.642 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.642 "name": "pt2", 00:13:43.642 "aliases": [ 00:13:43.642 "00000000-0000-0000-0000-000000000002" 00:13:43.642 ], 00:13:43.642 "product_name": "passthru", 00:13:43.642 "block_size": 512, 00:13:43.642 "num_blocks": 65536, 00:13:43.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.642 "assigned_rate_limits": { 00:13:43.642 "rw_ios_per_sec": 0, 00:13:43.642 "rw_mbytes_per_sec": 0, 00:13:43.642 "r_mbytes_per_sec": 0, 00:13:43.642 "w_mbytes_per_sec": 0 00:13:43.642 }, 00:13:43.642 "claimed": true, 00:13:43.642 "claim_type": "exclusive_write", 00:13:43.642 "zoned": false, 00:13:43.642 "supported_io_types": { 00:13:43.642 "read": true, 00:13:43.642 "write": true, 00:13:43.642 "unmap": true, 00:13:43.642 "flush": true, 00:13:43.642 "reset": true, 00:13:43.642 "nvme_admin": false, 00:13:43.642 "nvme_io": false, 00:13:43.642 "nvme_io_md": false, 00:13:43.642 "write_zeroes": true, 00:13:43.642 "zcopy": true, 00:13:43.642 "get_zone_info": false, 00:13:43.642 "zone_management": false, 00:13:43.642 "zone_append": false, 00:13:43.642 "compare": false, 00:13:43.642 "compare_and_write": false, 00:13:43.642 "abort": true, 00:13:43.642 "seek_hole": false, 00:13:43.642 "seek_data": false, 00:13:43.642 "copy": true, 00:13:43.642 "nvme_iov_md": false 00:13:43.642 }, 00:13:43.642 "memory_domains": [ 00:13:43.642 { 00:13:43.642 "dma_device_id": "system", 00:13:43.642 "dma_device_type": 1 00:13:43.642 }, 00:13:43.642 { 00:13:43.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.642 "dma_device_type": 2 00:13:43.642 } 00:13:43.642 ], 00:13:43.642 "driver_specific": { 00:13:43.642 "passthru": { 00:13:43.642 "name": "pt2", 00:13:43.642 "base_bdev_name": "malloc2" 00:13:43.642 } 00:13:43.642 } 00:13:43.642 }' 00:13:43.642 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.642 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.642 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:43.643 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.903 "name": "pt3", 00:13:43.903 "aliases": [ 00:13:43.903 "00000000-0000-0000-0000-000000000003" 00:13:43.903 ], 00:13:43.903 "product_name": "passthru", 00:13:43.903 "block_size": 512, 00:13:43.903 "num_blocks": 65536, 00:13:43.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.903 "assigned_rate_limits": { 00:13:43.903 "rw_ios_per_sec": 0, 00:13:43.903 "rw_mbytes_per_sec": 0, 00:13:43.903 "r_mbytes_per_sec": 0, 00:13:43.903 "w_mbytes_per_sec": 0 00:13:43.903 }, 00:13:43.903 "claimed": true, 00:13:43.903 "claim_type": "exclusive_write", 00:13:43.903 "zoned": false, 00:13:43.903 "supported_io_types": { 00:13:43.903 "read": true, 00:13:43.903 "write": true, 00:13:43.903 "unmap": true, 00:13:43.903 "flush": true, 00:13:43.903 "reset": true, 00:13:43.903 "nvme_admin": false, 00:13:43.903 "nvme_io": false, 00:13:43.903 "nvme_io_md": false, 00:13:43.903 "write_zeroes": true, 00:13:43.903 "zcopy": true, 00:13:43.903 "get_zone_info": false, 00:13:43.903 "zone_management": false, 00:13:43.903 "zone_append": false, 00:13:43.903 "compare": false, 00:13:43.903 "compare_and_write": false, 00:13:43.903 "abort": true, 00:13:43.903 "seek_hole": false, 00:13:43.903 "seek_data": false, 00:13:43.903 "copy": true, 00:13:43.903 "nvme_iov_md": false 00:13:43.903 }, 00:13:43.903 "memory_domains": [ 00:13:43.903 { 00:13:43.903 "dma_device_id": "system", 00:13:43.903 "dma_device_type": 1 00:13:43.903 }, 00:13:43.903 { 00:13:43.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.903 "dma_device_type": 2 00:13:43.903 } 00:13:43.903 ], 00:13:43.903 "driver_specific": { 00:13:43.903 "passthru": { 00:13:43.903 "name": "pt3", 00:13:43.903 "base_bdev_name": "malloc3" 00:13:43.903 } 00:13:43.903 } 00:13:43.903 }' 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.903 21:49:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:43.903 21:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.163 "name": "pt4", 00:13:44.163 "aliases": [ 00:13:44.163 "00000000-0000-0000-0000-000000000004" 00:13:44.163 ], 00:13:44.163 "product_name": "passthru", 00:13:44.163 "block_size": 512, 00:13:44.163 "num_blocks": 65536, 00:13:44.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.163 "assigned_rate_limits": { 00:13:44.163 "rw_ios_per_sec": 0, 00:13:44.163 "rw_mbytes_per_sec": 0, 00:13:44.163 "r_mbytes_per_sec": 0, 00:13:44.163 "w_mbytes_per_sec": 0 00:13:44.163 }, 00:13:44.163 "claimed": true, 00:13:44.163 "claim_type": "exclusive_write", 00:13:44.163 "zoned": false, 00:13:44.163 "supported_io_types": { 00:13:44.163 "read": true, 00:13:44.163 "write": true, 00:13:44.163 "unmap": true, 00:13:44.163 "flush": true, 00:13:44.163 "reset": true, 00:13:44.163 "nvme_admin": false, 00:13:44.163 "nvme_io": false, 00:13:44.163 "nvme_io_md": false, 00:13:44.163 "write_zeroes": true, 00:13:44.163 "zcopy": true, 00:13:44.163 "get_zone_info": false, 00:13:44.163 "zone_management": false, 00:13:44.163 "zone_append": false, 00:13:44.163 "compare": false, 00:13:44.163 "compare_and_write": false, 00:13:44.163 "abort": true, 00:13:44.163 "seek_hole": false, 00:13:44.163 "seek_data": false, 00:13:44.163 "copy": true, 00:13:44.163 "nvme_iov_md": false 00:13:44.163 }, 00:13:44.163 "memory_domains": [ 00:13:44.163 { 00:13:44.163 "dma_device_id": "system", 00:13:44.163 "dma_device_type": 1 00:13:44.163 }, 00:13:44.163 { 00:13:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.163 "dma_device_type": 2 00:13:44.163 } 00:13:44.163 ], 00:13:44.163 "driver_specific": { 00:13:44.163 "passthru": { 00:13:44.163 "name": "pt4", 00:13:44.163 "base_bdev_name": "malloc4" 00:13:44.163 } 00:13:44.163 } 00:13:44.163 }' 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:13:44.163 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:44.423 [2024-07-15 21:49:59.532479] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 294cb225-42f4-11ef-9f7f-e9a656123a8b '!=' 294cb225-42f4-11ef-9f7f-e9a656123a8b ']' 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 59984 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 59984 ']' 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 59984 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 59984 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:13:44.423 killing process with pid 59984 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 59984' 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 59984 00:13:44.423 [2024-07-15 21:49:59.563175] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.423 [2024-07-15 21:49:59.563206] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.423 [2024-07-15 21:49:59.563228] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.423 [2024-07-15 21:49:59.563232] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1baabc034c80 name raid_bdev1, state offline 00:13:44.423 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 59984 00:13:44.423 [2024-07-15 21:49:59.587138] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.683 21:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:44.683 00:13:44.683 real 0m12.494s 00:13:44.683 user 0m22.126s 00:13:44.683 sys 0m2.062s 00:13:44.683 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:44.683 21:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 ************************************ 00:13:44.683 END TEST raid_superblock_test 00:13:44.683 ************************************ 00:13:44.683 21:49:59 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:13:44.683 21:49:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:44.683 21:49:59 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:13:44.683 21:49:59 bdev_raid -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:13:44.683 21:49:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 ************************************ 00:13:44.683 START TEST raid_read_error_test 00:13:44.683 ************************************ 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 4 read 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3aAtrhm7EF 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60381 00:13:44.683 21:49:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60381 /var/tmp/spdk-raid.sock 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 60381 ']' 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:44.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:44.683 21:49:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.683 [2024-07-15 21:49:59.822157] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:44.683 [2024-07-15 21:49:59.822353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:45.252 EAL: TSC is not safe to use in SMP mode 00:13:45.252 EAL: TSC is not invariant 00:13:45.252 [2024-07-15 21:50:00.368630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.510 [2024-07-15 21:50:00.444357] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
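
For orientation: this is the start of raid_io_error_test, and the EAL and app start-up notices here are bdevperf coming up on the FreeBSD VM. The launch expands to roughly the following shell, reconstructed from the xtrace (the `$!` capture is inferred from raid_pid=60381, and the redirect into the mktemp'd log is assumed, since xtrace does not print redirections):

    bdevperf_log=/raidtest/tmp.3aAtrhm7EF
    # -z keeps bdevperf idle until tests are kicked off over RPC; -t 60 -w randrw
    # -M 50 -o 128k -q 1 describes the eventual workload (60 s, 50/50 random
    # read/write, 128 KiB I/Os, queue depth 1) against the raid_bdev1 target (-T),
    # with bdev_raid debug logging enabled via -L.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" 2>&1 &   # redirect assumed, see note above
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # blocks until the RPC socket answers
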
00:13:45.510 [2024-07-15 21:50:00.446858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.510 [2024-07-15 21:50:00.447796] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.510 [2024-07-15 21:50:00.447811] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.769 21:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:45.769 21:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:13:45.769 21:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:45.769 21:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.028 BaseBdev1_malloc 00:13:46.028 21:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:46.286 true 00:13:46.286 21:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:46.545 [2024-07-15 21:50:01.662842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:46.545 [2024-07-15 21:50:01.662927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.545 [2024-07-15 21:50:01.662974] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a22d5434780 00:13:46.545 [2024-07-15 21:50:01.662982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.545 [2024-07-15 21:50:01.663685] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.545 [2024-07-15 21:50:01.663746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.545 BaseBdev1 00:13:46.545 21:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:46.545 21:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.805 BaseBdev2_malloc 00:13:46.805 21:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:47.065 true 00:13:47.065 21:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:47.324 [2024-07-15 21:50:02.394957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:47.324 [2024-07-15 21:50:02.395057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.324 [2024-07-15 21:50:02.395097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a22d5434c80 00:13:47.324 [2024-07-15 21:50:02.395105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.324 [2024-07-15 21:50:02.395861] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.324 [2024-07-15 21:50:02.395886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:13:47.324 BaseBdev2 00:13:47.324 21:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:47.324 21:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.582 BaseBdev3_malloc 00:13:47.582 21:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:47.842 true 00:13:47.842 21:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:48.101 [2024-07-15 21:50:03.043034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:48.101 [2024-07-15 21:50:03.043111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.101 [2024-07-15 21:50:03.043153] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a22d5435180 00:13:48.101 [2024-07-15 21:50:03.043161] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.101 [2024-07-15 21:50:03.043870] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.101 [2024-07-15 21:50:03.043924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:48.101 BaseBdev3 00:13:48.101 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:48.101 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:48.101 BaseBdev4_malloc 00:13:48.101 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:48.359 true 00:13:48.360 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:48.618 [2024-07-15 21:50:03.755062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:48.618 [2024-07-15 21:50:03.755139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.618 [2024-07-15 21:50:03.755178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a22d5435680 00:13:48.618 [2024-07-15 21:50:03.755186] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.618 [2024-07-15 21:50:03.755868] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.618 [2024-07-15 21:50:03.755893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:48.618 BaseBdev4 00:13:48.618 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:48.878 [2024-07-15 21:50:03.975059] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.878 [2024-07-15 21:50:03.975635] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.878 [2024-07-15 21:50:03.975659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.878 [2024-07-15 21:50:03.975673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.878 [2024-07-15 21:50:03.975736] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a22d5435900 00:13:48.878 [2024-07-15 21:50:03.975743] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.878 [2024-07-15 21:50:03.975793] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a22d54a0e20 00:13:48.878 [2024-07-15 21:50:03.975887] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a22d5435900 00:13:48.878 [2024-07-15 21:50:03.975892] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a22d5435900 00:13:48.878 [2024-07-15 21:50:03.975918] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.878 21:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.137 21:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.137 "name": "raid_bdev1", 00:13:49.137 "uuid": "31623f7e-42f4-11ef-9f7f-e9a656123a8b", 00:13:49.137 "strip_size_kb": 64, 00:13:49.137 "state": "online", 00:13:49.137 "raid_level": "raid0", 00:13:49.137 "superblock": true, 00:13:49.137 "num_base_bdevs": 4, 00:13:49.137 "num_base_bdevs_discovered": 4, 00:13:49.137 "num_base_bdevs_operational": 4, 00:13:49.137 "base_bdevs_list": [ 00:13:49.137 { 00:13:49.137 "name": "BaseBdev1", 00:13:49.137 "uuid": "a664bc2b-9530-c354-ad58-f8004d0f7add", 00:13:49.137 "is_configured": true, 00:13:49.137 "data_offset": 2048, 00:13:49.137 "data_size": 63488 00:13:49.137 }, 00:13:49.137 { 00:13:49.137 "name": "BaseBdev2", 00:13:49.137 "uuid": "41fbb716-1e68-f058-bae3-306b9072e0fc", 00:13:49.137 "is_configured": true, 00:13:49.137 "data_offset": 2048, 00:13:49.137 "data_size": 63488 00:13:49.137 }, 00:13:49.137 { 00:13:49.137 "name": "BaseBdev3", 00:13:49.137 "uuid": 
"3a7128ff-11a2-985e-92ab-69fb8743e7e7", 00:13:49.137 "is_configured": true, 00:13:49.137 "data_offset": 2048, 00:13:49.137 "data_size": 63488 00:13:49.137 }, 00:13:49.137 { 00:13:49.137 "name": "BaseBdev4", 00:13:49.137 "uuid": "c2b2b5a3-dd7b-1652-87e6-1daf6042f2db", 00:13:49.137 "is_configured": true, 00:13:49.137 "data_offset": 2048, 00:13:49.137 "data_size": 63488 00:13:49.137 } 00:13:49.137 ] 00:13:49.137 }' 00:13:49.137 21:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.137 21:50:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.398 21:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:49.398 21:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:49.657 [2024-07-15 21:50:04.703314] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a22d54a0ec0 00:13:50.595 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.854 21:50:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.113 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.113 "name": "raid_bdev1", 00:13:51.113 "uuid": "31623f7e-42f4-11ef-9f7f-e9a656123a8b", 00:13:51.113 "strip_size_kb": 64, 00:13:51.113 "state": "online", 00:13:51.113 "raid_level": "raid0", 00:13:51.113 "superblock": true, 00:13:51.113 "num_base_bdevs": 4, 00:13:51.113 "num_base_bdevs_discovered": 4, 00:13:51.113 "num_base_bdevs_operational": 4, 00:13:51.113 "base_bdevs_list": [ 00:13:51.113 { 00:13:51.113 "name": "BaseBdev1", 00:13:51.113 "uuid": 
"a664bc2b-9530-c354-ad58-f8004d0f7add", 00:13:51.113 "is_configured": true, 00:13:51.113 "data_offset": 2048, 00:13:51.113 "data_size": 63488 00:13:51.113 }, 00:13:51.113 { 00:13:51.113 "name": "BaseBdev2", 00:13:51.113 "uuid": "41fbb716-1e68-f058-bae3-306b9072e0fc", 00:13:51.113 "is_configured": true, 00:13:51.113 "data_offset": 2048, 00:13:51.113 "data_size": 63488 00:13:51.113 }, 00:13:51.113 { 00:13:51.113 "name": "BaseBdev3", 00:13:51.113 "uuid": "3a7128ff-11a2-985e-92ab-69fb8743e7e7", 00:13:51.113 "is_configured": true, 00:13:51.113 "data_offset": 2048, 00:13:51.113 "data_size": 63488 00:13:51.113 }, 00:13:51.113 { 00:13:51.113 "name": "BaseBdev4", 00:13:51.113 "uuid": "c2b2b5a3-dd7b-1652-87e6-1daf6042f2db", 00:13:51.113 "is_configured": true, 00:13:51.113 "data_offset": 2048, 00:13:51.113 "data_size": 63488 00:13:51.113 } 00:13:51.113 ] 00:13:51.113 }' 00:13:51.113 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.113 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.372 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:51.631 [2024-07-15 21:50:06.745781] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.631 [2024-07-15 21:50:06.745807] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.631 [2024-07-15 21:50:06.746206] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.631 [2024-07-15 21:50:06.746226] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.631 [2024-07-15 21:50:06.746234] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.631 [2024-07-15 21:50:06.746238] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a22d5435900 name raid_bdev1, state offline 00:13:51.631 0 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60381 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 60381 ']' 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 60381 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 60381 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:13:51.631 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:13:51.631 killing process with pid 60381 00:13:51.632 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 60381' 00:13:51.632 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 60381 00:13:51.632 [2024-07-15 21:50:06.775433] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.632 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 60381 00:13:51.632 [2024-07-15 21:50:06.800724] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3aAtrhm7EF 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:13:51.891 ************************************ 00:13:51.891 END TEST raid_read_error_test 00:13:51.891 ************************************ 00:13:51.891 00:13:51.891 real 0m7.180s 00:13:51.891 user 0m11.506s 00:13:51.891 sys 0m1.084s 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:51.891 21:50:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.891 21:50:07 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:13:51.891 21:50:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:51.891 21:50:07 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:13:51.891 21:50:07 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:51.891 21:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.891 ************************************ 00:13:51.891 START TEST raid_write_error_test 00:13:51.891 ************************************ 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid0 4 write 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Jj2XfHOyK2 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60515 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60515 /var/tmp/spdk-raid.sock 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 60515 ']' 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:51.891 21:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.891 [2024-07-15 21:50:07.058788] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:51.891 [2024-07-15 21:50:07.058983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:52.829 EAL: TSC is not safe to use in SMP mode 00:13:52.829 EAL: TSC is not invariant 00:13:52.829 [2024-07-15 21:50:07.882617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.829 [2024-07-15 21:50:07.972488] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
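raid_write_error_test repeats the same stacked-bdev setup and then drives mixed I/O through bdevperf while forcing writes to the first base bdev to fail; because raid0 carries no redundancy, the injected errors must surface as a non-zero failure rate in the bdevperf log. A rough sketch of the injection and verification steps, reconstructed from this log (the pipeline ordering is inferred from the separate xtrace fragments above and below; $bdevperf_log is the mktemp result shown above, /raidtest/tmp.Jj2XfHOyK2 in this run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # make every write to the first base bdev fail
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure

  # kick off the queued bdevperf job against raid_bdev1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests

  # raid0 cannot absorb the injected errors, so fail_per_s must be non-zero
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]

In this run the check evaluates 0.49 against 0.00, as the [[ 0.49 != \0\.\0\0 ]] line near the end of the test shows.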
00:13:52.829 [2024-07-15 21:50:07.974816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.829 [2024-07-15 21:50:07.975634] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.829 [2024-07-15 21:50:07.975646] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.088 21:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:53.088 21:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:13:53.088 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.088 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.346 BaseBdev1_malloc 00:13:53.346 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:53.605 true 00:13:53.605 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.863 [2024-07-15 21:50:08.863444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.864 [2024-07-15 21:50:08.863537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.864 [2024-07-15 21:50:08.863580] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ede5d634780 00:13:53.864 [2024-07-15 21:50:08.863588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.864 [2024-07-15 21:50:08.864089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.864 [2024-07-15 21:50:08.864114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.864 BaseBdev1 00:13:53.864 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.864 21:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.123 BaseBdev2_malloc 00:13:54.123 21:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:54.382 true 00:13:54.382 21:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:54.382 [2024-07-15 21:50:09.523576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:54.382 [2024-07-15 21:50:09.523635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.382 [2024-07-15 21:50:09.523662] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ede5d634c80 00:13:54.382 [2024-07-15 21:50:09.523671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.382 [2024-07-15 21:50:09.524349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.382 [2024-07-15 21:50:09.524375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:13:54.382 BaseBdev2 00:13:54.382 21:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.382 21:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:54.640 BaseBdev3_malloc 00:13:54.641 21:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:54.899 true 00:13:54.899 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:55.157 [2024-07-15 21:50:10.247605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:55.157 [2024-07-15 21:50:10.247651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.157 [2024-07-15 21:50:10.247691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ede5d635180 00:13:55.157 [2024-07-15 21:50:10.247699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.157 [2024-07-15 21:50:10.248324] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.157 [2024-07-15 21:50:10.248348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:55.157 BaseBdev3 00:13:55.157 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:55.157 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:55.415 BaseBdev4_malloc 00:13:55.415 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:55.674 true 00:13:55.674 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:55.932 [2024-07-15 21:50:10.963685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:55.932 [2024-07-15 21:50:10.963735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.932 [2024-07-15 21:50:10.963776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ede5d635680 00:13:55.932 [2024-07-15 21:50:10.963785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.932 [2024-07-15 21:50:10.964595] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.932 [2024-07-15 21:50:10.964621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:55.932 BaseBdev4 00:13:55.932 21:50:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:56.190 [2024-07-15 21:50:11.179704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.190 [2024-07-15 21:50:11.180379] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.190 [2024-07-15 21:50:11.180402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.190 [2024-07-15 21:50:11.180418] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.190 [2024-07-15 21:50:11.180520] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ede5d635900 00:13:56.190 [2024-07-15 21:50:11.180527] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:56.190 [2024-07-15 21:50:11.180584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ede5d6a0e20 00:13:56.190 [2024-07-15 21:50:11.180717] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ede5d635900 00:13:56.190 [2024-07-15 21:50:11.180722] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3ede5d635900 00:13:56.190 [2024-07-15 21:50:11.180761] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.190 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.448 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.448 "name": "raid_bdev1", 00:13:56.448 "uuid": "35ad96f0-42f4-11ef-9f7f-e9a656123a8b", 00:13:56.448 "strip_size_kb": 64, 00:13:56.448 "state": "online", 00:13:56.448 "raid_level": "raid0", 00:13:56.448 "superblock": true, 00:13:56.448 "num_base_bdevs": 4, 00:13:56.448 "num_base_bdevs_discovered": 4, 00:13:56.448 "num_base_bdevs_operational": 4, 00:13:56.448 "base_bdevs_list": [ 00:13:56.448 { 00:13:56.448 "name": "BaseBdev1", 00:13:56.448 "uuid": "9796551c-c8ee-0e5e-9cfa-05fe38cac605", 00:13:56.448 "is_configured": true, 00:13:56.448 "data_offset": 2048, 00:13:56.448 "data_size": 63488 00:13:56.448 }, 00:13:56.448 { 00:13:56.448 "name": "BaseBdev2", 00:13:56.448 "uuid": "eaea31f2-e2c7-4d5b-8f93-7a214ee32530", 00:13:56.448 "is_configured": true, 00:13:56.448 "data_offset": 2048, 00:13:56.448 "data_size": 63488 00:13:56.448 }, 00:13:56.448 { 00:13:56.448 "name": "BaseBdev3", 00:13:56.448 "uuid": 
"21e9b638-9eee-4451-a028-b1e9bddea2f8", 00:13:56.448 "is_configured": true, 00:13:56.448 "data_offset": 2048, 00:13:56.448 "data_size": 63488 00:13:56.448 }, 00:13:56.448 { 00:13:56.448 "name": "BaseBdev4", 00:13:56.448 "uuid": "8d1b23a3-c3fc-f15c-8e5e-b65da63bd300", 00:13:56.448 "is_configured": true, 00:13:56.448 "data_offset": 2048, 00:13:56.448 "data_size": 63488 00:13:56.448 } 00:13:56.448 ] 00:13:56.448 }' 00:13:56.448 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.448 21:50:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.706 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:56.706 21:50:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:56.706 [2024-07-15 21:50:11.875951] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ede5d6a0ec0 00:13:57.639 21:50:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:57.897 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.898 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.156 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.156 "name": "raid_bdev1", 00:13:58.156 "uuid": "35ad96f0-42f4-11ef-9f7f-e9a656123a8b", 00:13:58.156 "strip_size_kb": 64, 00:13:58.156 "state": "online", 00:13:58.156 "raid_level": "raid0", 00:13:58.156 "superblock": true, 00:13:58.156 "num_base_bdevs": 4, 00:13:58.156 "num_base_bdevs_discovered": 4, 00:13:58.156 "num_base_bdevs_operational": 4, 00:13:58.156 "base_bdevs_list": [ 00:13:58.156 { 00:13:58.156 "name": "BaseBdev1", 00:13:58.156 "uuid": 
"9796551c-c8ee-0e5e-9cfa-05fe38cac605", 00:13:58.156 "is_configured": true, 00:13:58.156 "data_offset": 2048, 00:13:58.156 "data_size": 63488 00:13:58.156 }, 00:13:58.156 { 00:13:58.156 "name": "BaseBdev2", 00:13:58.156 "uuid": "eaea31f2-e2c7-4d5b-8f93-7a214ee32530", 00:13:58.156 "is_configured": true, 00:13:58.156 "data_offset": 2048, 00:13:58.156 "data_size": 63488 00:13:58.156 }, 00:13:58.156 { 00:13:58.156 "name": "BaseBdev3", 00:13:58.156 "uuid": "21e9b638-9eee-4451-a028-b1e9bddea2f8", 00:13:58.156 "is_configured": true, 00:13:58.156 "data_offset": 2048, 00:13:58.156 "data_size": 63488 00:13:58.156 }, 00:13:58.156 { 00:13:58.156 "name": "BaseBdev4", 00:13:58.156 "uuid": "8d1b23a3-c3fc-f15c-8e5e-b65da63bd300", 00:13:58.156 "is_configured": true, 00:13:58.156 "data_offset": 2048, 00:13:58.156 "data_size": 63488 00:13:58.156 } 00:13:58.156 ] 00:13:58.156 }' 00:13:58.156 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.156 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.721 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:58.980 [2024-07-15 21:50:13.938498] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.980 [2024-07-15 21:50:13.938542] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.980 [2024-07-15 21:50:13.938932] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.980 [2024-07-15 21:50:13.938943] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.980 [2024-07-15 21:50:13.938951] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.980 [2024-07-15 21:50:13.938975] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ede5d635900 name raid_bdev1, state offline 00:13:58.980 0 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60515 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 60515 ']' 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 60515 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 60515 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:13:58.980 killing process with pid 60515 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 60515' 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 60515 00:13:58.980 [2024-07-15 21:50:13.973000] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.980 21:50:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 60515 00:13:58.980 [2024-07-15 
21:50:13.997762] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Jj2XfHOyK2 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:13:59.240 00:13:59.240 real 0m7.142s 00:13:59.240 user 0m10.977s 00:13:59.240 sys 0m1.612s 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:59.240 ************************************ 00:13:59.240 END TEST raid_write_error_test 00:13:59.240 ************************************ 00:13:59.240 21:50:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.240 21:50:14 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:13:59.240 21:50:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:59.240 21:50:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:59.240 21:50:14 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:13:59.240 21:50:14 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:59.240 21:50:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.240 ************************************ 00:13:59.240 START TEST raid_state_function_test 00:13:59.240 ************************************ 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test concat 4 false 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev3 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60651 00:13:59.240 Process raid pid: 60651 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60651' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60651 /var/tmp/spdk-raid.sock 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 60651 ']' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:59.240 21:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.240 [2024-07-15 21:50:14.245583] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
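raid_state_function_test takes a different angle from the two error tests: instead of injecting I/O failures it exercises the raid bdev's configuration state machine. The concat array is registered before any of its base bdevs exist, so it sits in the "configuring" state, and the test re-checks the reported state and discovered-bdev counts as base bdevs are created and the array is torn down and rebuilt. A condensed sketch of one verification round, using the same RPC and jq invocation the test runs (the expected values here match the JSON dump printed after BaseBdev1 is created below):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # declare the array before its base bdevs exist -> state "configuring"
  $RPC bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # create the first base bdev; the raid claims it immediately
  $RPC bdev_malloc_create 32 512 -b BaseBdev1

  # pull this array's entry out of the full raid listing
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

  # with 1 of 4 base bdevs present the test expects:
  #   "state": "configuring", "num_base_bdevs_discovered": 1
  echo "$info" | jq -r '.state, .num_base_bdevs_discovered'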
00:13:59.240 [2024-07-15 21:50:14.245854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:59.807 EAL: TSC is not safe to use in SMP mode 00:13:59.808 EAL: TSC is not invariant 00:13:59.808 [2024-07-15 21:50:14.845004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.808 [2024-07-15 21:50:14.927413] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:59.808 [2024-07-15 21:50:14.929792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.808 [2024-07-15 21:50:14.930729] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.808 [2024-07-15 21:50:14.930743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:00.375 [2024-07-15 21:50:15.530481] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.375 [2024-07-15 21:50:15.530540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.375 [2024-07-15 21:50:15.530544] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.375 [2024-07-15 21:50:15.530568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.375 [2024-07-15 21:50:15.530586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.375 [2024-07-15 21:50:15.530593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.375 [2024-07-15 21:50:15.530620] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:00.375 [2024-07-15 21:50:15.530627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.375 21:50:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.375 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.940 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.940 "name": "Existed_Raid", 00:14:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.940 "strip_size_kb": 64, 00:14:00.940 "state": "configuring", 00:14:00.940 "raid_level": "concat", 00:14:00.940 "superblock": false, 00:14:00.940 "num_base_bdevs": 4, 00:14:00.940 "num_base_bdevs_discovered": 0, 00:14:00.940 "num_base_bdevs_operational": 4, 00:14:00.940 "base_bdevs_list": [ 00:14:00.940 { 00:14:00.940 "name": "BaseBdev1", 00:14:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.940 "is_configured": false, 00:14:00.940 "data_offset": 0, 00:14:00.940 "data_size": 0 00:14:00.940 }, 00:14:00.940 { 00:14:00.940 "name": "BaseBdev2", 00:14:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.940 "is_configured": false, 00:14:00.940 "data_offset": 0, 00:14:00.940 "data_size": 0 00:14:00.940 }, 00:14:00.940 { 00:14:00.940 "name": "BaseBdev3", 00:14:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.940 "is_configured": false, 00:14:00.940 "data_offset": 0, 00:14:00.940 "data_size": 0 00:14:00.940 }, 00:14:00.940 { 00:14:00.940 "name": "BaseBdev4", 00:14:00.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.940 "is_configured": false, 00:14:00.940 "data_offset": 0, 00:14:00.940 "data_size": 0 00:14:00.940 } 00:14:00.940 ] 00:14:00.940 }' 00:14:00.940 21:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.940 21:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.198 21:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:01.198 [2024-07-15 21:50:16.366604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.198 [2024-07-15 21:50:16.366630] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30b72fe34500 name Existed_Raid, state configuring 00:14:01.198 21:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:01.456 [2024-07-15 21:50:16.594619] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.456 [2024-07-15 21:50:16.594677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.456 [2024-07-15 21:50:16.594681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.456 [2024-07-15 21:50:16.594704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.456 [2024-07-15 21:50:16.594707] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.457 [2024-07-15 21:50:16.594713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.457 [2024-07-15 21:50:16.594716] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:01.457 [2024-07-15 21:50:16.594722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.457 21:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.715 [2024-07-15 21:50:16.827632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.716 BaseBdev1 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:01.716 21:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.974 21:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:02.233 [ 00:14:02.233 { 00:14:02.233 "name": "BaseBdev1", 00:14:02.233 "aliases": [ 00:14:02.233 "390b3feb-42f4-11ef-9f7f-e9a656123a8b" 00:14:02.233 ], 00:14:02.233 "product_name": "Malloc disk", 00:14:02.233 "block_size": 512, 00:14:02.233 "num_blocks": 65536, 00:14:02.233 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:02.233 "assigned_rate_limits": { 00:14:02.233 "rw_ios_per_sec": 0, 00:14:02.233 "rw_mbytes_per_sec": 0, 00:14:02.233 "r_mbytes_per_sec": 0, 00:14:02.233 "w_mbytes_per_sec": 0 00:14:02.233 }, 00:14:02.233 "claimed": true, 00:14:02.233 "claim_type": "exclusive_write", 00:14:02.233 "zoned": false, 00:14:02.233 "supported_io_types": { 00:14:02.233 "read": true, 00:14:02.233 "write": true, 00:14:02.233 "unmap": true, 00:14:02.233 "flush": true, 00:14:02.233 "reset": true, 00:14:02.233 "nvme_admin": false, 00:14:02.233 "nvme_io": false, 00:14:02.233 "nvme_io_md": false, 00:14:02.233 "write_zeroes": true, 00:14:02.233 "zcopy": true, 00:14:02.233 "get_zone_info": false, 00:14:02.233 "zone_management": false, 00:14:02.233 "zone_append": false, 00:14:02.233 "compare": false, 00:14:02.233 "compare_and_write": false, 00:14:02.233 "abort": true, 00:14:02.233 "seek_hole": false, 00:14:02.233 "seek_data": false, 00:14:02.233 "copy": true, 00:14:02.233 "nvme_iov_md": false 00:14:02.233 }, 00:14:02.233 "memory_domains": [ 00:14:02.233 { 00:14:02.233 "dma_device_id": "system", 00:14:02.233 "dma_device_type": 1 00:14:02.233 }, 00:14:02.233 { 00:14:02.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.233 "dma_device_type": 2 00:14:02.233 } 00:14:02.233 ], 00:14:02.233 "driver_specific": {} 00:14:02.233 } 00:14:02.233 ] 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.233 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.492 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.492 "name": "Existed_Raid", 00:14:02.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.492 "strip_size_kb": 64, 00:14:02.492 "state": "configuring", 00:14:02.492 "raid_level": "concat", 00:14:02.492 "superblock": false, 00:14:02.492 "num_base_bdevs": 4, 00:14:02.492 "num_base_bdevs_discovered": 1, 00:14:02.492 "num_base_bdevs_operational": 4, 00:14:02.492 "base_bdevs_list": [ 00:14:02.492 { 00:14:02.492 "name": "BaseBdev1", 00:14:02.492 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:02.492 "is_configured": true, 00:14:02.492 "data_offset": 0, 00:14:02.492 "data_size": 65536 00:14:02.492 }, 00:14:02.492 { 00:14:02.492 "name": "BaseBdev2", 00:14:02.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.492 "is_configured": false, 00:14:02.492 "data_offset": 0, 00:14:02.492 "data_size": 0 00:14:02.492 }, 00:14:02.492 { 00:14:02.492 "name": "BaseBdev3", 00:14:02.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.492 "is_configured": false, 00:14:02.492 "data_offset": 0, 00:14:02.492 "data_size": 0 00:14:02.492 }, 00:14:02.492 { 00:14:02.492 "name": "BaseBdev4", 00:14:02.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.492 "is_configured": false, 00:14:02.492 "data_offset": 0, 00:14:02.492 "data_size": 0 00:14:02.492 } 00:14:02.492 ] 00:14:02.492 }' 00:14:02.492 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.492 21:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.751 21:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:03.010 [2024-07-15 21:50:18.122759] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.010 [2024-07-15 21:50:18.122804] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30b72fe34500 name Existed_Raid, state configuring 00:14:03.010 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:14:03.269 [2024-07-15 21:50:18.338783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.269 [2024-07-15 21:50:18.339769] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.269 [2024-07-15 21:50:18.339822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.269 [2024-07-15 21:50:18.339841] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.269 [2024-07-15 21:50:18.339849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.269 [2024-07-15 21:50:18.339852] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:03.269 [2024-07-15 21:50:18.339859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.269 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.527 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.527 "name": "Existed_Raid", 00:14:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.527 "strip_size_kb": 64, 00:14:03.527 "state": "configuring", 00:14:03.527 "raid_level": "concat", 00:14:03.527 "superblock": false, 00:14:03.527 "num_base_bdevs": 4, 00:14:03.527 "num_base_bdevs_discovered": 1, 00:14:03.527 "num_base_bdevs_operational": 4, 00:14:03.527 "base_bdevs_list": [ 00:14:03.527 { 00:14:03.527 "name": "BaseBdev1", 00:14:03.527 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:03.527 "is_configured": true, 00:14:03.527 "data_offset": 0, 00:14:03.527 "data_size": 65536 00:14:03.527 }, 00:14:03.527 { 00:14:03.527 "name": "BaseBdev2", 00:14:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.527 "is_configured": false, 00:14:03.527 "data_offset": 0, 00:14:03.527 
"data_size": 0 00:14:03.527 }, 00:14:03.527 { 00:14:03.527 "name": "BaseBdev3", 00:14:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.527 "is_configured": false, 00:14:03.527 "data_offset": 0, 00:14:03.527 "data_size": 0 00:14:03.527 }, 00:14:03.527 { 00:14:03.527 "name": "BaseBdev4", 00:14:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.527 "is_configured": false, 00:14:03.527 "data_offset": 0, 00:14:03.527 "data_size": 0 00:14:03.527 } 00:14:03.527 ] 00:14:03.527 }' 00:14:03.527 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.527 21:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.786 21:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.051 [2024-07-15 21:50:19.146954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.051 BaseBdev2 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:04.051 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:04.321 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.580 [ 00:14:04.580 { 00:14:04.580 "name": "BaseBdev2", 00:14:04.580 "aliases": [ 00:14:04.580 "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b" 00:14:04.580 ], 00:14:04.580 "product_name": "Malloc disk", 00:14:04.580 "block_size": 512, 00:14:04.580 "num_blocks": 65536, 00:14:04.580 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:04.580 "assigned_rate_limits": { 00:14:04.580 "rw_ios_per_sec": 0, 00:14:04.580 "rw_mbytes_per_sec": 0, 00:14:04.581 "r_mbytes_per_sec": 0, 00:14:04.581 "w_mbytes_per_sec": 0 00:14:04.581 }, 00:14:04.581 "claimed": true, 00:14:04.581 "claim_type": "exclusive_write", 00:14:04.581 "zoned": false, 00:14:04.581 "supported_io_types": { 00:14:04.581 "read": true, 00:14:04.581 "write": true, 00:14:04.581 "unmap": true, 00:14:04.581 "flush": true, 00:14:04.581 "reset": true, 00:14:04.581 "nvme_admin": false, 00:14:04.581 "nvme_io": false, 00:14:04.581 "nvme_io_md": false, 00:14:04.581 "write_zeroes": true, 00:14:04.581 "zcopy": true, 00:14:04.581 "get_zone_info": false, 00:14:04.581 "zone_management": false, 00:14:04.581 "zone_append": false, 00:14:04.581 "compare": false, 00:14:04.581 "compare_and_write": false, 00:14:04.581 "abort": true, 00:14:04.581 "seek_hole": false, 00:14:04.581 "seek_data": false, 00:14:04.581 "copy": true, 00:14:04.581 "nvme_iov_md": false 00:14:04.581 }, 00:14:04.581 "memory_domains": [ 00:14:04.581 { 00:14:04.581 "dma_device_id": "system", 00:14:04.581 "dma_device_type": 
1 00:14:04.581 }, 00:14:04.581 { 00:14:04.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.581 "dma_device_type": 2 00:14:04.581 } 00:14:04.581 ], 00:14:04.581 "driver_specific": {} 00:14:04.581 } 00:14:04.581 ] 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.581 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.840 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.840 "name": "Existed_Raid", 00:14:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.840 "strip_size_kb": 64, 00:14:04.840 "state": "configuring", 00:14:04.840 "raid_level": "concat", 00:14:04.840 "superblock": false, 00:14:04.840 "num_base_bdevs": 4, 00:14:04.840 "num_base_bdevs_discovered": 2, 00:14:04.840 "num_base_bdevs_operational": 4, 00:14:04.840 "base_bdevs_list": [ 00:14:04.840 { 00:14:04.840 "name": "BaseBdev1", 00:14:04.840 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:04.840 "is_configured": true, 00:14:04.840 "data_offset": 0, 00:14:04.840 "data_size": 65536 00:14:04.840 }, 00:14:04.840 { 00:14:04.840 "name": "BaseBdev2", 00:14:04.840 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:04.840 "is_configured": true, 00:14:04.840 "data_offset": 0, 00:14:04.840 "data_size": 65536 00:14:04.840 }, 00:14:04.840 { 00:14:04.840 "name": "BaseBdev3", 00:14:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.840 "is_configured": false, 00:14:04.840 "data_offset": 0, 00:14:04.840 "data_size": 0 00:14:04.840 }, 00:14:04.840 { 00:14:04.840 "name": "BaseBdev4", 00:14:04.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.840 "is_configured": false, 00:14:04.840 "data_offset": 0, 00:14:04.840 "data_size": 0 00:14:04.840 } 00:14:04.840 ] 00:14:04.840 }' 00:14:04.840 21:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.840 
21:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.408 [2024-07-15 21:50:20.547153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.408 BaseBdev3 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:05.408 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.667 21:50:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.926 [ 00:14:05.926 { 00:14:05.926 "name": "BaseBdev3", 00:14:05.926 "aliases": [ 00:14:05.926 "3b42eda2-42f4-11ef-9f7f-e9a656123a8b" 00:14:05.926 ], 00:14:05.927 "product_name": "Malloc disk", 00:14:05.927 "block_size": 512, 00:14:05.927 "num_blocks": 65536, 00:14:05.927 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:05.927 "assigned_rate_limits": { 00:14:05.927 "rw_ios_per_sec": 0, 00:14:05.927 "rw_mbytes_per_sec": 0, 00:14:05.927 "r_mbytes_per_sec": 0, 00:14:05.927 "w_mbytes_per_sec": 0 00:14:05.927 }, 00:14:05.927 "claimed": true, 00:14:05.927 "claim_type": "exclusive_write", 00:14:05.927 "zoned": false, 00:14:05.927 "supported_io_types": { 00:14:05.927 "read": true, 00:14:05.927 "write": true, 00:14:05.927 "unmap": true, 00:14:05.927 "flush": true, 00:14:05.927 "reset": true, 00:14:05.927 "nvme_admin": false, 00:14:05.927 "nvme_io": false, 00:14:05.927 "nvme_io_md": false, 00:14:05.927 "write_zeroes": true, 00:14:05.927 "zcopy": true, 00:14:05.927 "get_zone_info": false, 00:14:05.927 "zone_management": false, 00:14:05.927 "zone_append": false, 00:14:05.927 "compare": false, 00:14:05.927 "compare_and_write": false, 00:14:05.927 "abort": true, 00:14:05.927 "seek_hole": false, 00:14:05.927 "seek_data": false, 00:14:05.927 "copy": true, 00:14:05.927 "nvme_iov_md": false 00:14:05.927 }, 00:14:05.927 "memory_domains": [ 00:14:05.927 { 00:14:05.927 "dma_device_id": "system", 00:14:05.927 "dma_device_type": 1 00:14:05.927 }, 00:14:05.927 { 00:14:05.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.927 "dma_device_type": 2 00:14:05.927 } 00:14:05.927 ], 00:14:05.927 "driver_specific": {} 00:14:05.927 } 00:14:05.927 ] 00:14:05.927 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:05.927 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:05.927 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.186 "name": "Existed_Raid", 00:14:06.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.186 "strip_size_kb": 64, 00:14:06.186 "state": "configuring", 00:14:06.186 "raid_level": "concat", 00:14:06.186 "superblock": false, 00:14:06.186 "num_base_bdevs": 4, 00:14:06.186 "num_base_bdevs_discovered": 3, 00:14:06.186 "num_base_bdevs_operational": 4, 00:14:06.186 "base_bdevs_list": [ 00:14:06.186 { 00:14:06.186 "name": "BaseBdev1", 00:14:06.186 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:06.186 "is_configured": true, 00:14:06.186 "data_offset": 0, 00:14:06.186 "data_size": 65536 00:14:06.186 }, 00:14:06.186 { 00:14:06.186 "name": "BaseBdev2", 00:14:06.186 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:06.186 "is_configured": true, 00:14:06.186 "data_offset": 0, 00:14:06.186 "data_size": 65536 00:14:06.186 }, 00:14:06.186 { 00:14:06.186 "name": "BaseBdev3", 00:14:06.186 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:06.186 "is_configured": true, 00:14:06.186 "data_offset": 0, 00:14:06.186 "data_size": 65536 00:14:06.186 }, 00:14:06.186 { 00:14:06.186 "name": "BaseBdev4", 00:14:06.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.186 "is_configured": false, 00:14:06.186 "data_offset": 0, 00:14:06.186 "data_size": 0 00:14:06.186 } 00:14:06.186 ] 00:14:06.186 }' 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.186 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:06.754 [2024-07-15 21:50:21.919261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:06.754 [2024-07-15 21:50:21.919286] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30b72fe34a00 00:14:06.754 [2024-07-15 21:50:21.919290] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:06.754 [2024-07-15 21:50:21.919334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30b72fe97e20 00:14:06.754 [2024-07-15 21:50:21.919414] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30b72fe34a00 00:14:06.754 [2024-07-15 21:50:21.919418] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x30b72fe34a00 00:14:06.754 [2024-07-15 21:50:21.919460] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.754 BaseBdev4 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:06.754 21:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.013 21:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:07.273 [ 00:14:07.273 { 00:14:07.273 "name": "BaseBdev4", 00:14:07.273 "aliases": [ 00:14:07.273 "3c144c92-42f4-11ef-9f7f-e9a656123a8b" 00:14:07.273 ], 00:14:07.273 "product_name": "Malloc disk", 00:14:07.273 "block_size": 512, 00:14:07.273 "num_blocks": 65536, 00:14:07.273 "uuid": "3c144c92-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.273 "assigned_rate_limits": { 00:14:07.273 "rw_ios_per_sec": 0, 00:14:07.273 "rw_mbytes_per_sec": 0, 00:14:07.273 "r_mbytes_per_sec": 0, 00:14:07.273 "w_mbytes_per_sec": 0 00:14:07.273 }, 00:14:07.273 "claimed": true, 00:14:07.273 "claim_type": "exclusive_write", 00:14:07.273 "zoned": false, 00:14:07.273 "supported_io_types": { 00:14:07.273 "read": true, 00:14:07.273 "write": true, 00:14:07.273 "unmap": true, 00:14:07.273 "flush": true, 00:14:07.273 "reset": true, 00:14:07.273 "nvme_admin": false, 00:14:07.273 "nvme_io": false, 00:14:07.273 "nvme_io_md": false, 00:14:07.273 "write_zeroes": true, 00:14:07.273 "zcopy": true, 00:14:07.273 "get_zone_info": false, 00:14:07.273 "zone_management": false, 00:14:07.273 "zone_append": false, 00:14:07.273 "compare": false, 00:14:07.273 "compare_and_write": false, 00:14:07.273 "abort": true, 00:14:07.273 "seek_hole": false, 00:14:07.273 "seek_data": false, 00:14:07.273 "copy": true, 00:14:07.273 "nvme_iov_md": false 00:14:07.273 }, 00:14:07.273 "memory_domains": [ 00:14:07.273 { 00:14:07.273 "dma_device_id": "system", 00:14:07.273 "dma_device_type": 1 00:14:07.273 }, 00:14:07.273 { 00:14:07.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.273 "dma_device_type": 2 00:14:07.273 } 00:14:07.273 ], 00:14:07.273 "driver_specific": {} 00:14:07.273 } 00:14:07.273 ] 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.273 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.533 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.533 "name": "Existed_Raid", 00:14:07.533 "uuid": "3c145343-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.533 "strip_size_kb": 64, 00:14:07.533 "state": "online", 00:14:07.533 "raid_level": "concat", 00:14:07.533 "superblock": false, 00:14:07.533 "num_base_bdevs": 4, 00:14:07.533 "num_base_bdevs_discovered": 4, 00:14:07.533 "num_base_bdevs_operational": 4, 00:14:07.533 "base_bdevs_list": [ 00:14:07.533 { 00:14:07.533 "name": "BaseBdev1", 00:14:07.533 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.533 "is_configured": true, 00:14:07.533 "data_offset": 0, 00:14:07.533 "data_size": 65536 00:14:07.533 }, 00:14:07.533 { 00:14:07.533 "name": "BaseBdev2", 00:14:07.533 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.533 "is_configured": true, 00:14:07.533 "data_offset": 0, 00:14:07.533 "data_size": 65536 00:14:07.533 }, 00:14:07.533 { 00:14:07.533 "name": "BaseBdev3", 00:14:07.533 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.533 "is_configured": true, 00:14:07.533 "data_offset": 0, 00:14:07.533 "data_size": 65536 00:14:07.533 }, 00:14:07.533 { 00:14:07.533 "name": "BaseBdev4", 00:14:07.533 "uuid": "3c144c92-42f4-11ef-9f7f-e9a656123a8b", 00:14:07.533 "is_configured": true, 00:14:07.533 "data_offset": 0, 00:14:07.533 "data_size": 65536 00:14:07.533 } 00:14:07.533 ] 00:14:07.533 }' 00:14:07.533 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.533 21:50:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:08.101 21:50:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:08.101 [2024-07-15 21:50:23.203263] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.101 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:08.101 "name": "Existed_Raid", 00:14:08.101 "aliases": [ 00:14:08.101 "3c145343-42f4-11ef-9f7f-e9a656123a8b" 00:14:08.101 ], 00:14:08.101 "product_name": "Raid Volume", 00:14:08.101 "block_size": 512, 00:14:08.101 "num_blocks": 262144, 00:14:08.101 "uuid": "3c145343-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.101 "assigned_rate_limits": { 00:14:08.101 "rw_ios_per_sec": 0, 00:14:08.101 "rw_mbytes_per_sec": 0, 00:14:08.101 "r_mbytes_per_sec": 0, 00:14:08.101 "w_mbytes_per_sec": 0 00:14:08.101 }, 00:14:08.101 "claimed": false, 00:14:08.101 "zoned": false, 00:14:08.102 "supported_io_types": { 00:14:08.102 "read": true, 00:14:08.102 "write": true, 00:14:08.102 "unmap": true, 00:14:08.102 "flush": true, 00:14:08.102 "reset": true, 00:14:08.102 "nvme_admin": false, 00:14:08.102 "nvme_io": false, 00:14:08.102 "nvme_io_md": false, 00:14:08.102 "write_zeroes": true, 00:14:08.102 "zcopy": false, 00:14:08.102 "get_zone_info": false, 00:14:08.102 "zone_management": false, 00:14:08.102 "zone_append": false, 00:14:08.102 "compare": false, 00:14:08.102 "compare_and_write": false, 00:14:08.102 "abort": false, 00:14:08.102 "seek_hole": false, 00:14:08.102 "seek_data": false, 00:14:08.102 "copy": false, 00:14:08.102 "nvme_iov_md": false 00:14:08.102 }, 00:14:08.102 "memory_domains": [ 00:14:08.102 { 00:14:08.102 "dma_device_id": "system", 00:14:08.102 "dma_device_type": 1 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.102 "dma_device_type": 2 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "system", 00:14:08.102 "dma_device_type": 1 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.102 "dma_device_type": 2 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "system", 00:14:08.102 "dma_device_type": 1 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.102 "dma_device_type": 2 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "system", 00:14:08.102 "dma_device_type": 1 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.102 "dma_device_type": 2 00:14:08.102 } 00:14:08.102 ], 00:14:08.102 "driver_specific": { 00:14:08.102 "raid": { 00:14:08.102 "uuid": "3c145343-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.102 "strip_size_kb": 64, 00:14:08.102 "state": "online", 00:14:08.102 "raid_level": "concat", 00:14:08.102 "superblock": false, 00:14:08.102 "num_base_bdevs": 4, 00:14:08.102 "num_base_bdevs_discovered": 4, 00:14:08.102 "num_base_bdevs_operational": 4, 00:14:08.102 "base_bdevs_list": [ 00:14:08.102 { 00:14:08.102 "name": "BaseBdev1", 00:14:08.102 "uuid": 
"390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.102 "is_configured": true, 00:14:08.102 "data_offset": 0, 00:14:08.102 "data_size": 65536 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "name": "BaseBdev2", 00:14:08.102 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.102 "is_configured": true, 00:14:08.102 "data_offset": 0, 00:14:08.102 "data_size": 65536 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "name": "BaseBdev3", 00:14:08.102 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.102 "is_configured": true, 00:14:08.102 "data_offset": 0, 00:14:08.102 "data_size": 65536 00:14:08.102 }, 00:14:08.102 { 00:14:08.102 "name": "BaseBdev4", 00:14:08.102 "uuid": "3c144c92-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.102 "is_configured": true, 00:14:08.102 "data_offset": 0, 00:14:08.102 "data_size": 65536 00:14:08.102 } 00:14:08.102 ] 00:14:08.102 } 00:14:08.102 } 00:14:08.102 }' 00:14:08.102 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.102 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:08.102 BaseBdev2 00:14:08.102 BaseBdev3 00:14:08.102 BaseBdev4' 00:14:08.102 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.102 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:08.102 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.360 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.360 "name": "BaseBdev1", 00:14:08.360 "aliases": [ 00:14:08.360 "390b3feb-42f4-11ef-9f7f-e9a656123a8b" 00:14:08.360 ], 00:14:08.360 "product_name": "Malloc disk", 00:14:08.360 "block_size": 512, 00:14:08.360 "num_blocks": 65536, 00:14:08.360 "uuid": "390b3feb-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.360 "assigned_rate_limits": { 00:14:08.360 "rw_ios_per_sec": 0, 00:14:08.360 "rw_mbytes_per_sec": 0, 00:14:08.360 "r_mbytes_per_sec": 0, 00:14:08.360 "w_mbytes_per_sec": 0 00:14:08.360 }, 00:14:08.360 "claimed": true, 00:14:08.360 "claim_type": "exclusive_write", 00:14:08.360 "zoned": false, 00:14:08.360 "supported_io_types": { 00:14:08.360 "read": true, 00:14:08.360 "write": true, 00:14:08.360 "unmap": true, 00:14:08.360 "flush": true, 00:14:08.360 "reset": true, 00:14:08.360 "nvme_admin": false, 00:14:08.360 "nvme_io": false, 00:14:08.360 "nvme_io_md": false, 00:14:08.360 "write_zeroes": true, 00:14:08.360 "zcopy": true, 00:14:08.360 "get_zone_info": false, 00:14:08.360 "zone_management": false, 00:14:08.360 "zone_append": false, 00:14:08.360 "compare": false, 00:14:08.360 "compare_and_write": false, 00:14:08.360 "abort": true, 00:14:08.360 "seek_hole": false, 00:14:08.360 "seek_data": false, 00:14:08.360 "copy": true, 00:14:08.360 "nvme_iov_md": false 00:14:08.360 }, 00:14:08.360 "memory_domains": [ 00:14:08.360 { 00:14:08.360 "dma_device_id": "system", 00:14:08.360 "dma_device_type": 1 00:14:08.360 }, 00:14:08.360 { 00:14:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.360 "dma_device_type": 2 00:14:08.360 } 00:14:08.360 ], 00:14:08.360 "driver_specific": {} 00:14:08.360 }' 00:14:08.360 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.360 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:14:08.360 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.360 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.361 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.361 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:08.619 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:08.877 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:08.878 "name": "BaseBdev2", 00:14:08.878 "aliases": [ 00:14:08.878 "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b" 00:14:08.878 ], 00:14:08.878 "product_name": "Malloc disk", 00:14:08.878 "block_size": 512, 00:14:08.878 "num_blocks": 65536, 00:14:08.878 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:08.878 "assigned_rate_limits": { 00:14:08.878 "rw_ios_per_sec": 0, 00:14:08.878 "rw_mbytes_per_sec": 0, 00:14:08.878 "r_mbytes_per_sec": 0, 00:14:08.878 "w_mbytes_per_sec": 0 00:14:08.878 }, 00:14:08.878 "claimed": true, 00:14:08.878 "claim_type": "exclusive_write", 00:14:08.878 "zoned": false, 00:14:08.878 "supported_io_types": { 00:14:08.878 "read": true, 00:14:08.878 "write": true, 00:14:08.878 "unmap": true, 00:14:08.878 "flush": true, 00:14:08.878 "reset": true, 00:14:08.878 "nvme_admin": false, 00:14:08.878 "nvme_io": false, 00:14:08.878 "nvme_io_md": false, 00:14:08.878 "write_zeroes": true, 00:14:08.878 "zcopy": true, 00:14:08.878 "get_zone_info": false, 00:14:08.878 "zone_management": false, 00:14:08.878 "zone_append": false, 00:14:08.878 "compare": false, 00:14:08.878 "compare_and_write": false, 00:14:08.878 "abort": true, 00:14:08.878 "seek_hole": false, 00:14:08.878 "seek_data": false, 00:14:08.878 "copy": true, 00:14:08.878 "nvme_iov_md": false 00:14:08.878 }, 00:14:08.878 "memory_domains": [ 00:14:08.878 { 00:14:08.878 "dma_device_id": "system", 00:14:08.878 "dma_device_type": 1 00:14:08.878 }, 00:14:08.878 { 00:14:08.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.878 "dma_device_type": 2 00:14:08.878 } 00:14:08.878 ], 00:14:08.878 "driver_specific": {} 00:14:08.878 }' 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:08.878 21:50:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.137 "name": "BaseBdev3", 00:14:09.137 "aliases": [ 00:14:09.137 "3b42eda2-42f4-11ef-9f7f-e9a656123a8b" 00:14:09.137 ], 00:14:09.137 "product_name": "Malloc disk", 00:14:09.137 "block_size": 512, 00:14:09.137 "num_blocks": 65536, 00:14:09.137 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.137 "assigned_rate_limits": { 00:14:09.137 "rw_ios_per_sec": 0, 00:14:09.137 "rw_mbytes_per_sec": 0, 00:14:09.137 "r_mbytes_per_sec": 0, 00:14:09.137 "w_mbytes_per_sec": 0 00:14:09.137 }, 00:14:09.137 "claimed": true, 00:14:09.137 "claim_type": "exclusive_write", 00:14:09.137 "zoned": false, 00:14:09.137 "supported_io_types": { 00:14:09.137 "read": true, 00:14:09.137 "write": true, 00:14:09.137 "unmap": true, 00:14:09.137 "flush": true, 00:14:09.137 "reset": true, 00:14:09.137 "nvme_admin": false, 00:14:09.137 "nvme_io": false, 00:14:09.137 "nvme_io_md": false, 00:14:09.137 "write_zeroes": true, 00:14:09.137 "zcopy": true, 00:14:09.137 "get_zone_info": false, 00:14:09.137 "zone_management": false, 00:14:09.137 "zone_append": false, 00:14:09.137 "compare": false, 00:14:09.137 "compare_and_write": false, 00:14:09.137 "abort": true, 00:14:09.137 "seek_hole": false, 00:14:09.137 "seek_data": false, 00:14:09.137 "copy": true, 00:14:09.137 "nvme_iov_md": false 00:14:09.137 }, 00:14:09.137 "memory_domains": [ 00:14:09.137 { 00:14:09.137 "dma_device_id": "system", 00:14:09.137 "dma_device_type": 1 00:14:09.137 }, 00:14:09.137 { 00:14:09.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.137 "dma_device_type": 2 00:14:09.137 } 00:14:09.137 ], 00:14:09.137 "driver_specific": {} 00:14:09.137 }' 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:09.137 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:09.396 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:09.396 "name": "BaseBdev4", 00:14:09.396 "aliases": [ 00:14:09.396 "3c144c92-42f4-11ef-9f7f-e9a656123a8b" 00:14:09.396 ], 00:14:09.396 "product_name": "Malloc disk", 00:14:09.396 "block_size": 512, 00:14:09.396 "num_blocks": 65536, 00:14:09.396 "uuid": "3c144c92-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.396 "assigned_rate_limits": { 00:14:09.396 "rw_ios_per_sec": 0, 00:14:09.396 "rw_mbytes_per_sec": 0, 00:14:09.396 "r_mbytes_per_sec": 0, 00:14:09.396 "w_mbytes_per_sec": 0 00:14:09.396 }, 00:14:09.396 "claimed": true, 00:14:09.396 "claim_type": "exclusive_write", 00:14:09.396 "zoned": false, 00:14:09.396 "supported_io_types": { 00:14:09.396 "read": true, 00:14:09.396 "write": true, 00:14:09.396 "unmap": true, 00:14:09.396 "flush": true, 00:14:09.396 "reset": true, 00:14:09.396 "nvme_admin": false, 00:14:09.396 "nvme_io": false, 00:14:09.396 "nvme_io_md": false, 00:14:09.396 "write_zeroes": true, 00:14:09.396 "zcopy": true, 00:14:09.396 "get_zone_info": false, 00:14:09.396 "zone_management": false, 00:14:09.396 "zone_append": false, 00:14:09.396 "compare": false, 00:14:09.396 "compare_and_write": false, 00:14:09.396 "abort": true, 00:14:09.396 "seek_hole": false, 00:14:09.396 "seek_data": false, 00:14:09.396 "copy": true, 00:14:09.396 "nvme_iov_md": false 00:14:09.396 }, 00:14:09.396 "memory_domains": [ 00:14:09.396 { 00:14:09.396 "dma_device_id": "system", 00:14:09.396 "dma_device_type": 1 00:14:09.396 }, 00:14:09.396 { 00:14:09.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.396 "dma_device_type": 2 00:14:09.396 } 00:14:09.396 ], 00:14:09.396 "driver_specific": {} 00:14:09.396 }' 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.397 21:50:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:09.397 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:09.656 [2024-07-15 21:50:24.783394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.656 [2024-07-15 21:50:24.783457] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.656 [2024-07-15 21:50:24.783500] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.656 21:50:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.914 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.914 "name": "Existed_Raid", 00:14:09.914 "uuid": "3c145343-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.914 "strip_size_kb": 64, 00:14:09.914 "state": "offline", 00:14:09.914 "raid_level": "concat", 00:14:09.914 "superblock": false, 00:14:09.914 "num_base_bdevs": 4, 00:14:09.914 "num_base_bdevs_discovered": 3, 00:14:09.914 "num_base_bdevs_operational": 3, 00:14:09.914 "base_bdevs_list": [ 00:14:09.914 { 00:14:09.914 
"name": null, 00:14:09.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.914 "is_configured": false, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 65536 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": "BaseBdev2", 00:14:09.914 "uuid": "3a6d46bf-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.914 "is_configured": true, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 65536 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": "BaseBdev3", 00:14:09.914 "uuid": "3b42eda2-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.914 "is_configured": true, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 65536 00:14:09.914 }, 00:14:09.914 { 00:14:09.914 "name": "BaseBdev4", 00:14:09.914 "uuid": "3c144c92-42f4-11ef-9f7f-e9a656123a8b", 00:14:09.914 "is_configured": true, 00:14:09.914 "data_offset": 0, 00:14:09.914 "data_size": 65536 00:14:09.914 } 00:14:09.914 ] 00:14:09.914 }' 00:14:09.914 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.914 21:50:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.481 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:10.740 [2024-07-15 21:50:25.845685] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.740 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.740 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.740 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.740 21:50:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:10.998 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:10.998 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:10.998 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:11.257 [2024-07-15 21:50:26.316299] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:11.257 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:11.257 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:11.257 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:11.257 21:50:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.518 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:11.518 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:11.518 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:11.783 [2024-07-15 21:50:26.818943] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:11.783 [2024-07-15 21:50:26.818993] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30b72fe34a00 name Existed_Raid, state offline 00:14:11.783 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:11.783 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:11.783 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.783 21:50:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.041 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.300 BaseBdev2 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:12.300 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.558 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.818 [ 00:14:12.818 { 00:14:12.818 "name": "BaseBdev2", 00:14:12.818 "aliases": [ 00:14:12.818 "3f441e34-42f4-11ef-9f7f-e9a656123a8b" 00:14:12.818 ], 00:14:12.818 "product_name": "Malloc disk", 00:14:12.818 "block_size": 512, 00:14:12.818 "num_blocks": 65536, 00:14:12.818 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:12.818 "assigned_rate_limits": { 00:14:12.818 "rw_ios_per_sec": 0, 00:14:12.818 "rw_mbytes_per_sec": 0, 
00:14:12.818 "r_mbytes_per_sec": 0, 00:14:12.818 "w_mbytes_per_sec": 0 00:14:12.818 }, 00:14:12.818 "claimed": false, 00:14:12.818 "zoned": false, 00:14:12.818 "supported_io_types": { 00:14:12.818 "read": true, 00:14:12.818 "write": true, 00:14:12.818 "unmap": true, 00:14:12.818 "flush": true, 00:14:12.818 "reset": true, 00:14:12.818 "nvme_admin": false, 00:14:12.818 "nvme_io": false, 00:14:12.818 "nvme_io_md": false, 00:14:12.818 "write_zeroes": true, 00:14:12.818 "zcopy": true, 00:14:12.818 "get_zone_info": false, 00:14:12.818 "zone_management": false, 00:14:12.818 "zone_append": false, 00:14:12.818 "compare": false, 00:14:12.818 "compare_and_write": false, 00:14:12.818 "abort": true, 00:14:12.818 "seek_hole": false, 00:14:12.818 "seek_data": false, 00:14:12.818 "copy": true, 00:14:12.818 "nvme_iov_md": false 00:14:12.818 }, 00:14:12.818 "memory_domains": [ 00:14:12.818 { 00:14:12.818 "dma_device_id": "system", 00:14:12.818 "dma_device_type": 1 00:14:12.818 }, 00:14:12.818 { 00:14:12.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.818 "dma_device_type": 2 00:14:12.818 } 00:14:12.818 ], 00:14:12.818 "driver_specific": {} 00:14:12.818 } 00:14:12.818 ] 00:14:12.818 21:50:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:12.818 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.818 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.818 21:50:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.076 BaseBdev3 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.076 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.335 [ 00:14:13.335 { 00:14:13.335 "name": "BaseBdev3", 00:14:13.335 "aliases": [ 00:14:13.335 "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b" 00:14:13.335 ], 00:14:13.335 "product_name": "Malloc disk", 00:14:13.335 "block_size": 512, 00:14:13.335 "num_blocks": 65536, 00:14:13.335 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:13.335 "assigned_rate_limits": { 00:14:13.335 "rw_ios_per_sec": 0, 00:14:13.335 "rw_mbytes_per_sec": 0, 00:14:13.335 "r_mbytes_per_sec": 0, 00:14:13.335 "w_mbytes_per_sec": 0 00:14:13.335 }, 00:14:13.335 "claimed": false, 00:14:13.335 "zoned": false, 00:14:13.335 "supported_io_types": { 00:14:13.335 "read": true, 00:14:13.335 "write": true, 00:14:13.335 "unmap": true, 00:14:13.335 "flush": true, 00:14:13.335 "reset": true, 00:14:13.335 
"nvme_admin": false, 00:14:13.335 "nvme_io": false, 00:14:13.335 "nvme_io_md": false, 00:14:13.335 "write_zeroes": true, 00:14:13.335 "zcopy": true, 00:14:13.335 "get_zone_info": false, 00:14:13.335 "zone_management": false, 00:14:13.335 "zone_append": false, 00:14:13.335 "compare": false, 00:14:13.335 "compare_and_write": false, 00:14:13.335 "abort": true, 00:14:13.335 "seek_hole": false, 00:14:13.335 "seek_data": false, 00:14:13.335 "copy": true, 00:14:13.335 "nvme_iov_md": false 00:14:13.335 }, 00:14:13.335 "memory_domains": [ 00:14:13.335 { 00:14:13.335 "dma_device_id": "system", 00:14:13.335 "dma_device_type": 1 00:14:13.335 }, 00:14:13.335 { 00:14:13.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.335 "dma_device_type": 2 00:14:13.335 } 00:14:13.335 ], 00:14:13.335 "driver_specific": {} 00:14:13.335 } 00:14:13.335 ] 00:14:13.335 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:13.335 21:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:13.335 21:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:13.335 21:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.594 BaseBdev4 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:13.594 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.852 21:50:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.112 [ 00:14:14.112 { 00:14:14.112 "name": "BaseBdev4", 00:14:14.112 "aliases": [ 00:14:14.112 "4019bfc0-42f4-11ef-9f7f-e9a656123a8b" 00:14:14.112 ], 00:14:14.112 "product_name": "Malloc disk", 00:14:14.112 "block_size": 512, 00:14:14.112 "num_blocks": 65536, 00:14:14.112 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:14.112 "assigned_rate_limits": { 00:14:14.112 "rw_ios_per_sec": 0, 00:14:14.112 "rw_mbytes_per_sec": 0, 00:14:14.112 "r_mbytes_per_sec": 0, 00:14:14.112 "w_mbytes_per_sec": 0 00:14:14.112 }, 00:14:14.112 "claimed": false, 00:14:14.112 "zoned": false, 00:14:14.112 "supported_io_types": { 00:14:14.112 "read": true, 00:14:14.112 "write": true, 00:14:14.112 "unmap": true, 00:14:14.112 "flush": true, 00:14:14.112 "reset": true, 00:14:14.112 "nvme_admin": false, 00:14:14.112 "nvme_io": false, 00:14:14.112 "nvme_io_md": false, 00:14:14.112 "write_zeroes": true, 00:14:14.112 "zcopy": true, 00:14:14.112 "get_zone_info": false, 00:14:14.112 "zone_management": false, 00:14:14.112 "zone_append": false, 00:14:14.112 "compare": false, 00:14:14.112 "compare_and_write": false, 00:14:14.112 
"abort": true, 00:14:14.112 "seek_hole": false, 00:14:14.112 "seek_data": false, 00:14:14.112 "copy": true, 00:14:14.112 "nvme_iov_md": false 00:14:14.112 }, 00:14:14.112 "memory_domains": [ 00:14:14.112 { 00:14:14.112 "dma_device_id": "system", 00:14:14.112 "dma_device_type": 1 00:14:14.112 }, 00:14:14.112 { 00:14:14.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.112 "dma_device_type": 2 00:14:14.112 } 00:14:14.112 ], 00:14:14.112 "driver_specific": {} 00:14:14.112 } 00:14:14.112 ] 00:14:14.112 21:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:14.112 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:14.112 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:14.112 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:14.371 [2024-07-15 21:50:29.433781] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.371 [2024-07-15 21:50:29.433839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.371 [2024-07-15 21:50:29.433848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.371 [2024-07-15 21:50:29.434664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.371 [2024-07-15 21:50:29.434683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.371 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.630 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.630 "name": "Existed_Raid", 00:14:14.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.630 "strip_size_kb": 64, 00:14:14.630 "state": "configuring", 00:14:14.630 "raid_level": "concat", 00:14:14.630 "superblock": false, 00:14:14.630 "num_base_bdevs": 4, 
00:14:14.630 "num_base_bdevs_discovered": 3, 00:14:14.630 "num_base_bdevs_operational": 4, 00:14:14.630 "base_bdevs_list": [ 00:14:14.630 { 00:14:14.630 "name": "BaseBdev1", 00:14:14.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.630 "is_configured": false, 00:14:14.630 "data_offset": 0, 00:14:14.630 "data_size": 0 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev2", 00:14:14.630 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 0, 00:14:14.630 "data_size": 65536 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev3", 00:14:14.630 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 0, 00:14:14.630 "data_size": 65536 00:14:14.630 }, 00:14:14.630 { 00:14:14.630 "name": "BaseBdev4", 00:14:14.630 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:14.630 "is_configured": true, 00:14:14.630 "data_offset": 0, 00:14:14.630 "data_size": 65536 00:14:14.630 } 00:14:14.630 ] 00:14:14.630 }' 00:14:14.630 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.630 21:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.889 21:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:15.147 [2024-07-15 21:50:30.225813] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.147 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.406 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.406 "name": "Existed_Raid", 00:14:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.406 "strip_size_kb": 64, 00:14:15.406 "state": "configuring", 00:14:15.406 "raid_level": "concat", 00:14:15.406 "superblock": false, 00:14:15.406 "num_base_bdevs": 4, 00:14:15.406 "num_base_bdevs_discovered": 2, 00:14:15.406 "num_base_bdevs_operational": 4, 00:14:15.406 "base_bdevs_list": [ 00:14:15.406 { 
00:14:15.406 "name": "BaseBdev1", 00:14:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.406 "is_configured": false, 00:14:15.406 "data_offset": 0, 00:14:15.406 "data_size": 0 00:14:15.406 }, 00:14:15.406 { 00:14:15.406 "name": null, 00:14:15.406 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:15.406 "is_configured": false, 00:14:15.406 "data_offset": 0, 00:14:15.406 "data_size": 65536 00:14:15.406 }, 00:14:15.406 { 00:14:15.406 "name": "BaseBdev3", 00:14:15.406 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:15.406 "is_configured": true, 00:14:15.406 "data_offset": 0, 00:14:15.406 "data_size": 65536 00:14:15.406 }, 00:14:15.406 { 00:14:15.406 "name": "BaseBdev4", 00:14:15.406 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:15.406 "is_configured": true, 00:14:15.406 "data_offset": 0, 00:14:15.406 "data_size": 65536 00:14:15.406 } 00:14:15.406 ] 00:14:15.406 }' 00:14:15.406 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.406 21:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.665 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.665 21:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.923 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:15.923 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.181 [2024-07-15 21:50:31.262043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.181 BaseBdev1 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:16.181 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.441 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.700 [ 00:14:16.700 { 00:14:16.700 "name": "BaseBdev1", 00:14:16.700 "aliases": [ 00:14:16.700 "41a5e19b-42f4-11ef-9f7f-e9a656123a8b" 00:14:16.700 ], 00:14:16.700 "product_name": "Malloc disk", 00:14:16.700 "block_size": 512, 00:14:16.700 "num_blocks": 65536, 00:14:16.700 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:16.700 "assigned_rate_limits": { 00:14:16.700 "rw_ios_per_sec": 0, 00:14:16.700 "rw_mbytes_per_sec": 0, 00:14:16.700 "r_mbytes_per_sec": 0, 00:14:16.700 "w_mbytes_per_sec": 0 00:14:16.700 }, 00:14:16.700 "claimed": true, 00:14:16.700 "claim_type": "exclusive_write", 00:14:16.700 
"zoned": false, 00:14:16.700 "supported_io_types": { 00:14:16.700 "read": true, 00:14:16.700 "write": true, 00:14:16.700 "unmap": true, 00:14:16.700 "flush": true, 00:14:16.700 "reset": true, 00:14:16.700 "nvme_admin": false, 00:14:16.700 "nvme_io": false, 00:14:16.700 "nvme_io_md": false, 00:14:16.700 "write_zeroes": true, 00:14:16.700 "zcopy": true, 00:14:16.700 "get_zone_info": false, 00:14:16.700 "zone_management": false, 00:14:16.700 "zone_append": false, 00:14:16.700 "compare": false, 00:14:16.700 "compare_and_write": false, 00:14:16.700 "abort": true, 00:14:16.700 "seek_hole": false, 00:14:16.700 "seek_data": false, 00:14:16.700 "copy": true, 00:14:16.700 "nvme_iov_md": false 00:14:16.700 }, 00:14:16.700 "memory_domains": [ 00:14:16.700 { 00:14:16.700 "dma_device_id": "system", 00:14:16.700 "dma_device_type": 1 00:14:16.700 }, 00:14:16.700 { 00:14:16.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.700 "dma_device_type": 2 00:14:16.700 } 00:14:16.700 ], 00:14:16.700 "driver_specific": {} 00:14:16.700 } 00:14:16.700 ] 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.700 21:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.959 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.959 "name": "Existed_Raid", 00:14:16.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.959 "strip_size_kb": 64, 00:14:16.959 "state": "configuring", 00:14:16.959 "raid_level": "concat", 00:14:16.959 "superblock": false, 00:14:16.959 "num_base_bdevs": 4, 00:14:16.959 "num_base_bdevs_discovered": 3, 00:14:16.959 "num_base_bdevs_operational": 4, 00:14:16.959 "base_bdevs_list": [ 00:14:16.959 { 00:14:16.959 "name": "BaseBdev1", 00:14:16.959 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:16.959 "is_configured": true, 00:14:16.959 "data_offset": 0, 00:14:16.959 "data_size": 65536 00:14:16.959 }, 00:14:16.959 { 00:14:16.959 "name": null, 00:14:16.959 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:16.959 "is_configured": false, 00:14:16.959 "data_offset": 0, 00:14:16.959 "data_size": 
65536 00:14:16.959 }, 00:14:16.959 { 00:14:16.959 "name": "BaseBdev3", 00:14:16.959 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:16.959 "is_configured": true, 00:14:16.959 "data_offset": 0, 00:14:16.959 "data_size": 65536 00:14:16.959 }, 00:14:16.959 { 00:14:16.959 "name": "BaseBdev4", 00:14:16.959 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:16.959 "is_configured": true, 00:14:16.959 "data_offset": 0, 00:14:16.959 "data_size": 65536 00:14:16.959 } 00:14:16.959 ] 00:14:16.959 }' 00:14:16.959 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.959 21:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.217 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.218 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:17.786 [2024-07-15 21:50:32.861882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.786 21:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.045 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.045 "name": "Existed_Raid", 00:14:18.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.045 "strip_size_kb": 64, 00:14:18.045 "state": "configuring", 00:14:18.045 "raid_level": "concat", 00:14:18.045 "superblock": false, 00:14:18.045 "num_base_bdevs": 4, 00:14:18.045 "num_base_bdevs_discovered": 2, 00:14:18.045 "num_base_bdevs_operational": 4, 00:14:18.045 "base_bdevs_list": [ 00:14:18.045 { 00:14:18.045 "name": "BaseBdev1", 00:14:18.045 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:18.045 "is_configured": true, 
00:14:18.045 "data_offset": 0, 00:14:18.045 "data_size": 65536 00:14:18.045 }, 00:14:18.045 { 00:14:18.045 "name": null, 00:14:18.045 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:18.045 "is_configured": false, 00:14:18.045 "data_offset": 0, 00:14:18.045 "data_size": 65536 00:14:18.045 }, 00:14:18.045 { 00:14:18.045 "name": null, 00:14:18.045 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:18.045 "is_configured": false, 00:14:18.045 "data_offset": 0, 00:14:18.045 "data_size": 65536 00:14:18.045 }, 00:14:18.045 { 00:14:18.045 "name": "BaseBdev4", 00:14:18.045 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:18.045 "is_configured": true, 00:14:18.045 "data_offset": 0, 00:14:18.045 "data_size": 65536 00:14:18.045 } 00:14:18.045 ] 00:14:18.045 }' 00:14:18.045 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.046 21:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.304 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.304 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.562 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:18.562 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.909 [2024-07-15 21:50:33.977913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.909 21:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.174 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.174 "name": "Existed_Raid", 00:14:19.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.174 "strip_size_kb": 64, 00:14:19.174 "state": "configuring", 00:14:19.174 "raid_level": "concat", 00:14:19.174 "superblock": false, 
00:14:19.174 "num_base_bdevs": 4, 00:14:19.174 "num_base_bdevs_discovered": 3, 00:14:19.174 "num_base_bdevs_operational": 4, 00:14:19.174 "base_bdevs_list": [ 00:14:19.174 { 00:14:19.174 "name": "BaseBdev1", 00:14:19.174 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:19.174 "is_configured": true, 00:14:19.174 "data_offset": 0, 00:14:19.174 "data_size": 65536 00:14:19.174 }, 00:14:19.174 { 00:14:19.174 "name": null, 00:14:19.174 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:19.174 "is_configured": false, 00:14:19.174 "data_offset": 0, 00:14:19.174 "data_size": 65536 00:14:19.174 }, 00:14:19.174 { 00:14:19.174 "name": "BaseBdev3", 00:14:19.174 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:19.174 "is_configured": true, 00:14:19.174 "data_offset": 0, 00:14:19.174 "data_size": 65536 00:14:19.174 }, 00:14:19.174 { 00:14:19.174 "name": "BaseBdev4", 00:14:19.174 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:19.174 "is_configured": true, 00:14:19.174 "data_offset": 0, 00:14:19.174 "data_size": 65536 00:14:19.174 } 00:14:19.174 ] 00:14:19.174 }' 00:14:19.174 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.174 21:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.433 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.433 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.692 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:19.692 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.951 [2024-07-15 21:50:34.950055] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.951 21:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.210 21:50:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.210 "name": "Existed_Raid", 00:14:20.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.210 "strip_size_kb": 64, 00:14:20.210 "state": "configuring", 00:14:20.210 "raid_level": "concat", 00:14:20.210 "superblock": false, 00:14:20.210 "num_base_bdevs": 4, 00:14:20.210 "num_base_bdevs_discovered": 2, 00:14:20.210 "num_base_bdevs_operational": 4, 00:14:20.210 "base_bdevs_list": [ 00:14:20.210 { 00:14:20.210 "name": null, 00:14:20.210 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:20.210 "is_configured": false, 00:14:20.210 "data_offset": 0, 00:14:20.210 "data_size": 65536 00:14:20.210 }, 00:14:20.210 { 00:14:20.210 "name": null, 00:14:20.210 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:20.210 "is_configured": false, 00:14:20.210 "data_offset": 0, 00:14:20.210 "data_size": 65536 00:14:20.210 }, 00:14:20.210 { 00:14:20.210 "name": "BaseBdev3", 00:14:20.210 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:20.210 "is_configured": true, 00:14:20.210 "data_offset": 0, 00:14:20.210 "data_size": 65536 00:14:20.210 }, 00:14:20.210 { 00:14:20.210 "name": "BaseBdev4", 00:14:20.210 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:20.210 "is_configured": true, 00:14:20.210 "data_offset": 0, 00:14:20.210 "data_size": 65536 00:14:20.210 } 00:14:20.210 ] 00:14:20.210 }' 00:14:20.210 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.210 21:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.469 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.469 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.728 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:20.728 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:20.987 [2024-07-15 21:50:35.976429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.987 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.988 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.988 21:50:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.988 21:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.246 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.246 "name": "Existed_Raid", 00:14:21.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.246 "strip_size_kb": 64, 00:14:21.246 "state": "configuring", 00:14:21.246 "raid_level": "concat", 00:14:21.246 "superblock": false, 00:14:21.246 "num_base_bdevs": 4, 00:14:21.246 "num_base_bdevs_discovered": 3, 00:14:21.246 "num_base_bdevs_operational": 4, 00:14:21.246 "base_bdevs_list": [ 00:14:21.246 { 00:14:21.246 "name": null, 00:14:21.246 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:21.246 "is_configured": false, 00:14:21.246 "data_offset": 0, 00:14:21.246 "data_size": 65536 00:14:21.246 }, 00:14:21.246 { 00:14:21.246 "name": "BaseBdev2", 00:14:21.246 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:21.246 "is_configured": true, 00:14:21.246 "data_offset": 0, 00:14:21.246 "data_size": 65536 00:14:21.246 }, 00:14:21.246 { 00:14:21.246 "name": "BaseBdev3", 00:14:21.246 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:21.246 "is_configured": true, 00:14:21.246 "data_offset": 0, 00:14:21.246 "data_size": 65536 00:14:21.246 }, 00:14:21.246 { 00:14:21.246 "name": "BaseBdev4", 00:14:21.246 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:21.246 "is_configured": true, 00:14:21.246 "data_offset": 0, 00:14:21.246 "data_size": 65536 00:14:21.246 } 00:14:21.246 ] 00:14:21.246 }' 00:14:21.246 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.246 21:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.504 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.504 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.504 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:21.504 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.504 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:21.763 21:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 41a5e19b-42f4-11ef-9f7f-e9a656123a8b 00:14:22.022 [2024-07-15 21:50:37.120955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.022 [2024-07-15 21:50:37.120996] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x30b72fe34f00 00:14:22.022 [2024-07-15 21:50:37.121001] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:22.022 [2024-07-15 21:50:37.121043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30b72fe97e20 00:14:22.022 [2024-07-15 21:50:37.121177] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30b72fe34f00 00:14:22.022 [2024-07-15 21:50:37.121197] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x30b72fe34f00 00:14:22.022 [2024-07-15 21:50:37.121233] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.022 NewBaseBdev 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:22.022 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.280 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.539 [ 00:14:22.539 { 00:14:22.539 "name": "NewBaseBdev", 00:14:22.539 "aliases": [ 00:14:22.539 "41a5e19b-42f4-11ef-9f7f-e9a656123a8b" 00:14:22.539 ], 00:14:22.539 "product_name": "Malloc disk", 00:14:22.539 "block_size": 512, 00:14:22.539 "num_blocks": 65536, 00:14:22.539 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.539 "assigned_rate_limits": { 00:14:22.539 "rw_ios_per_sec": 0, 00:14:22.539 "rw_mbytes_per_sec": 0, 00:14:22.539 "r_mbytes_per_sec": 0, 00:14:22.539 "w_mbytes_per_sec": 0 00:14:22.539 }, 00:14:22.539 "claimed": true, 00:14:22.539 "claim_type": "exclusive_write", 00:14:22.539 "zoned": false, 00:14:22.539 "supported_io_types": { 00:14:22.539 "read": true, 00:14:22.539 "write": true, 00:14:22.539 "unmap": true, 00:14:22.539 "flush": true, 00:14:22.539 "reset": true, 00:14:22.539 "nvme_admin": false, 00:14:22.539 "nvme_io": false, 00:14:22.539 "nvme_io_md": false, 00:14:22.539 "write_zeroes": true, 00:14:22.539 "zcopy": true, 00:14:22.539 "get_zone_info": false, 00:14:22.539 "zone_management": false, 00:14:22.539 "zone_append": false, 00:14:22.539 "compare": false, 00:14:22.539 "compare_and_write": false, 00:14:22.539 "abort": true, 00:14:22.539 "seek_hole": false, 00:14:22.539 "seek_data": false, 00:14:22.539 "copy": true, 00:14:22.539 "nvme_iov_md": false 00:14:22.539 }, 00:14:22.539 "memory_domains": [ 00:14:22.539 { 00:14:22.539 "dma_device_id": "system", 00:14:22.539 "dma_device_type": 1 00:14:22.539 }, 00:14:22.539 { 00:14:22.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.539 "dma_device_type": 2 00:14:22.539 } 00:14:22.539 ], 00:14:22.539 "driver_specific": {} 00:14:22.539 } 00:14:22.539 ] 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.539 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.798 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.798 "name": "Existed_Raid", 00:14:22.798 "uuid": "4523eb66-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.798 "strip_size_kb": 64, 00:14:22.798 "state": "online", 00:14:22.798 "raid_level": "concat", 00:14:22.798 "superblock": false, 00:14:22.798 "num_base_bdevs": 4, 00:14:22.798 "num_base_bdevs_discovered": 4, 00:14:22.798 "num_base_bdevs_operational": 4, 00:14:22.798 "base_bdevs_list": [ 00:14:22.798 { 00:14:22.798 "name": "NewBaseBdev", 00:14:22.798 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.798 "is_configured": true, 00:14:22.798 "data_offset": 0, 00:14:22.798 "data_size": 65536 00:14:22.798 }, 00:14:22.798 { 00:14:22.798 "name": "BaseBdev2", 00:14:22.798 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.798 "is_configured": true, 00:14:22.798 "data_offset": 0, 00:14:22.798 "data_size": 65536 00:14:22.798 }, 00:14:22.798 { 00:14:22.798 "name": "BaseBdev3", 00:14:22.798 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.798 "is_configured": true, 00:14:22.798 "data_offset": 0, 00:14:22.798 "data_size": 65536 00:14:22.798 }, 00:14:22.798 { 00:14:22.798 "name": "BaseBdev4", 00:14:22.798 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:22.798 "is_configured": true, 00:14:22.798 "data_offset": 0, 00:14:22.798 "data_size": 65536 00:14:22.798 } 00:14:22.798 ] 00:14:22.798 }' 00:14:22.798 21:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.798 21:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.055 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.055 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:23.055 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.055 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.056 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.056 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.056 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:23.056 21:50:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.313 [2024-07-15 21:50:38.280935] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.313 "name": "Existed_Raid", 00:14:23.313 "aliases": [ 00:14:23.313 "4523eb66-42f4-11ef-9f7f-e9a656123a8b" 00:14:23.313 ], 00:14:23.313 "product_name": "Raid Volume", 00:14:23.313 "block_size": 512, 00:14:23.313 "num_blocks": 262144, 00:14:23.313 "uuid": "4523eb66-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "assigned_rate_limits": { 00:14:23.313 "rw_ios_per_sec": 0, 00:14:23.313 "rw_mbytes_per_sec": 0, 00:14:23.313 "r_mbytes_per_sec": 0, 00:14:23.313 "w_mbytes_per_sec": 0 00:14:23.313 }, 00:14:23.313 "claimed": false, 00:14:23.313 "zoned": false, 00:14:23.313 "supported_io_types": { 00:14:23.313 "read": true, 00:14:23.313 "write": true, 00:14:23.313 "unmap": true, 00:14:23.313 "flush": true, 00:14:23.313 "reset": true, 00:14:23.313 "nvme_admin": false, 00:14:23.313 "nvme_io": false, 00:14:23.313 "nvme_io_md": false, 00:14:23.313 "write_zeroes": true, 00:14:23.313 "zcopy": false, 00:14:23.313 "get_zone_info": false, 00:14:23.313 "zone_management": false, 00:14:23.313 "zone_append": false, 00:14:23.313 "compare": false, 00:14:23.313 "compare_and_write": false, 00:14:23.313 "abort": false, 00:14:23.313 "seek_hole": false, 00:14:23.313 "seek_data": false, 00:14:23.313 "copy": false, 00:14:23.313 "nvme_iov_md": false 00:14:23.313 }, 00:14:23.313 "memory_domains": [ 00:14:23.313 { 00:14:23.313 "dma_device_id": "system", 00:14:23.313 "dma_device_type": 1 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.313 "dma_device_type": 2 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "system", 00:14:23.313 "dma_device_type": 1 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.313 "dma_device_type": 2 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "system", 00:14:23.313 "dma_device_type": 1 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.313 "dma_device_type": 2 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "system", 00:14:23.313 "dma_device_type": 1 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.313 "dma_device_type": 2 00:14:23.313 } 00:14:23.313 ], 00:14:23.313 "driver_specific": { 00:14:23.313 "raid": { 00:14:23.313 "uuid": "4523eb66-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "strip_size_kb": 64, 00:14:23.313 "state": "online", 00:14:23.313 "raid_level": "concat", 00:14:23.313 "superblock": false, 00:14:23.313 "num_base_bdevs": 4, 00:14:23.313 "num_base_bdevs_discovered": 4, 00:14:23.313 "num_base_bdevs_operational": 4, 00:14:23.313 "base_bdevs_list": [ 00:14:23.313 { 00:14:23.313 "name": "NewBaseBdev", 00:14:23.313 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "is_configured": true, 00:14:23.313 "data_offset": 0, 00:14:23.313 "data_size": 65536 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "name": "BaseBdev2", 00:14:23.313 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "is_configured": true, 00:14:23.313 "data_offset": 0, 00:14:23.313 "data_size": 65536 00:14:23.313 }, 00:14:23.313 { 00:14:23.313 "name": "BaseBdev3", 00:14:23.313 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "is_configured": true, 00:14:23.313 "data_offset": 0, 00:14:23.313 "data_size": 65536 00:14:23.313 
}, 00:14:23.313 { 00:14:23.313 "name": "BaseBdev4", 00:14:23.313 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.313 "is_configured": true, 00:14:23.313 "data_offset": 0, 00:14:23.313 "data_size": 65536 00:14:23.313 } 00:14:23.313 ] 00:14:23.313 } 00:14:23.313 } 00:14:23.313 }' 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:23.313 BaseBdev2 00:14:23.313 BaseBdev3 00:14:23.313 BaseBdev4' 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:23.313 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.571 "name": "NewBaseBdev", 00:14:23.571 "aliases": [ 00:14:23.571 "41a5e19b-42f4-11ef-9f7f-e9a656123a8b" 00:14:23.571 ], 00:14:23.571 "product_name": "Malloc disk", 00:14:23.571 "block_size": 512, 00:14:23.571 "num_blocks": 65536, 00:14:23.571 "uuid": "41a5e19b-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.571 "assigned_rate_limits": { 00:14:23.571 "rw_ios_per_sec": 0, 00:14:23.571 "rw_mbytes_per_sec": 0, 00:14:23.571 "r_mbytes_per_sec": 0, 00:14:23.571 "w_mbytes_per_sec": 0 00:14:23.571 }, 00:14:23.571 "claimed": true, 00:14:23.571 "claim_type": "exclusive_write", 00:14:23.571 "zoned": false, 00:14:23.571 "supported_io_types": { 00:14:23.571 "read": true, 00:14:23.571 "write": true, 00:14:23.571 "unmap": true, 00:14:23.571 "flush": true, 00:14:23.571 "reset": true, 00:14:23.571 "nvme_admin": false, 00:14:23.571 "nvme_io": false, 00:14:23.571 "nvme_io_md": false, 00:14:23.571 "write_zeroes": true, 00:14:23.571 "zcopy": true, 00:14:23.571 "get_zone_info": false, 00:14:23.571 "zone_management": false, 00:14:23.571 "zone_append": false, 00:14:23.571 "compare": false, 00:14:23.571 "compare_and_write": false, 00:14:23.571 "abort": true, 00:14:23.571 "seek_hole": false, 00:14:23.571 "seek_data": false, 00:14:23.571 "copy": true, 00:14:23.571 "nvme_iov_md": false 00:14:23.571 }, 00:14:23.571 "memory_domains": [ 00:14:23.571 { 00:14:23.571 "dma_device_id": "system", 00:14:23.571 "dma_device_type": 1 00:14:23.571 }, 00:14:23.571 { 00:14:23.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.571 "dma_device_type": 2 00:14:23.571 } 00:14:23.571 ], 00:14:23.571 "driver_specific": {} 00:14:23.571 }' 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:23.571 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.829 "name": "BaseBdev2", 00:14:23.829 "aliases": [ 00:14:23.829 "3f441e34-42f4-11ef-9f7f-e9a656123a8b" 00:14:23.829 ], 00:14:23.829 "product_name": "Malloc disk", 00:14:23.829 "block_size": 512, 00:14:23.829 "num_blocks": 65536, 00:14:23.829 "uuid": "3f441e34-42f4-11ef-9f7f-e9a656123a8b", 00:14:23.829 "assigned_rate_limits": { 00:14:23.829 "rw_ios_per_sec": 0, 00:14:23.829 "rw_mbytes_per_sec": 0, 00:14:23.829 "r_mbytes_per_sec": 0, 00:14:23.829 "w_mbytes_per_sec": 0 00:14:23.829 }, 00:14:23.829 "claimed": true, 00:14:23.829 "claim_type": "exclusive_write", 00:14:23.829 "zoned": false, 00:14:23.829 "supported_io_types": { 00:14:23.829 "read": true, 00:14:23.829 "write": true, 00:14:23.829 "unmap": true, 00:14:23.829 "flush": true, 00:14:23.829 "reset": true, 00:14:23.829 "nvme_admin": false, 00:14:23.829 "nvme_io": false, 00:14:23.829 "nvme_io_md": false, 00:14:23.829 "write_zeroes": true, 00:14:23.829 "zcopy": true, 00:14:23.829 "get_zone_info": false, 00:14:23.829 "zone_management": false, 00:14:23.829 "zone_append": false, 00:14:23.829 "compare": false, 00:14:23.829 "compare_and_write": false, 00:14:23.829 "abort": true, 00:14:23.829 "seek_hole": false, 00:14:23.829 "seek_data": false, 00:14:23.829 "copy": true, 00:14:23.829 "nvme_iov_md": false 00:14:23.829 }, 00:14:23.829 "memory_domains": [ 00:14:23.829 { 00:14:23.829 "dma_device_id": "system", 00:14:23.829 "dma_device_type": 1 00:14:23.829 }, 00:14:23.829 { 00:14:23.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.829 "dma_device_type": 2 00:14:23.829 } 00:14:23.829 ], 00:14:23.829 "driver_specific": {} 00:14:23.829 }' 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.829 21:50:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.829 21:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.087 "name": "BaseBdev3", 00:14:24.087 "aliases": [ 00:14:24.087 "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b" 00:14:24.087 ], 00:14:24.087 "product_name": "Malloc disk", 00:14:24.087 "block_size": 512, 00:14:24.087 "num_blocks": 65536, 00:14:24.087 "uuid": "3fb8b2e2-42f4-11ef-9f7f-e9a656123a8b", 00:14:24.087 "assigned_rate_limits": { 00:14:24.087 "rw_ios_per_sec": 0, 00:14:24.087 "rw_mbytes_per_sec": 0, 00:14:24.087 "r_mbytes_per_sec": 0, 00:14:24.087 "w_mbytes_per_sec": 0 00:14:24.087 }, 00:14:24.087 "claimed": true, 00:14:24.087 "claim_type": "exclusive_write", 00:14:24.087 "zoned": false, 00:14:24.087 "supported_io_types": { 00:14:24.087 "read": true, 00:14:24.087 "write": true, 00:14:24.087 "unmap": true, 00:14:24.087 "flush": true, 00:14:24.087 "reset": true, 00:14:24.087 "nvme_admin": false, 00:14:24.087 "nvme_io": false, 00:14:24.087 "nvme_io_md": false, 00:14:24.087 "write_zeroes": true, 00:14:24.087 "zcopy": true, 00:14:24.087 "get_zone_info": false, 00:14:24.087 "zone_management": false, 00:14:24.087 "zone_append": false, 00:14:24.087 "compare": false, 00:14:24.087 "compare_and_write": false, 00:14:24.087 "abort": true, 00:14:24.087 "seek_hole": false, 00:14:24.087 "seek_data": false, 00:14:24.087 "copy": true, 00:14:24.087 "nvme_iov_md": false 00:14:24.087 }, 00:14:24.087 "memory_domains": [ 00:14:24.087 { 00:14:24.087 "dma_device_id": "system", 00:14:24.087 "dma_device_type": 1 00:14:24.087 }, 00:14:24.087 { 00:14:24.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.087 "dma_device_type": 2 00:14:24.087 } 00:14:24.087 ], 00:14:24.087 "driver_specific": {} 00:14:24.087 }' 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.087 21:50:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.087 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:24.344 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.344 "name": "BaseBdev4", 00:14:24.344 "aliases": [ 00:14:24.344 "4019bfc0-42f4-11ef-9f7f-e9a656123a8b" 00:14:24.344 ], 00:14:24.344 "product_name": "Malloc disk", 00:14:24.344 "block_size": 512, 00:14:24.344 "num_blocks": 65536, 00:14:24.345 "uuid": "4019bfc0-42f4-11ef-9f7f-e9a656123a8b", 00:14:24.345 "assigned_rate_limits": { 00:14:24.345 "rw_ios_per_sec": 0, 00:14:24.345 "rw_mbytes_per_sec": 0, 00:14:24.345 "r_mbytes_per_sec": 0, 00:14:24.345 "w_mbytes_per_sec": 0 00:14:24.345 }, 00:14:24.345 "claimed": true, 00:14:24.345 "claim_type": "exclusive_write", 00:14:24.345 "zoned": false, 00:14:24.345 "supported_io_types": { 00:14:24.345 "read": true, 00:14:24.345 "write": true, 00:14:24.345 "unmap": true, 00:14:24.345 "flush": true, 00:14:24.345 "reset": true, 00:14:24.345 "nvme_admin": false, 00:14:24.345 "nvme_io": false, 00:14:24.345 "nvme_io_md": false, 00:14:24.345 "write_zeroes": true, 00:14:24.345 "zcopy": true, 00:14:24.345 "get_zone_info": false, 00:14:24.345 "zone_management": false, 00:14:24.345 "zone_append": false, 00:14:24.345 "compare": false, 00:14:24.345 "compare_and_write": false, 00:14:24.345 "abort": true, 00:14:24.345 "seek_hole": false, 00:14:24.345 "seek_data": false, 00:14:24.345 "copy": true, 00:14:24.345 "nvme_iov_md": false 00:14:24.345 }, 00:14:24.345 "memory_domains": [ 00:14:24.345 { 00:14:24.345 "dma_device_id": "system", 00:14:24.345 "dma_device_type": 1 00:14:24.345 }, 00:14:24.345 { 00:14:24.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.345 "dma_device_type": 2 00:14:24.345 } 00:14:24.345 ], 00:14:24.345 "driver_specific": {} 00:14:24.345 }' 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.345 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:24.603 [2024-07-15 21:50:39.700941] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.603 [2024-07-15 21:50:39.700972] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.603 [2024-07-15 21:50:39.701000] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.603 [2024-07-15 21:50:39.701018] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.603 [2024-07-15 21:50:39.701038] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30b72fe34f00 name Existed_Raid, state offline 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60651 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 60651 ']' 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 60651 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 60651 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:14:24.603 killing process with pid 60651 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 60651' 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 60651 00:14:24.603 [2024-07-15 21:50:39.726875] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.603 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 60651 00:14:24.603 [2024-07-15 21:50:39.760969] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.860 21:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:24.860 00:14:24.860 real 0m25.763s 00:14:24.860 user 0m46.851s 00:14:24.860 sys 0m3.770s 00:14:24.860 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:24.860 21:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.860 ************************************ 00:14:24.861 END TEST raid_state_function_test 00:14:24.861 ************************************ 00:14:24.861 21:50:40 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:14:24.861 21:50:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:24.861 21:50:40 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:14:24.861 21:50:40 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:24.861 21:50:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.861 ************************************ 00:14:24.861 START TEST raid_state_function_test_sb 00:14:24.861 ************************************ 
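The START/END banners and the real/user/sys block above come from the autotest harness's run_test wrapper, which times a test function and lets its exit status propagate. A minimal sketch of that pattern, matching the banner text in this log but omitting the suite-stack and xtrace bookkeeping of the real autotest_common.sh helper:

    # Hypothetical, simplified run_test: print banners, time the test
    # function, and let a non-zero exit status propagate via set -e.
    run_test() {
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        time "$@"    # e.g. run_test raid_state_function_test_sb raid_state_function_test concat 4 true

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }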
00:14:24.861 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # raid_state_function_test concat 4 true 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61466 00:14:25.118 Process raid pid: 61466 00:14:25.118 21:50:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61466' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61466 /var/tmp/spdk-raid.sock 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 61466 ']' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:25.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:25.118 21:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.119 [2024-07-15 21:50:40.062138] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:25.119 [2024-07-15 21:50:40.062302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:25.685 EAL: TSC is not safe to use in SMP mode 00:14:25.685 EAL: TSC is not invariant 00:14:25.685 [2024-07-15 21:50:40.593625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.685 [2024-07-15 21:50:40.686528] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
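The waitforlisten step above blocks until the freshly started bdev_svc daemon answers on its UNIX domain RPC socket. A rough sketch of that polling idiom, assuming rpc_get_methods as the liveness probe and made-up retry/sleep values (the real helper lives in autotest_common.sh):

    # Illustrative only: poll until $pid listens on $rpc_addr or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            # Stop waiting if the daemon already died.
            kill -0 "$pid" 2> /dev/null || return 1
            # Any successful RPC proves the server is up; rpc_get_methods is cheap.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }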
00:14:25.685 [2024-07-15 21:50:40.689501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.685 [2024-07-15 21:50:40.690634] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.685 [2024-07-15 21:50:40.690654] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.945 21:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:25.945 21:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:14:25.945 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:26.205 [2024-07-15 21:50:41.276059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.205 [2024-07-15 21:50:41.276131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.205 [2024-07-15 21:50:41.276148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.205 [2024-07-15 21:50:41.276161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.205 [2024-07-15 21:50:41.276166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.205 [2024-07-15 21:50:41.276182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.205 [2024-07-15 21:50:41.276187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:26.205 [2024-07-15 21:50:41.276196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.205 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.464 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.464 "name": "Existed_Raid", 00:14:26.464 "uuid": 
"479deddc-42f4-11ef-9f7f-e9a656123a8b", 00:14:26.464 "strip_size_kb": 64, 00:14:26.464 "state": "configuring", 00:14:26.464 "raid_level": "concat", 00:14:26.464 "superblock": true, 00:14:26.464 "num_base_bdevs": 4, 00:14:26.464 "num_base_bdevs_discovered": 0, 00:14:26.464 "num_base_bdevs_operational": 4, 00:14:26.464 "base_bdevs_list": [ 00:14:26.464 { 00:14:26.464 "name": "BaseBdev1", 00:14:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.464 "is_configured": false, 00:14:26.464 "data_offset": 0, 00:14:26.464 "data_size": 0 00:14:26.464 }, 00:14:26.464 { 00:14:26.464 "name": "BaseBdev2", 00:14:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.464 "is_configured": false, 00:14:26.464 "data_offset": 0, 00:14:26.464 "data_size": 0 00:14:26.464 }, 00:14:26.464 { 00:14:26.464 "name": "BaseBdev3", 00:14:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.464 "is_configured": false, 00:14:26.464 "data_offset": 0, 00:14:26.464 "data_size": 0 00:14:26.464 }, 00:14:26.464 { 00:14:26.464 "name": "BaseBdev4", 00:14:26.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.464 "is_configured": false, 00:14:26.464 "data_offset": 0, 00:14:26.464 "data_size": 0 00:14:26.464 } 00:14:26.464 ] 00:14:26.465 }' 00:14:26.465 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.465 21:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.724 21:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.982 [2024-07-15 21:50:42.012047] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.982 [2024-07-15 21:50:42.012074] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfeffee34500 name Existed_Raid, state configuring 00:14:26.982 21:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:27.239 [2024-07-15 21:50:42.328100] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.239 [2024-07-15 21:50:42.328174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.240 [2024-07-15 21:50:42.328182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.240 [2024-07-15 21:50:42.328195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.240 [2024-07-15 21:50:42.328199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.240 [2024-07-15 21:50:42.328210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.240 [2024-07-15 21:50:42.328214] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:27.240 [2024-07-15 21:50:42.328224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:27.240 21:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.497 [2024-07-15 21:50:42.605157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:14:27.497 BaseBdev1 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:27.497 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.755 21:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:28.013 [ 00:14:28.013 { 00:14:28.013 "name": "BaseBdev1", 00:14:28.013 "aliases": [ 00:14:28.014 "486891c7-42f4-11ef-9f7f-e9a656123a8b" 00:14:28.014 ], 00:14:28.014 "product_name": "Malloc disk", 00:14:28.014 "block_size": 512, 00:14:28.014 "num_blocks": 65536, 00:14:28.014 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:28.014 "assigned_rate_limits": { 00:14:28.014 "rw_ios_per_sec": 0, 00:14:28.014 "rw_mbytes_per_sec": 0, 00:14:28.014 "r_mbytes_per_sec": 0, 00:14:28.014 "w_mbytes_per_sec": 0 00:14:28.014 }, 00:14:28.014 "claimed": true, 00:14:28.014 "claim_type": "exclusive_write", 00:14:28.014 "zoned": false, 00:14:28.014 "supported_io_types": { 00:14:28.014 "read": true, 00:14:28.014 "write": true, 00:14:28.014 "unmap": true, 00:14:28.014 "flush": true, 00:14:28.014 "reset": true, 00:14:28.014 "nvme_admin": false, 00:14:28.014 "nvme_io": false, 00:14:28.014 "nvme_io_md": false, 00:14:28.014 "write_zeroes": true, 00:14:28.014 "zcopy": true, 00:14:28.014 "get_zone_info": false, 00:14:28.014 "zone_management": false, 00:14:28.014 "zone_append": false, 00:14:28.014 "compare": false, 00:14:28.014 "compare_and_write": false, 00:14:28.014 "abort": true, 00:14:28.014 "seek_hole": false, 00:14:28.014 "seek_data": false, 00:14:28.014 "copy": true, 00:14:28.014 "nvme_iov_md": false 00:14:28.014 }, 00:14:28.014 "memory_domains": [ 00:14:28.014 { 00:14:28.014 "dma_device_id": "system", 00:14:28.014 "dma_device_type": 1 00:14:28.014 }, 00:14:28.014 { 00:14:28.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.014 "dma_device_type": 2 00:14:28.014 } 00:14:28.014 ], 00:14:28.014 "driver_specific": {} 00:14:28.014 } 00:14:28.014 ] 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.014 21:50:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.014 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.272 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.272 "name": "Existed_Raid", 00:14:28.272 "uuid": "483e7521-42f4-11ef-9f7f-e9a656123a8b", 00:14:28.272 "strip_size_kb": 64, 00:14:28.272 "state": "configuring", 00:14:28.272 "raid_level": "concat", 00:14:28.272 "superblock": true, 00:14:28.272 "num_base_bdevs": 4, 00:14:28.272 "num_base_bdevs_discovered": 1, 00:14:28.272 "num_base_bdevs_operational": 4, 00:14:28.272 "base_bdevs_list": [ 00:14:28.272 { 00:14:28.272 "name": "BaseBdev1", 00:14:28.272 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:28.272 "is_configured": true, 00:14:28.272 "data_offset": 2048, 00:14:28.272 "data_size": 63488 00:14:28.272 }, 00:14:28.272 { 00:14:28.272 "name": "BaseBdev2", 00:14:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.272 "is_configured": false, 00:14:28.272 "data_offset": 0, 00:14:28.272 "data_size": 0 00:14:28.272 }, 00:14:28.272 { 00:14:28.272 "name": "BaseBdev3", 00:14:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.272 "is_configured": false, 00:14:28.272 "data_offset": 0, 00:14:28.272 "data_size": 0 00:14:28.272 }, 00:14:28.272 { 00:14:28.272 "name": "BaseBdev4", 00:14:28.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.272 "is_configured": false, 00:14:28.272 "data_offset": 0, 00:14:28.272 "data_size": 0 00:14:28.272 } 00:14:28.272 ] 00:14:28.272 }' 00:14:28.272 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.272 21:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.855 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.855 [2024-07-15 21:50:43.980102] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.855 [2024-07-15 21:50:43.980146] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfeffee34500 name Existed_Raid, state configuring 00:14:28.855 21:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:29.127 [2024-07-15 21:50:44.288149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.127 [2024-07-15 21:50:44.289103] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.127 [2024-07-15 21:50:44.289157] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.127 [2024-07-15 21:50:44.289167] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.127 [2024-07-15 21:50:44.289179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.127 [2024-07-15 21:50:44.289184] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:29.127 [2024-07-15 21:50:44.289195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.127 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.386 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.386 "name": "Existed_Raid", 00:14:29.386 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:29.386 "strip_size_kb": 64, 00:14:29.386 "state": "configuring", 00:14:29.386 "raid_level": "concat", 00:14:29.386 "superblock": true, 00:14:29.386 "num_base_bdevs": 4, 00:14:29.386 "num_base_bdevs_discovered": 1, 00:14:29.386 "num_base_bdevs_operational": 4, 00:14:29.386 "base_bdevs_list": [ 00:14:29.386 { 00:14:29.386 "name": "BaseBdev1", 00:14:29.386 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:29.386 "is_configured": true, 00:14:29.386 "data_offset": 2048, 00:14:29.386 "data_size": 63488 00:14:29.386 }, 00:14:29.386 { 00:14:29.386 "name": "BaseBdev2", 00:14:29.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.386 "is_configured": false, 00:14:29.386 "data_offset": 0, 00:14:29.386 "data_size": 0 00:14:29.386 }, 00:14:29.386 { 00:14:29.386 "name": "BaseBdev3", 00:14:29.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.386 "is_configured": false, 00:14:29.386 "data_offset": 0, 00:14:29.386 "data_size": 0 00:14:29.386 }, 00:14:29.386 { 00:14:29.386 "name": "BaseBdev4", 
00:14:29.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.386 "is_configured": false, 00:14:29.386 "data_offset": 0, 00:14:29.386 "data_size": 0 00:14:29.386 } 00:14:29.386 ] 00:14:29.386 }' 00:14:29.386 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.645 21:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.903 21:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.162 [2024-07-15 21:50:45.144314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.162 BaseBdev2 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:30.162 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:30.420 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.679 [ 00:14:30.679 { 00:14:30.679 "name": "BaseBdev2", 00:14:30.679 "aliases": [ 00:14:30.679 "49ec28dc-42f4-11ef-9f7f-e9a656123a8b" 00:14:30.679 ], 00:14:30.679 "product_name": "Malloc disk", 00:14:30.679 "block_size": 512, 00:14:30.679 "num_blocks": 65536, 00:14:30.679 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:30.679 "assigned_rate_limits": { 00:14:30.679 "rw_ios_per_sec": 0, 00:14:30.679 "rw_mbytes_per_sec": 0, 00:14:30.679 "r_mbytes_per_sec": 0, 00:14:30.679 "w_mbytes_per_sec": 0 00:14:30.679 }, 00:14:30.679 "claimed": true, 00:14:30.679 "claim_type": "exclusive_write", 00:14:30.679 "zoned": false, 00:14:30.679 "supported_io_types": { 00:14:30.679 "read": true, 00:14:30.679 "write": true, 00:14:30.679 "unmap": true, 00:14:30.679 "flush": true, 00:14:30.679 "reset": true, 00:14:30.679 "nvme_admin": false, 00:14:30.679 "nvme_io": false, 00:14:30.679 "nvme_io_md": false, 00:14:30.679 "write_zeroes": true, 00:14:30.679 "zcopy": true, 00:14:30.679 "get_zone_info": false, 00:14:30.679 "zone_management": false, 00:14:30.679 "zone_append": false, 00:14:30.679 "compare": false, 00:14:30.679 "compare_and_write": false, 00:14:30.679 "abort": true, 00:14:30.679 "seek_hole": false, 00:14:30.679 "seek_data": false, 00:14:30.679 "copy": true, 00:14:30.679 "nvme_iov_md": false 00:14:30.679 }, 00:14:30.679 "memory_domains": [ 00:14:30.679 { 00:14:30.679 "dma_device_id": "system", 00:14:30.679 "dma_device_type": 1 00:14:30.679 }, 00:14:30.679 { 00:14:30.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.679 "dma_device_type": 2 00:14:30.679 } 00:14:30.679 ], 00:14:30.679 "driver_specific": {} 00:14:30.679 } 00:14:30.679 ] 00:14:30.679 21:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.679 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.938 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.938 "name": "Existed_Raid", 00:14:30.938 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:30.938 "strip_size_kb": 64, 00:14:30.938 "state": "configuring", 00:14:30.938 "raid_level": "concat", 00:14:30.938 "superblock": true, 00:14:30.938 "num_base_bdevs": 4, 00:14:30.938 "num_base_bdevs_discovered": 2, 00:14:30.938 "num_base_bdevs_operational": 4, 00:14:30.938 "base_bdevs_list": [ 00:14:30.938 { 00:14:30.938 "name": "BaseBdev1", 00:14:30.938 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:30.938 "is_configured": true, 00:14:30.938 "data_offset": 2048, 00:14:30.938 "data_size": 63488 00:14:30.938 }, 00:14:30.938 { 00:14:30.938 "name": "BaseBdev2", 00:14:30.938 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:30.938 "is_configured": true, 00:14:30.938 "data_offset": 2048, 00:14:30.938 "data_size": 63488 00:14:30.938 }, 00:14:30.938 { 00:14:30.938 "name": "BaseBdev3", 00:14:30.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.938 "is_configured": false, 00:14:30.938 "data_offset": 0, 00:14:30.938 "data_size": 0 00:14:30.938 }, 00:14:30.938 { 00:14:30.938 "name": "BaseBdev4", 00:14:30.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.938 "is_configured": false, 00:14:30.938 "data_offset": 0, 00:14:30.939 "data_size": 0 00:14:30.939 } 00:14:30.939 ] 00:14:30.939 }' 00:14:30.939 21:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.939 21:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.196 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.454 [2024-07-15 21:50:46.396350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.454 BaseBdev3 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:31.454 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.712 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.970 [ 00:14:31.970 { 00:14:31.970 "name": "BaseBdev3", 00:14:31.970 "aliases": [ 00:14:31.970 "4aab34ea-42f4-11ef-9f7f-e9a656123a8b" 00:14:31.970 ], 00:14:31.970 "product_name": "Malloc disk", 00:14:31.970 "block_size": 512, 00:14:31.970 "num_blocks": 65536, 00:14:31.970 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:31.971 "assigned_rate_limits": { 00:14:31.971 "rw_ios_per_sec": 0, 00:14:31.971 "rw_mbytes_per_sec": 0, 00:14:31.971 "r_mbytes_per_sec": 0, 00:14:31.971 "w_mbytes_per_sec": 0 00:14:31.971 }, 00:14:31.971 "claimed": true, 00:14:31.971 "claim_type": "exclusive_write", 00:14:31.971 "zoned": false, 00:14:31.971 "supported_io_types": { 00:14:31.971 "read": true, 00:14:31.971 "write": true, 00:14:31.971 "unmap": true, 00:14:31.971 "flush": true, 00:14:31.971 "reset": true, 00:14:31.971 "nvme_admin": false, 00:14:31.971 "nvme_io": false, 00:14:31.971 "nvme_io_md": false, 00:14:31.971 "write_zeroes": true, 00:14:31.971 "zcopy": true, 00:14:31.971 "get_zone_info": false, 00:14:31.971 "zone_management": false, 00:14:31.971 "zone_append": false, 00:14:31.971 "compare": false, 00:14:31.971 "compare_and_write": false, 00:14:31.971 "abort": true, 00:14:31.971 "seek_hole": false, 00:14:31.971 "seek_data": false, 00:14:31.971 "copy": true, 00:14:31.971 "nvme_iov_md": false 00:14:31.971 }, 00:14:31.971 "memory_domains": [ 00:14:31.971 { 00:14:31.971 "dma_device_id": "system", 00:14:31.971 "dma_device_type": 1 00:14:31.971 }, 00:14:31.971 { 00:14:31.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.971 "dma_device_type": 2 00:14:31.971 } 00:14:31.971 ], 00:14:31.971 "driver_specific": {} 00:14:31.971 } 00:14:31.971 ] 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.971 21:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.228 21:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.228 "name": "Existed_Raid", 00:14:32.228 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:32.228 "strip_size_kb": 64, 00:14:32.229 "state": "configuring", 00:14:32.229 "raid_level": "concat", 00:14:32.229 "superblock": true, 00:14:32.229 "num_base_bdevs": 4, 00:14:32.229 "num_base_bdevs_discovered": 3, 00:14:32.229 "num_base_bdevs_operational": 4, 00:14:32.229 "base_bdevs_list": [ 00:14:32.229 { 00:14:32.229 "name": "BaseBdev1", 00:14:32.229 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:32.229 "is_configured": true, 00:14:32.229 "data_offset": 2048, 00:14:32.229 "data_size": 63488 00:14:32.229 }, 00:14:32.229 { 00:14:32.229 "name": "BaseBdev2", 00:14:32.229 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:32.229 "is_configured": true, 00:14:32.229 "data_offset": 2048, 00:14:32.229 "data_size": 63488 00:14:32.229 }, 00:14:32.229 { 00:14:32.229 "name": "BaseBdev3", 00:14:32.229 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:32.229 "is_configured": true, 00:14:32.229 "data_offset": 2048, 00:14:32.229 "data_size": 63488 00:14:32.229 }, 00:14:32.229 { 00:14:32.229 "name": "BaseBdev4", 00:14:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.229 "is_configured": false, 00:14:32.229 "data_offset": 0, 00:14:32.229 "data_size": 0 00:14:32.229 } 00:14:32.229 ] 00:14:32.229 }' 00:14:32.229 21:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.229 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.487 21:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:32.745 [2024-07-15 21:50:47.772442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.745 [2024-07-15 21:50:47.772527] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfeffee34a00 00:14:32.745 [2024-07-15 21:50:47.772533] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:32.745 [2024-07-15 
21:50:47.772553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfeffee97e20 00:14:32.745 [2024-07-15 21:50:47.772640] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfeffee34a00 00:14:32.745 [2024-07-15 21:50:47.772644] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfeffee34a00 00:14:32.745 [2024-07-15 21:50:47.772685] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.745 BaseBdev4 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:32.745 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:33.005 21:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:33.264 [ 00:14:33.264 { 00:14:33.264 "name": "BaseBdev4", 00:14:33.264 "aliases": [ 00:14:33.264 "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b" 00:14:33.264 ], 00:14:33.264 "product_name": "Malloc disk", 00:14:33.264 "block_size": 512, 00:14:33.264 "num_blocks": 65536, 00:14:33.264 "uuid": "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.264 "assigned_rate_limits": { 00:14:33.264 "rw_ios_per_sec": 0, 00:14:33.264 "rw_mbytes_per_sec": 0, 00:14:33.264 "r_mbytes_per_sec": 0, 00:14:33.264 "w_mbytes_per_sec": 0 00:14:33.264 }, 00:14:33.264 "claimed": true, 00:14:33.264 "claim_type": "exclusive_write", 00:14:33.264 "zoned": false, 00:14:33.264 "supported_io_types": { 00:14:33.264 "read": true, 00:14:33.264 "write": true, 00:14:33.264 "unmap": true, 00:14:33.264 "flush": true, 00:14:33.264 "reset": true, 00:14:33.264 "nvme_admin": false, 00:14:33.264 "nvme_io": false, 00:14:33.264 "nvme_io_md": false, 00:14:33.264 "write_zeroes": true, 00:14:33.264 "zcopy": true, 00:14:33.264 "get_zone_info": false, 00:14:33.264 "zone_management": false, 00:14:33.264 "zone_append": false, 00:14:33.264 "compare": false, 00:14:33.264 "compare_and_write": false, 00:14:33.264 "abort": true, 00:14:33.264 "seek_hole": false, 00:14:33.264 "seek_data": false, 00:14:33.264 "copy": true, 00:14:33.264 "nvme_iov_md": false 00:14:33.264 }, 00:14:33.264 "memory_domains": [ 00:14:33.264 { 00:14:33.264 "dma_device_id": "system", 00:14:33.264 "dma_device_type": 1 00:14:33.264 }, 00:14:33.264 { 00:14:33.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.264 "dma_device_type": 2 00:14:33.264 } 00:14:33.264 ], 00:14:33.264 "driver_specific": {} 00:14:33.264 } 00:14:33.264 ] 00:14:33.264 21:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.265 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.523 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.523 "name": "Existed_Raid", 00:14:33.523 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.523 "strip_size_kb": 64, 00:14:33.523 "state": "online", 00:14:33.523 "raid_level": "concat", 00:14:33.523 "superblock": true, 00:14:33.523 "num_base_bdevs": 4, 00:14:33.523 "num_base_bdevs_discovered": 4, 00:14:33.523 "num_base_bdevs_operational": 4, 00:14:33.523 "base_bdevs_list": [ 00:14:33.523 { 00:14:33.523 "name": "BaseBdev1", 00:14:33.523 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.523 "is_configured": true, 00:14:33.523 "data_offset": 2048, 00:14:33.523 "data_size": 63488 00:14:33.523 }, 00:14:33.523 { 00:14:33.523 "name": "BaseBdev2", 00:14:33.523 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.523 "is_configured": true, 00:14:33.523 "data_offset": 2048, 00:14:33.523 "data_size": 63488 00:14:33.523 }, 00:14:33.523 { 00:14:33.523 "name": "BaseBdev3", 00:14:33.523 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.523 "is_configured": true, 00:14:33.523 "data_offset": 2048, 00:14:33.523 "data_size": 63488 00:14:33.523 }, 00:14:33.523 { 00:14:33.523 "name": "BaseBdev4", 00:14:33.523 "uuid": "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b", 00:14:33.523 "is_configured": true, 00:14:33.523 "data_offset": 2048, 00:14:33.523 "data_size": 63488 00:14:33.523 } 00:14:33.523 ] 00:14:33.523 }' 00:14:33.523 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.523 21:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:33.782 21:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:34.041 [2024-07-15 21:50:49.164379] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.041 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:34.041 "name": "Existed_Raid", 00:14:34.041 "aliases": [ 00:14:34.041 "4969897b-42f4-11ef-9f7f-e9a656123a8b" 00:14:34.041 ], 00:14:34.041 "product_name": "Raid Volume", 00:14:34.041 "block_size": 512, 00:14:34.041 "num_blocks": 253952, 00:14:34.041 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.041 "assigned_rate_limits": { 00:14:34.041 "rw_ios_per_sec": 0, 00:14:34.041 "rw_mbytes_per_sec": 0, 00:14:34.041 "r_mbytes_per_sec": 0, 00:14:34.041 "w_mbytes_per_sec": 0 00:14:34.041 }, 00:14:34.041 "claimed": false, 00:14:34.041 "zoned": false, 00:14:34.041 "supported_io_types": { 00:14:34.041 "read": true, 00:14:34.041 "write": true, 00:14:34.041 "unmap": true, 00:14:34.041 "flush": true, 00:14:34.041 "reset": true, 00:14:34.041 "nvme_admin": false, 00:14:34.041 "nvme_io": false, 00:14:34.041 "nvme_io_md": false, 00:14:34.041 "write_zeroes": true, 00:14:34.041 "zcopy": false, 00:14:34.041 "get_zone_info": false, 00:14:34.041 "zone_management": false, 00:14:34.041 "zone_append": false, 00:14:34.041 "compare": false, 00:14:34.042 "compare_and_write": false, 00:14:34.042 "abort": false, 00:14:34.042 "seek_hole": false, 00:14:34.042 "seek_data": false, 00:14:34.042 "copy": false, 00:14:34.042 "nvme_iov_md": false 00:14:34.042 }, 00:14:34.042 "memory_domains": [ 00:14:34.042 { 00:14:34.042 "dma_device_id": "system", 00:14:34.042 "dma_device_type": 1 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.042 "dma_device_type": 2 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "system", 00:14:34.042 "dma_device_type": 1 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.042 "dma_device_type": 2 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "system", 00:14:34.042 "dma_device_type": 1 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.042 "dma_device_type": 2 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "system", 00:14:34.042 "dma_device_type": 1 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.042 "dma_device_type": 2 00:14:34.042 } 00:14:34.042 ], 00:14:34.042 "driver_specific": { 00:14:34.042 "raid": { 00:14:34.042 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.042 "strip_size_kb": 64, 00:14:34.042 "state": "online", 00:14:34.042 "raid_level": "concat", 00:14:34.042 "superblock": true, 00:14:34.042 "num_base_bdevs": 4, 00:14:34.042 "num_base_bdevs_discovered": 4, 00:14:34.042 "num_base_bdevs_operational": 4, 00:14:34.042 "base_bdevs_list": [ 00:14:34.042 { 00:14:34.042 "name": "BaseBdev1", 00:14:34.042 "uuid": 
"486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.042 "is_configured": true, 00:14:34.042 "data_offset": 2048, 00:14:34.042 "data_size": 63488 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "name": "BaseBdev2", 00:14:34.042 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.042 "is_configured": true, 00:14:34.042 "data_offset": 2048, 00:14:34.042 "data_size": 63488 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "name": "BaseBdev3", 00:14:34.042 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.042 "is_configured": true, 00:14:34.042 "data_offset": 2048, 00:14:34.042 "data_size": 63488 00:14:34.042 }, 00:14:34.042 { 00:14:34.042 "name": "BaseBdev4", 00:14:34.042 "uuid": "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.042 "is_configured": true, 00:14:34.042 "data_offset": 2048, 00:14:34.042 "data_size": 63488 00:14:34.042 } 00:14:34.042 ] 00:14:34.042 } 00:14:34.042 } 00:14:34.042 }' 00:14:34.042 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.042 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:34.042 BaseBdev2 00:14:34.042 BaseBdev3 00:14:34.042 BaseBdev4' 00:14:34.042 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.042 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:34.042 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.301 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.301 "name": "BaseBdev1", 00:14:34.301 "aliases": [ 00:14:34.301 "486891c7-42f4-11ef-9f7f-e9a656123a8b" 00:14:34.301 ], 00:14:34.301 "product_name": "Malloc disk", 00:14:34.301 "block_size": 512, 00:14:34.301 "num_blocks": 65536, 00:14:34.301 "uuid": "486891c7-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.301 "assigned_rate_limits": { 00:14:34.301 "rw_ios_per_sec": 0, 00:14:34.302 "rw_mbytes_per_sec": 0, 00:14:34.302 "r_mbytes_per_sec": 0, 00:14:34.302 "w_mbytes_per_sec": 0 00:14:34.302 }, 00:14:34.302 "claimed": true, 00:14:34.302 "claim_type": "exclusive_write", 00:14:34.302 "zoned": false, 00:14:34.302 "supported_io_types": { 00:14:34.302 "read": true, 00:14:34.302 "write": true, 00:14:34.302 "unmap": true, 00:14:34.302 "flush": true, 00:14:34.302 "reset": true, 00:14:34.302 "nvme_admin": false, 00:14:34.302 "nvme_io": false, 00:14:34.302 "nvme_io_md": false, 00:14:34.302 "write_zeroes": true, 00:14:34.302 "zcopy": true, 00:14:34.302 "get_zone_info": false, 00:14:34.302 "zone_management": false, 00:14:34.302 "zone_append": false, 00:14:34.302 "compare": false, 00:14:34.302 "compare_and_write": false, 00:14:34.302 "abort": true, 00:14:34.302 "seek_hole": false, 00:14:34.302 "seek_data": false, 00:14:34.302 "copy": true, 00:14:34.302 "nvme_iov_md": false 00:14:34.302 }, 00:14:34.302 "memory_domains": [ 00:14:34.302 { 00:14:34.302 "dma_device_id": "system", 00:14:34.302 "dma_device_type": 1 00:14:34.302 }, 00:14:34.302 { 00:14:34.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.302 "dma_device_type": 2 00:14:34.302 } 00:14:34.302 ], 00:14:34.302 "driver_specific": {} 00:14:34.302 }' 00:14:34.302 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.302 21:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.302 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.302 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:34.561 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.822 "name": "BaseBdev2", 00:14:34.822 "aliases": [ 00:14:34.822 "49ec28dc-42f4-11ef-9f7f-e9a656123a8b" 00:14:34.822 ], 00:14:34.822 "product_name": "Malloc disk", 00:14:34.822 "block_size": 512, 00:14:34.822 "num_blocks": 65536, 00:14:34.822 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:34.822 "assigned_rate_limits": { 00:14:34.822 "rw_ios_per_sec": 0, 00:14:34.822 "rw_mbytes_per_sec": 0, 00:14:34.822 "r_mbytes_per_sec": 0, 00:14:34.822 "w_mbytes_per_sec": 0 00:14:34.822 }, 00:14:34.822 "claimed": true, 00:14:34.822 "claim_type": "exclusive_write", 00:14:34.822 "zoned": false, 00:14:34.822 "supported_io_types": { 00:14:34.822 "read": true, 00:14:34.822 "write": true, 00:14:34.822 "unmap": true, 00:14:34.822 "flush": true, 00:14:34.822 "reset": true, 00:14:34.822 "nvme_admin": false, 00:14:34.822 "nvme_io": false, 00:14:34.822 "nvme_io_md": false, 00:14:34.822 "write_zeroes": true, 00:14:34.822 "zcopy": true, 00:14:34.822 "get_zone_info": false, 00:14:34.822 "zone_management": false, 00:14:34.822 "zone_append": false, 00:14:34.822 "compare": false, 00:14:34.822 "compare_and_write": false, 00:14:34.822 "abort": true, 00:14:34.822 "seek_hole": false, 00:14:34.822 "seek_data": false, 00:14:34.822 "copy": true, 00:14:34.822 "nvme_iov_md": false 00:14:34.822 }, 00:14:34.822 "memory_domains": [ 00:14:34.822 { 00:14:34.822 "dma_device_id": "system", 00:14:34.822 "dma_device_type": 1 00:14:34.822 }, 00:14:34.822 { 00:14:34.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.822 "dma_device_type": 2 00:14:34.822 } 00:14:34.822 ], 00:14:34.822 "driver_specific": {} 00:14:34.822 }' 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:34.822 21:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.080 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.080 "name": "BaseBdev3", 00:14:35.080 "aliases": [ 00:14:35.080 "4aab34ea-42f4-11ef-9f7f-e9a656123a8b" 00:14:35.080 ], 00:14:35.080 "product_name": "Malloc disk", 00:14:35.080 "block_size": 512, 00:14:35.080 "num_blocks": 65536, 00:14:35.080 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:35.080 "assigned_rate_limits": { 00:14:35.080 "rw_ios_per_sec": 0, 00:14:35.080 "rw_mbytes_per_sec": 0, 00:14:35.080 "r_mbytes_per_sec": 0, 00:14:35.080 "w_mbytes_per_sec": 0 00:14:35.080 }, 00:14:35.080 "claimed": true, 00:14:35.080 "claim_type": "exclusive_write", 00:14:35.080 "zoned": false, 00:14:35.080 "supported_io_types": { 00:14:35.080 "read": true, 00:14:35.080 "write": true, 00:14:35.080 "unmap": true, 00:14:35.080 "flush": true, 00:14:35.080 "reset": true, 00:14:35.080 "nvme_admin": false, 00:14:35.080 "nvme_io": false, 00:14:35.080 "nvme_io_md": false, 00:14:35.080 "write_zeroes": true, 00:14:35.080 "zcopy": true, 00:14:35.080 "get_zone_info": false, 00:14:35.080 "zone_management": false, 00:14:35.081 "zone_append": false, 00:14:35.081 "compare": false, 00:14:35.081 "compare_and_write": false, 00:14:35.081 "abort": true, 00:14:35.081 "seek_hole": false, 00:14:35.081 "seek_data": false, 00:14:35.081 "copy": true, 00:14:35.081 "nvme_iov_md": false 00:14:35.081 }, 00:14:35.081 "memory_domains": [ 00:14:35.081 { 00:14:35.081 "dma_device_id": "system", 00:14:35.081 "dma_device_type": 1 00:14:35.081 }, 00:14:35.081 { 00:14:35.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.081 "dma_device_type": 2 00:14:35.081 } 00:14:35.081 ], 00:14:35.081 "driver_specific": {} 00:14:35.081 }' 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.081 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:35.648 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.648 "name": "BaseBdev4", 00:14:35.648 "aliases": [ 00:14:35.648 "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b" 00:14:35.648 ], 00:14:35.648 "product_name": "Malloc disk", 00:14:35.648 "block_size": 512, 00:14:35.648 "num_blocks": 65536, 00:14:35.648 "uuid": "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b", 00:14:35.648 "assigned_rate_limits": { 00:14:35.648 "rw_ios_per_sec": 0, 00:14:35.648 "rw_mbytes_per_sec": 0, 00:14:35.648 "r_mbytes_per_sec": 0, 00:14:35.648 "w_mbytes_per_sec": 0 00:14:35.648 }, 00:14:35.648 "claimed": true, 00:14:35.648 "claim_type": "exclusive_write", 00:14:35.648 "zoned": false, 00:14:35.648 "supported_io_types": { 00:14:35.648 "read": true, 00:14:35.648 "write": true, 00:14:35.648 "unmap": true, 00:14:35.648 "flush": true, 00:14:35.648 "reset": true, 00:14:35.648 "nvme_admin": false, 00:14:35.648 "nvme_io": false, 00:14:35.648 "nvme_io_md": false, 00:14:35.648 "write_zeroes": true, 00:14:35.648 "zcopy": true, 00:14:35.648 "get_zone_info": false, 00:14:35.648 "zone_management": false, 00:14:35.648 "zone_append": false, 00:14:35.648 "compare": false, 00:14:35.648 "compare_and_write": false, 00:14:35.648 "abort": true, 00:14:35.648 "seek_hole": false, 00:14:35.648 "seek_data": false, 00:14:35.648 "copy": true, 00:14:35.648 "nvme_iov_md": false 00:14:35.648 }, 00:14:35.648 "memory_domains": [ 00:14:35.648 { 00:14:35.648 "dma_device_id": "system", 00:14:35.648 "dma_device_type": 1 00:14:35.648 }, 00:14:35.648 { 00:14:35.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.648 "dma_device_type": 2 00:14:35.648 } 00:14:35.648 ], 00:14:35.649 "driver_specific": {} 00:14:35.649 }' 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.649 21:50:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.649 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:35.907 [2024-07-15 21:50:50.864454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.907 [2024-07-15 21:50:50.864482] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.907 [2024-07-15 21:50:50.864505] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.907 21:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.166 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.166 "name": "Existed_Raid", 00:14:36.166 "uuid": "4969897b-42f4-11ef-9f7f-e9a656123a8b", 00:14:36.166 "strip_size_kb": 64, 
00:14:36.166 "state": "offline", 00:14:36.166 "raid_level": "concat", 00:14:36.166 "superblock": true, 00:14:36.166 "num_base_bdevs": 4, 00:14:36.166 "num_base_bdevs_discovered": 3, 00:14:36.166 "num_base_bdevs_operational": 3, 00:14:36.166 "base_bdevs_list": [ 00:14:36.166 { 00:14:36.166 "name": null, 00:14:36.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.166 "is_configured": false, 00:14:36.166 "data_offset": 2048, 00:14:36.166 "data_size": 63488 00:14:36.166 }, 00:14:36.166 { 00:14:36.166 "name": "BaseBdev2", 00:14:36.166 "uuid": "49ec28dc-42f4-11ef-9f7f-e9a656123a8b", 00:14:36.166 "is_configured": true, 00:14:36.166 "data_offset": 2048, 00:14:36.166 "data_size": 63488 00:14:36.166 }, 00:14:36.166 { 00:14:36.166 "name": "BaseBdev3", 00:14:36.166 "uuid": "4aab34ea-42f4-11ef-9f7f-e9a656123a8b", 00:14:36.166 "is_configured": true, 00:14:36.166 "data_offset": 2048, 00:14:36.166 "data_size": 63488 00:14:36.166 }, 00:14:36.166 { 00:14:36.166 "name": "BaseBdev4", 00:14:36.166 "uuid": "4b7d2ca5-42f4-11ef-9f7f-e9a656123a8b", 00:14:36.166 "is_configured": true, 00:14:36.166 "data_offset": 2048, 00:14:36.166 "data_size": 63488 00:14:36.166 } 00:14:36.166 ] 00:14:36.166 }' 00:14:36.166 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.166 21:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.426 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:36.426 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:36.426 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.426 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:36.688 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:36.688 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:36.688 21:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:36.947 [2024-07-15 21:50:51.987496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.947 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:36.947 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:36.947 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.947 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:37.206 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:37.206 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:37.206 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:37.465 [2024-07-15 21:50:52.522009] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.465 21:50:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:37.465 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:37.465 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.465 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:37.725 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:37.725 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:37.725 21:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:37.985 [2024-07-15 21:50:53.088530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:37.985 [2024-07-15 21:50:53.088575] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfeffee34a00 name Existed_Raid, state offline 00:14:37.985 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:37.985 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:37.985 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.985 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:38.248 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.508 BaseBdev2 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:38.508 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.765 21:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.024 [ 
00:14:39.024 { 00:14:39.024 "name": "BaseBdev2", 00:14:39.024 "aliases": [ 00:14:39.024 "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b" 00:14:39.024 ], 00:14:39.024 "product_name": "Malloc disk", 00:14:39.024 "block_size": 512, 00:14:39.024 "num_blocks": 65536, 00:14:39.024 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:39.024 "assigned_rate_limits": { 00:14:39.024 "rw_ios_per_sec": 0, 00:14:39.024 "rw_mbytes_per_sec": 0, 00:14:39.024 "r_mbytes_per_sec": 0, 00:14:39.024 "w_mbytes_per_sec": 0 00:14:39.024 }, 00:14:39.024 "claimed": false, 00:14:39.024 "zoned": false, 00:14:39.024 "supported_io_types": { 00:14:39.024 "read": true, 00:14:39.024 "write": true, 00:14:39.024 "unmap": true, 00:14:39.024 "flush": true, 00:14:39.024 "reset": true, 00:14:39.024 "nvme_admin": false, 00:14:39.024 "nvme_io": false, 00:14:39.024 "nvme_io_md": false, 00:14:39.024 "write_zeroes": true, 00:14:39.024 "zcopy": true, 00:14:39.024 "get_zone_info": false, 00:14:39.024 "zone_management": false, 00:14:39.024 "zone_append": false, 00:14:39.024 "compare": false, 00:14:39.024 "compare_and_write": false, 00:14:39.024 "abort": true, 00:14:39.024 "seek_hole": false, 00:14:39.024 "seek_data": false, 00:14:39.024 "copy": true, 00:14:39.024 "nvme_iov_md": false 00:14:39.024 }, 00:14:39.024 "memory_domains": [ 00:14:39.024 { 00:14:39.024 "dma_device_id": "system", 00:14:39.024 "dma_device_type": 1 00:14:39.024 }, 00:14:39.024 { 00:14:39.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.024 "dma_device_type": 2 00:14:39.024 } 00:14:39.024 ], 00:14:39.024 "driver_specific": {} 00:14:39.024 } 00:14:39.024 ] 00:14:39.024 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:39.024 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:39.024 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:39.024 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.283 BaseBdev3 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:39.283 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.543 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.802 [ 00:14:39.802 { 00:14:39.802 "name": "BaseBdev3", 00:14:39.802 "aliases": [ 00:14:39.802 "4f70f25e-42f4-11ef-9f7f-e9a656123a8b" 00:14:39.802 ], 00:14:39.802 "product_name": "Malloc disk", 00:14:39.802 "block_size": 512, 00:14:39.802 "num_blocks": 65536, 00:14:39.802 "uuid": 
"4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:39.802 "assigned_rate_limits": { 00:14:39.802 "rw_ios_per_sec": 0, 00:14:39.802 "rw_mbytes_per_sec": 0, 00:14:39.802 "r_mbytes_per_sec": 0, 00:14:39.802 "w_mbytes_per_sec": 0 00:14:39.802 }, 00:14:39.802 "claimed": false, 00:14:39.802 "zoned": false, 00:14:39.802 "supported_io_types": { 00:14:39.802 "read": true, 00:14:39.802 "write": true, 00:14:39.802 "unmap": true, 00:14:39.802 "flush": true, 00:14:39.802 "reset": true, 00:14:39.802 "nvme_admin": false, 00:14:39.802 "nvme_io": false, 00:14:39.802 "nvme_io_md": false, 00:14:39.802 "write_zeroes": true, 00:14:39.802 "zcopy": true, 00:14:39.802 "get_zone_info": false, 00:14:39.802 "zone_management": false, 00:14:39.802 "zone_append": false, 00:14:39.802 "compare": false, 00:14:39.802 "compare_and_write": false, 00:14:39.802 "abort": true, 00:14:39.802 "seek_hole": false, 00:14:39.802 "seek_data": false, 00:14:39.802 "copy": true, 00:14:39.802 "nvme_iov_md": false 00:14:39.802 }, 00:14:39.802 "memory_domains": [ 00:14:39.802 { 00:14:39.802 "dma_device_id": "system", 00:14:39.802 "dma_device_type": 1 00:14:39.802 }, 00:14:39.802 { 00:14:39.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.802 "dma_device_type": 2 00:14:39.802 } 00:14:39.802 ], 00:14:39.802 "driver_specific": {} 00:14:39.802 } 00:14:39.802 ] 00:14:39.802 21:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:39.802 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:39.802 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:39.802 21:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.062 BaseBdev4 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:40.062 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.320 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:40.577 [ 00:14:40.577 { 00:14:40.577 "name": "BaseBdev4", 00:14:40.577 "aliases": [ 00:14:40.577 "4fd50c02-42f4-11ef-9f7f-e9a656123a8b" 00:14:40.577 ], 00:14:40.577 "product_name": "Malloc disk", 00:14:40.577 "block_size": 512, 00:14:40.577 "num_blocks": 65536, 00:14:40.577 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:40.577 "assigned_rate_limits": { 00:14:40.577 "rw_ios_per_sec": 0, 00:14:40.577 "rw_mbytes_per_sec": 0, 00:14:40.577 "r_mbytes_per_sec": 0, 00:14:40.577 "w_mbytes_per_sec": 0 00:14:40.577 }, 00:14:40.577 "claimed": false, 00:14:40.577 "zoned": false, 00:14:40.577 
"supported_io_types": { 00:14:40.577 "read": true, 00:14:40.577 "write": true, 00:14:40.578 "unmap": true, 00:14:40.578 "flush": true, 00:14:40.578 "reset": true, 00:14:40.578 "nvme_admin": false, 00:14:40.578 "nvme_io": false, 00:14:40.578 "nvme_io_md": false, 00:14:40.578 "write_zeroes": true, 00:14:40.578 "zcopy": true, 00:14:40.578 "get_zone_info": false, 00:14:40.578 "zone_management": false, 00:14:40.578 "zone_append": false, 00:14:40.578 "compare": false, 00:14:40.578 "compare_and_write": false, 00:14:40.578 "abort": true, 00:14:40.578 "seek_hole": false, 00:14:40.578 "seek_data": false, 00:14:40.578 "copy": true, 00:14:40.578 "nvme_iov_md": false 00:14:40.578 }, 00:14:40.578 "memory_domains": [ 00:14:40.578 { 00:14:40.578 "dma_device_id": "system", 00:14:40.578 "dma_device_type": 1 00:14:40.578 }, 00:14:40.578 { 00:14:40.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.578 "dma_device_type": 2 00:14:40.578 } 00:14:40.578 ], 00:14:40.578 "driver_specific": {} 00:14:40.578 } 00:14:40.578 ] 00:14:40.578 21:50:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:40.578 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:40.578 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:40.578 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:40.835 [2024-07-15 21:50:55.823067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.835 [2024-07-15 21:50:55.823132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.835 [2024-07-15 21:50:55.823156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.835 [2024-07-15 21:50:55.823869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.835 [2024-07-15 21:50:55.823888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.835 21:50:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.093 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.093 "name": "Existed_Raid", 00:14:41.093 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.093 "strip_size_kb": 64, 00:14:41.093 "state": "configuring", 00:14:41.093 "raid_level": "concat", 00:14:41.093 "superblock": true, 00:14:41.093 "num_base_bdevs": 4, 00:14:41.093 "num_base_bdevs_discovered": 3, 00:14:41.093 "num_base_bdevs_operational": 4, 00:14:41.093 "base_bdevs_list": [ 00:14:41.093 { 00:14:41.093 "name": "BaseBdev1", 00:14:41.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.093 "is_configured": false, 00:14:41.093 "data_offset": 0, 00:14:41.093 "data_size": 0 00:14:41.093 }, 00:14:41.093 { 00:14:41.093 "name": "BaseBdev2", 00:14:41.093 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.093 "is_configured": true, 00:14:41.093 "data_offset": 2048, 00:14:41.093 "data_size": 63488 00:14:41.093 }, 00:14:41.093 { 00:14:41.093 "name": "BaseBdev3", 00:14:41.093 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.093 "is_configured": true, 00:14:41.093 "data_offset": 2048, 00:14:41.093 "data_size": 63488 00:14:41.093 }, 00:14:41.093 { 00:14:41.093 "name": "BaseBdev4", 00:14:41.093 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.093 "is_configured": true, 00:14:41.093 "data_offset": 2048, 00:14:41.093 "data_size": 63488 00:14:41.093 } 00:14:41.093 ] 00:14:41.093 }' 00:14:41.093 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.093 21:50:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.350 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:41.608 [2024-07-15 21:50:56.643087] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:41.608 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.866 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.866 "name": "Existed_Raid", 00:14:41.866 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.866 "strip_size_kb": 64, 00:14:41.866 "state": "configuring", 00:14:41.866 "raid_level": "concat", 00:14:41.866 "superblock": true, 00:14:41.866 "num_base_bdevs": 4, 00:14:41.866 "num_base_bdevs_discovered": 2, 00:14:41.866 "num_base_bdevs_operational": 4, 00:14:41.866 "base_bdevs_list": [ 00:14:41.866 { 00:14:41.866 "name": "BaseBdev1", 00:14:41.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.866 "is_configured": false, 00:14:41.866 "data_offset": 0, 00:14:41.866 "data_size": 0 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": null, 00:14:41.866 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.866 "is_configured": false, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": "BaseBdev3", 00:14:41.866 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.866 "is_configured": true, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": "BaseBdev4", 00:14:41.866 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:41.866 "is_configured": true, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 } 00:14:41.866 ] 00:14:41.866 }' 00:14:41.866 21:50:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.866 21:50:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 21:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.124 21:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.381 21:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:42.381 21:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.638 [2024-07-15 21:50:57.663281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.638 BaseBdev1 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:42.638 21:50:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.895 21:50:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.153 [ 00:14:43.153 { 00:14:43.153 "name": "BaseBdev1", 00:14:43.153 "aliases": [ 00:14:43.153 "516266c5-42f4-11ef-9f7f-e9a656123a8b" 00:14:43.153 ], 00:14:43.153 "product_name": "Malloc disk", 00:14:43.153 "block_size": 512, 00:14:43.153 "num_blocks": 65536, 00:14:43.153 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.153 "assigned_rate_limits": { 00:14:43.153 "rw_ios_per_sec": 0, 00:14:43.153 "rw_mbytes_per_sec": 0, 00:14:43.153 "r_mbytes_per_sec": 0, 00:14:43.153 "w_mbytes_per_sec": 0 00:14:43.153 }, 00:14:43.153 "claimed": true, 00:14:43.153 "claim_type": "exclusive_write", 00:14:43.153 "zoned": false, 00:14:43.153 "supported_io_types": { 00:14:43.153 "read": true, 00:14:43.153 "write": true, 00:14:43.153 "unmap": true, 00:14:43.153 "flush": true, 00:14:43.153 "reset": true, 00:14:43.153 "nvme_admin": false, 00:14:43.153 "nvme_io": false, 00:14:43.153 "nvme_io_md": false, 00:14:43.153 "write_zeroes": true, 00:14:43.153 "zcopy": true, 00:14:43.153 "get_zone_info": false, 00:14:43.153 "zone_management": false, 00:14:43.153 "zone_append": false, 00:14:43.153 "compare": false, 00:14:43.153 "compare_and_write": false, 00:14:43.153 "abort": true, 00:14:43.153 "seek_hole": false, 00:14:43.153 "seek_data": false, 00:14:43.153 "copy": true, 00:14:43.153 "nvme_iov_md": false 00:14:43.153 }, 00:14:43.153 "memory_domains": [ 00:14:43.153 { 00:14:43.153 "dma_device_id": "system", 00:14:43.153 "dma_device_type": 1 00:14:43.153 }, 00:14:43.153 { 00:14:43.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.153 "dma_device_type": 2 00:14:43.153 } 00:14:43.153 ], 00:14:43.153 "driver_specific": {} 00:14:43.153 } 00:14:43.153 ] 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.153 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.411 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.411 "name": 
"Existed_Raid", 00:14:43.411 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.411 "strip_size_kb": 64, 00:14:43.411 "state": "configuring", 00:14:43.411 "raid_level": "concat", 00:14:43.411 "superblock": true, 00:14:43.411 "num_base_bdevs": 4, 00:14:43.411 "num_base_bdevs_discovered": 3, 00:14:43.411 "num_base_bdevs_operational": 4, 00:14:43.411 "base_bdevs_list": [ 00:14:43.411 { 00:14:43.411 "name": "BaseBdev1", 00:14:43.411 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.411 "is_configured": true, 00:14:43.411 "data_offset": 2048, 00:14:43.411 "data_size": 63488 00:14:43.411 }, 00:14:43.411 { 00:14:43.411 "name": null, 00:14:43.411 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.411 "is_configured": false, 00:14:43.411 "data_offset": 2048, 00:14:43.411 "data_size": 63488 00:14:43.411 }, 00:14:43.411 { 00:14:43.411 "name": "BaseBdev3", 00:14:43.411 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.411 "is_configured": true, 00:14:43.411 "data_offset": 2048, 00:14:43.411 "data_size": 63488 00:14:43.411 }, 00:14:43.411 { 00:14:43.411 "name": "BaseBdev4", 00:14:43.411 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:43.411 "is_configured": true, 00:14:43.411 "data_offset": 2048, 00:14:43.411 "data_size": 63488 00:14:43.411 } 00:14:43.411 ] 00:14:43.411 }' 00:14:43.411 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.411 21:50:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.669 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.669 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:43.928 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:43.928 21:50:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:44.186 [2024-07-15 21:50:59.135238] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.186 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.187 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.444 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.444 "name": "Existed_Raid", 00:14:44.444 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:44.444 "strip_size_kb": 64, 00:14:44.444 "state": "configuring", 00:14:44.444 "raid_level": "concat", 00:14:44.444 "superblock": true, 00:14:44.444 "num_base_bdevs": 4, 00:14:44.444 "num_base_bdevs_discovered": 2, 00:14:44.444 "num_base_bdevs_operational": 4, 00:14:44.444 "base_bdevs_list": [ 00:14:44.444 { 00:14:44.444 "name": "BaseBdev1", 00:14:44.444 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:44.444 "is_configured": true, 00:14:44.444 "data_offset": 2048, 00:14:44.444 "data_size": 63488 00:14:44.444 }, 00:14:44.444 { 00:14:44.444 "name": null, 00:14:44.444 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:44.445 "is_configured": false, 00:14:44.445 "data_offset": 2048, 00:14:44.445 "data_size": 63488 00:14:44.445 }, 00:14:44.445 { 00:14:44.445 "name": null, 00:14:44.445 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:44.445 "is_configured": false, 00:14:44.445 "data_offset": 2048, 00:14:44.445 "data_size": 63488 00:14:44.445 }, 00:14:44.445 { 00:14:44.445 "name": "BaseBdev4", 00:14:44.445 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:44.445 "is_configured": true, 00:14:44.445 "data_offset": 2048, 00:14:44.445 "data_size": 63488 00:14:44.445 } 00:14:44.445 ] 00:14:44.445 }' 00:14:44.445 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.445 21:50:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.703 21:50:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:44.966 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:44.966 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.235 [2024-07-15 21:51:00.287291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.235 21:51:00 
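This pass exercises hot removal and re-addition on a superblock array: bdev_raid_remove_base_bdev leaves the slot in base_bdevs_list with a null name and is_configured false, and bdev_raid_add_base_bdev re-claims the bdev into the same slot; the @323 check just below confirms slot 2 is configured again. The BaseBdev3 round trip, reduced to a sketch against the same target:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev3             # slot 2 stays listed, name becomes null
    [[ $($rpc bdev_raid_get_bdevs all |
         jq '.[0].base_bdevs_list[2].is_configured') == false ]]
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # re-claim the bdev into slot 2
    [[ $($rpc bdev_raid_get_bdevs all |
         jq '.[0].base_bdevs_list[2].is_configured') == true ]]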
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.235 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.495 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.495 "name": "Existed_Raid", 00:14:45.495 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:45.495 "strip_size_kb": 64, 00:14:45.495 "state": "configuring", 00:14:45.495 "raid_level": "concat", 00:14:45.495 "superblock": true, 00:14:45.495 "num_base_bdevs": 4, 00:14:45.495 "num_base_bdevs_discovered": 3, 00:14:45.495 "num_base_bdevs_operational": 4, 00:14:45.495 "base_bdevs_list": [ 00:14:45.495 { 00:14:45.495 "name": "BaseBdev1", 00:14:45.495 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:45.495 "is_configured": true, 00:14:45.495 "data_offset": 2048, 00:14:45.495 "data_size": 63488 00:14:45.495 }, 00:14:45.495 { 00:14:45.495 "name": null, 00:14:45.495 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:45.495 "is_configured": false, 00:14:45.495 "data_offset": 2048, 00:14:45.495 "data_size": 63488 00:14:45.495 }, 00:14:45.495 { 00:14:45.495 "name": "BaseBdev3", 00:14:45.495 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:45.495 "is_configured": true, 00:14:45.495 "data_offset": 2048, 00:14:45.495 "data_size": 63488 00:14:45.495 }, 00:14:45.495 { 00:14:45.495 "name": "BaseBdev4", 00:14:45.495 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:45.495 "is_configured": true, 00:14:45.495 "data_offset": 2048, 00:14:45.495 "data_size": 63488 00:14:45.495 } 00:14:45.495 ] 00:14:45.495 }' 00:14:45.495 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.495 21:51:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.754 21:51:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.013 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:46.013 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:46.272 [2024-07-15 21:51:01.319359] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.272 21:51:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.272 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.531 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.531 "name": "Existed_Raid", 00:14:46.531 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:46.531 "strip_size_kb": 64, 00:14:46.531 "state": "configuring", 00:14:46.531 "raid_level": "concat", 00:14:46.531 "superblock": true, 00:14:46.531 "num_base_bdevs": 4, 00:14:46.531 "num_base_bdevs_discovered": 2, 00:14:46.531 "num_base_bdevs_operational": 4, 00:14:46.531 "base_bdevs_list": [ 00:14:46.531 { 00:14:46.531 "name": null, 00:14:46.531 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:46.531 "is_configured": false, 00:14:46.531 "data_offset": 2048, 00:14:46.531 "data_size": 63488 00:14:46.531 }, 00:14:46.531 { 00:14:46.531 "name": null, 00:14:46.531 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:46.531 "is_configured": false, 00:14:46.531 "data_offset": 2048, 00:14:46.531 "data_size": 63488 00:14:46.531 }, 00:14:46.531 { 00:14:46.531 "name": "BaseBdev3", 00:14:46.531 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:46.531 "is_configured": true, 00:14:46.531 "data_offset": 2048, 00:14:46.531 "data_size": 63488 00:14:46.531 }, 00:14:46.531 { 00:14:46.531 "name": "BaseBdev4", 00:14:46.531 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:46.531 "is_configured": true, 00:14:46.531 "data_offset": 2048, 00:14:46.531 "data_size": 63488 00:14:46.531 } 00:14:46.531 ] 00:14:46.531 }' 00:14:46.531 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.531 21:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.790 21:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.049 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:47.049 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:47.308 [2024-07-15 21:51:02.330354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.308 
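The verify_raid_bdev_state helper recurring throughout (bdev_raid.sh@116-@128) just dumps the raid bdev and compares a handful of fields against the expectations passed in; the field names are exactly those visible in the dumps above. A bare-bones equivalent for the call being traced here, with error handling and the discovered-bdev count omitted:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<<"$info") == configuring ]]
    [[ $(jq -r .raid_level <<<"$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs <<<"$info") == 4 ]]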
21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.308 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.309 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.309 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.568 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.568 "name": "Existed_Raid", 00:14:47.568 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:47.568 "strip_size_kb": 64, 00:14:47.568 "state": "configuring", 00:14:47.568 "raid_level": "concat", 00:14:47.568 "superblock": true, 00:14:47.568 "num_base_bdevs": 4, 00:14:47.568 "num_base_bdevs_discovered": 3, 00:14:47.568 "num_base_bdevs_operational": 4, 00:14:47.568 "base_bdevs_list": [ 00:14:47.568 { 00:14:47.568 "name": null, 00:14:47.568 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:47.568 "is_configured": false, 00:14:47.568 "data_offset": 2048, 00:14:47.568 "data_size": 63488 00:14:47.568 }, 00:14:47.568 { 00:14:47.568 "name": "BaseBdev2", 00:14:47.568 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:47.568 "is_configured": true, 00:14:47.568 "data_offset": 2048, 00:14:47.568 "data_size": 63488 00:14:47.568 }, 00:14:47.568 { 00:14:47.568 "name": "BaseBdev3", 00:14:47.568 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:47.568 "is_configured": true, 00:14:47.568 "data_offset": 2048, 00:14:47.568 "data_size": 63488 00:14:47.568 }, 00:14:47.568 { 00:14:47.568 "name": "BaseBdev4", 00:14:47.568 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:47.568 "is_configured": true, 00:14:47.568 "data_offset": 2048, 00:14:47.568 "data_size": 63488 00:14:47.568 } 00:14:47.568 ] 00:14:47.568 }' 00:14:47.568 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.568 21:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.826 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.826 21:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.086 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:48.086 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:14:48.086 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.344 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 516266c5-42f4-11ef-9f7f-e9a656123a8b 00:14:48.603 [2024-07-15 21:51:03.638596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:48.603 [2024-07-15 21:51:03.638652] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfeffee34f00 00:14:48.603 [2024-07-15 21:51:03.638658] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:48.603 [2024-07-15 21:51:03.638679] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfeffee97e20 00:14:48.603 [2024-07-15 21:51:03.638748] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfeffee34f00 00:14:48.604 [2024-07-15 21:51:03.638767] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfeffee34f00 00:14:48.604 [2024-07-15 21:51:03.638802] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.604 NewBaseBdev 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:48.604 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.864 21:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.122 [ 00:14:49.122 { 00:14:49.122 "name": "NewBaseBdev", 00:14:49.122 "aliases": [ 00:14:49.122 "516266c5-42f4-11ef-9f7f-e9a656123a8b" 00:14:49.122 ], 00:14:49.122 "product_name": "Malloc disk", 00:14:49.122 "block_size": 512, 00:14:49.122 "num_blocks": 65536, 00:14:49.122 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.122 "assigned_rate_limits": { 00:14:49.122 "rw_ios_per_sec": 0, 00:14:49.122 "rw_mbytes_per_sec": 0, 00:14:49.122 "r_mbytes_per_sec": 0, 00:14:49.122 "w_mbytes_per_sec": 0 00:14:49.122 }, 00:14:49.122 "claimed": true, 00:14:49.122 "claim_type": "exclusive_write", 00:14:49.122 "zoned": false, 00:14:49.122 "supported_io_types": { 00:14:49.122 "read": true, 00:14:49.123 "write": true, 00:14:49.123 "unmap": true, 00:14:49.123 "flush": true, 00:14:49.123 "reset": true, 00:14:49.123 "nvme_admin": false, 00:14:49.123 "nvme_io": false, 00:14:49.123 "nvme_io_md": false, 00:14:49.123 "write_zeroes": true, 00:14:49.123 "zcopy": true, 00:14:49.123 "get_zone_info": false, 00:14:49.123 "zone_management": false, 00:14:49.123 "zone_append": false, 
00:14:49.123 "compare": false, 00:14:49.123 "compare_and_write": false, 00:14:49.123 "abort": true, 00:14:49.123 "seek_hole": false, 00:14:49.123 "seek_data": false, 00:14:49.123 "copy": true, 00:14:49.123 "nvme_iov_md": false 00:14:49.123 }, 00:14:49.123 "memory_domains": [ 00:14:49.123 { 00:14:49.123 "dma_device_id": "system", 00:14:49.123 "dma_device_type": 1 00:14:49.123 }, 00:14:49.123 { 00:14:49.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.123 "dma_device_type": 2 00:14:49.123 } 00:14:49.123 ], 00:14:49.123 "driver_specific": {} 00:14:49.123 } 00:14:49.123 ] 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.123 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.381 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.381 "name": "Existed_Raid", 00:14:49.381 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.381 "strip_size_kb": 64, 00:14:49.381 "state": "online", 00:14:49.381 "raid_level": "concat", 00:14:49.381 "superblock": true, 00:14:49.381 "num_base_bdevs": 4, 00:14:49.381 "num_base_bdevs_discovered": 4, 00:14:49.381 "num_base_bdevs_operational": 4, 00:14:49.381 "base_bdevs_list": [ 00:14:49.381 { 00:14:49.381 "name": "NewBaseBdev", 00:14:49.381 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.381 "is_configured": true, 00:14:49.381 "data_offset": 2048, 00:14:49.381 "data_size": 63488 00:14:49.381 }, 00:14:49.381 { 00:14:49.381 "name": "BaseBdev2", 00:14:49.381 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.381 "is_configured": true, 00:14:49.381 "data_offset": 2048, 00:14:49.381 "data_size": 63488 00:14:49.381 }, 00:14:49.381 { 00:14:49.381 "name": "BaseBdev3", 00:14:49.381 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.381 "is_configured": true, 00:14:49.381 "data_offset": 2048, 00:14:49.381 "data_size": 63488 00:14:49.381 }, 00:14:49.381 { 00:14:49.381 "name": "BaseBdev4", 00:14:49.381 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.381 "is_configured": true, 00:14:49.381 "data_offset": 
2048, 00:14:49.381 "data_size": 63488 00:14:49.381 } 00:14:49.381 ] 00:14:49.381 }' 00:14:49.382 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.382 21:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:49.641 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:49.901 [2024-07-15 21:51:04.834549] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.901 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:49.901 "name": "Existed_Raid", 00:14:49.901 "aliases": [ 00:14:49.901 "5049a059-42f4-11ef-9f7f-e9a656123a8b" 00:14:49.901 ], 00:14:49.901 "product_name": "Raid Volume", 00:14:49.901 "block_size": 512, 00:14:49.901 "num_blocks": 253952, 00:14:49.901 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.901 "assigned_rate_limits": { 00:14:49.901 "rw_ios_per_sec": 0, 00:14:49.901 "rw_mbytes_per_sec": 0, 00:14:49.901 "r_mbytes_per_sec": 0, 00:14:49.901 "w_mbytes_per_sec": 0 00:14:49.901 }, 00:14:49.901 "claimed": false, 00:14:49.901 "zoned": false, 00:14:49.901 "supported_io_types": { 00:14:49.901 "read": true, 00:14:49.901 "write": true, 00:14:49.901 "unmap": true, 00:14:49.901 "flush": true, 00:14:49.901 "reset": true, 00:14:49.901 "nvme_admin": false, 00:14:49.901 "nvme_io": false, 00:14:49.901 "nvme_io_md": false, 00:14:49.901 "write_zeroes": true, 00:14:49.901 "zcopy": false, 00:14:49.902 "get_zone_info": false, 00:14:49.902 "zone_management": false, 00:14:49.902 "zone_append": false, 00:14:49.902 "compare": false, 00:14:49.902 "compare_and_write": false, 00:14:49.902 "abort": false, 00:14:49.902 "seek_hole": false, 00:14:49.902 "seek_data": false, 00:14:49.902 "copy": false, 00:14:49.902 "nvme_iov_md": false 00:14:49.902 }, 00:14:49.902 "memory_domains": [ 00:14:49.902 { 00:14:49.902 "dma_device_id": "system", 00:14:49.902 "dma_device_type": 1 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.902 "dma_device_type": 2 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "system", 00:14:49.902 "dma_device_type": 1 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.902 "dma_device_type": 2 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "system", 00:14:49.902 "dma_device_type": 1 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.902 "dma_device_type": 2 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "system", 00:14:49.902 "dma_device_type": 1 
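[Editor's note: annotation, not part of the captured output.] The JSON above is what verify_raid_bdev_state asserts on: it selects the raid bdev by name out of bdev_raid_get_bdevs and compares state, level, strip size, and base-bdev counts against the expected values. A condensed, hand-rolled equivalent of that check (a sketch; the real helper in bdev_raid.sh does more bookkeeping):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 4 ]]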
00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.902 "dma_device_type": 2 00:14:49.902 } 00:14:49.902 ], 00:14:49.902 "driver_specific": { 00:14:49.902 "raid": { 00:14:49.902 "uuid": "5049a059-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.902 "strip_size_kb": 64, 00:14:49.902 "state": "online", 00:14:49.902 "raid_level": "concat", 00:14:49.902 "superblock": true, 00:14:49.902 "num_base_bdevs": 4, 00:14:49.902 "num_base_bdevs_discovered": 4, 00:14:49.902 "num_base_bdevs_operational": 4, 00:14:49.902 "base_bdevs_list": [ 00:14:49.902 { 00:14:49.902 "name": "NewBaseBdev", 00:14:49.902 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.902 "is_configured": true, 00:14:49.902 "data_offset": 2048, 00:14:49.902 "data_size": 63488 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "name": "BaseBdev2", 00:14:49.902 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.902 "is_configured": true, 00:14:49.902 "data_offset": 2048, 00:14:49.902 "data_size": 63488 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "name": "BaseBdev3", 00:14:49.902 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.902 "is_configured": true, 00:14:49.902 "data_offset": 2048, 00:14:49.902 "data_size": 63488 00:14:49.902 }, 00:14:49.902 { 00:14:49.902 "name": "BaseBdev4", 00:14:49.902 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:49.902 "is_configured": true, 00:14:49.902 "data_offset": 2048, 00:14:49.902 "data_size": 63488 00:14:49.902 } 00:14:49.902 ] 00:14:49.902 } 00:14:49.902 } 00:14:49.902 }' 00:14:49.902 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.902 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:49.902 BaseBdev2 00:14:49.902 BaseBdev3 00:14:49.902 BaseBdev4' 00:14:49.902 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:49.902 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:49.902 21:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.161 "name": "NewBaseBdev", 00:14:50.161 "aliases": [ 00:14:50.161 "516266c5-42f4-11ef-9f7f-e9a656123a8b" 00:14:50.161 ], 00:14:50.161 "product_name": "Malloc disk", 00:14:50.161 "block_size": 512, 00:14:50.161 "num_blocks": 65536, 00:14:50.161 "uuid": "516266c5-42f4-11ef-9f7f-e9a656123a8b", 00:14:50.161 "assigned_rate_limits": { 00:14:50.161 "rw_ios_per_sec": 0, 00:14:50.161 "rw_mbytes_per_sec": 0, 00:14:50.161 "r_mbytes_per_sec": 0, 00:14:50.161 "w_mbytes_per_sec": 0 00:14:50.161 }, 00:14:50.161 "claimed": true, 00:14:50.161 "claim_type": "exclusive_write", 00:14:50.161 "zoned": false, 00:14:50.161 "supported_io_types": { 00:14:50.161 "read": true, 00:14:50.161 "write": true, 00:14:50.161 "unmap": true, 00:14:50.161 "flush": true, 00:14:50.161 "reset": true, 00:14:50.161 "nvme_admin": false, 00:14:50.161 "nvme_io": false, 00:14:50.161 "nvme_io_md": false, 00:14:50.161 "write_zeroes": true, 00:14:50.161 "zcopy": true, 00:14:50.161 "get_zone_info": false, 00:14:50.161 "zone_management": false, 00:14:50.161 "zone_append": false, 00:14:50.161 "compare": false, 00:14:50.161 "compare_and_write": false, 
00:14:50.161 "abort": true, 00:14:50.161 "seek_hole": false, 00:14:50.161 "seek_data": false, 00:14:50.161 "copy": true, 00:14:50.161 "nvme_iov_md": false 00:14:50.161 }, 00:14:50.161 "memory_domains": [ 00:14:50.161 { 00:14:50.161 "dma_device_id": "system", 00:14:50.161 "dma_device_type": 1 00:14:50.161 }, 00:14:50.161 { 00:14:50.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.161 "dma_device_type": 2 00:14:50.161 } 00:14:50.161 ], 00:14:50.161 "driver_specific": {} 00:14:50.161 }' 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:50.161 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.420 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.420 "name": "BaseBdev2", 00:14:50.420 "aliases": [ 00:14:50.420 "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b" 00:14:50.420 ], 00:14:50.420 "product_name": "Malloc disk", 00:14:50.420 "block_size": 512, 00:14:50.420 "num_blocks": 65536, 00:14:50.420 "uuid": "4efbc2c3-42f4-11ef-9f7f-e9a656123a8b", 00:14:50.420 "assigned_rate_limits": { 00:14:50.420 "rw_ios_per_sec": 0, 00:14:50.421 "rw_mbytes_per_sec": 0, 00:14:50.421 "r_mbytes_per_sec": 0, 00:14:50.421 "w_mbytes_per_sec": 0 00:14:50.421 }, 00:14:50.421 "claimed": true, 00:14:50.421 "claim_type": "exclusive_write", 00:14:50.421 "zoned": false, 00:14:50.421 "supported_io_types": { 00:14:50.421 "read": true, 00:14:50.421 "write": true, 00:14:50.421 "unmap": true, 00:14:50.421 "flush": true, 00:14:50.421 "reset": true, 00:14:50.421 "nvme_admin": false, 00:14:50.421 "nvme_io": false, 00:14:50.421 "nvme_io_md": false, 00:14:50.421 "write_zeroes": true, 00:14:50.421 "zcopy": true, 00:14:50.421 "get_zone_info": false, 00:14:50.421 "zone_management": false, 00:14:50.421 "zone_append": false, 00:14:50.421 "compare": false, 00:14:50.421 "compare_and_write": false, 00:14:50.421 "abort": true, 00:14:50.421 "seek_hole": false, 00:14:50.421 "seek_data": false, 00:14:50.421 "copy": true, 00:14:50.421 "nvme_iov_md": 
false 00:14:50.421 }, 00:14:50.421 "memory_domains": [ 00:14:50.421 { 00:14:50.421 "dma_device_id": "system", 00:14:50.421 "dma_device_type": 1 00:14:50.421 }, 00:14:50.421 { 00:14:50.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.421 "dma_device_type": 2 00:14:50.421 } 00:14:50.421 ], 00:14:50.421 "driver_specific": {} 00:14:50.421 }' 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:50.421 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.680 "name": "BaseBdev3", 00:14:50.680 "aliases": [ 00:14:50.680 "4f70f25e-42f4-11ef-9f7f-e9a656123a8b" 00:14:50.680 ], 00:14:50.680 "product_name": "Malloc disk", 00:14:50.680 "block_size": 512, 00:14:50.680 "num_blocks": 65536, 00:14:50.680 "uuid": "4f70f25e-42f4-11ef-9f7f-e9a656123a8b", 00:14:50.680 "assigned_rate_limits": { 00:14:50.680 "rw_ios_per_sec": 0, 00:14:50.680 "rw_mbytes_per_sec": 0, 00:14:50.680 "r_mbytes_per_sec": 0, 00:14:50.680 "w_mbytes_per_sec": 0 00:14:50.680 }, 00:14:50.680 "claimed": true, 00:14:50.680 "claim_type": "exclusive_write", 00:14:50.680 "zoned": false, 00:14:50.680 "supported_io_types": { 00:14:50.680 "read": true, 00:14:50.680 "write": true, 00:14:50.680 "unmap": true, 00:14:50.680 "flush": true, 00:14:50.680 "reset": true, 00:14:50.680 "nvme_admin": false, 00:14:50.680 "nvme_io": false, 00:14:50.680 "nvme_io_md": false, 00:14:50.680 "write_zeroes": true, 00:14:50.680 "zcopy": true, 00:14:50.680 "get_zone_info": false, 00:14:50.680 "zone_management": false, 00:14:50.680 "zone_append": false, 00:14:50.680 "compare": false, 00:14:50.680 "compare_and_write": false, 00:14:50.680 "abort": true, 00:14:50.680 "seek_hole": false, 00:14:50.680 "seek_data": false, 00:14:50.680 "copy": true, 00:14:50.680 "nvme_iov_md": false 00:14:50.680 }, 00:14:50.680 "memory_domains": [ 00:14:50.680 { 00:14:50.680 "dma_device_id": "system", 00:14:50.680 "dma_device_type": 1 
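[Editor's note: annotation, not part of the captured output.] The four jq probes repeating above (and again for BaseBdev3 and BaseBdev4 below) are one iteration of verify_raid_bdev_properties: every configured base bdev must report a 512 B block size and no metadata or DIF. Roughly, with the four names the helper actually derives from base_bdevs_list hard-coded (a sketch, not the verbatim helper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<< "$info") == 512  ]]
      [[ $(jq .md_size       <<< "$info") == null ]]
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type      <<< "$info") == null ]]
    done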
00:14:50.680 }, 00:14:50.680 { 00:14:50.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.680 "dma_device_type": 2 00:14:50.680 } 00:14:50.680 ], 00:14:50.680 "driver_specific": {} 00:14:50.680 }' 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.680 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:50.940 21:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.940 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.940 "name": "BaseBdev4", 00:14:50.940 "aliases": [ 00:14:50.940 "4fd50c02-42f4-11ef-9f7f-e9a656123a8b" 00:14:50.940 ], 00:14:50.940 "product_name": "Malloc disk", 00:14:50.940 "block_size": 512, 00:14:50.940 "num_blocks": 65536, 00:14:50.940 "uuid": "4fd50c02-42f4-11ef-9f7f-e9a656123a8b", 00:14:50.940 "assigned_rate_limits": { 00:14:50.940 "rw_ios_per_sec": 0, 00:14:50.940 "rw_mbytes_per_sec": 0, 00:14:50.940 "r_mbytes_per_sec": 0, 00:14:50.940 "w_mbytes_per_sec": 0 00:14:50.940 }, 00:14:50.940 "claimed": true, 00:14:50.940 "claim_type": "exclusive_write", 00:14:50.940 "zoned": false, 00:14:50.940 "supported_io_types": { 00:14:50.940 "read": true, 00:14:50.940 "write": true, 00:14:50.940 "unmap": true, 00:14:50.940 "flush": true, 00:14:50.940 "reset": true, 00:14:50.940 "nvme_admin": false, 00:14:50.940 "nvme_io": false, 00:14:50.940 "nvme_io_md": false, 00:14:50.940 "write_zeroes": true, 00:14:50.940 "zcopy": true, 00:14:50.940 "get_zone_info": false, 00:14:50.940 "zone_management": false, 00:14:50.940 "zone_append": false, 00:14:50.940 "compare": false, 00:14:50.940 "compare_and_write": false, 00:14:50.940 "abort": true, 00:14:50.940 "seek_hole": false, 00:14:50.940 "seek_data": false, 00:14:50.940 "copy": true, 00:14:50.940 "nvme_iov_md": false 00:14:50.940 }, 00:14:50.940 "memory_domains": [ 00:14:50.940 { 00:14:50.940 "dma_device_id": "system", 00:14:50.940 "dma_device_type": 1 00:14:50.940 }, 00:14:50.940 { 00:14:50.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.940 "dma_device_type": 2 00:14:50.940 } 00:14:50.940 ], 
00:14:50.940 "driver_specific": {} 00:14:50.940 }' 00:14:50.940 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.199 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:51.458 [2024-07-15 21:51:06.394581] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.458 [2024-07-15 21:51:06.394618] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.458 [2024-07-15 21:51:06.394661] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.458 [2024-07-15 21:51:06.394675] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.458 [2024-07-15 21:51:06.394678] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfeffee34f00 name Existed_Raid, state offline 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61466 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 61466 ']' 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 61466 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 61466 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:14:51.458 killing process with pid 61466 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 61466' 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 61466 00:14:51.458 [2024-07-15 21:51:06.421522] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 61466 00:14:51.458 [2024-07-15 21:51:06.447485] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:51.458 ************************************ 00:14:51.458 END TEST raid_state_function_test_sb 00:14:51.458 ************************************ 00:14:51.458 00:14:51.458 real 0m26.572s 00:14:51.458 user 0m48.315s 00:14:51.458 sys 0m4.015s 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:51.458 21:51:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 21:51:06 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:14:51.717 21:51:06 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:51.717 21:51:06 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:14:51.717 21:51:06 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:51.717 21:51:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.717 ************************************ 00:14:51.717 START TEST raid_superblock_test 00:14:51.717 ************************************ 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test concat 4 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62280 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62280 /var/tmp/spdk-raid.sock 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- 
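[Editor's note: annotation, not part of the captured output.] raid_superblock_test drives a standalone bdev_svc app over a private RPC socket, with bdev_raid debug logging enabled; the launch is logged in the surrounding xtrace. Reduced to its essentials (a sketch; waitforlisten is the autotest_common.sh helper that polls the socket before any bdev_* RPC is issued):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # then issue bdev_* RPCs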
common/autotest_common.sh@823 -- # '[' -z 62280 ']' 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:51.717 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:51.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:51.718 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:51.718 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:51.718 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:51.718 21:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 [2024-07-15 21:51:06.685406] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:51.718 [2024-07-15 21:51:06.685657] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:52.654 EAL: TSC is not safe to use in SMP mode 00:14:52.654 EAL: TSC is not invariant 00:14:52.654 [2024-07-15 21:51:07.519705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.654 [2024-07-15 21:51:07.611630] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:52.654 [2024-07-15 21:51:07.614033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.654 [2024-07-15 21:51:07.614889] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.654 [2024-07-15 21:51:07.614919] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.655 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:52.914 malloc1 00:14:52.914 21:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.178 [2024-07-15 21:51:08.146449] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.178 [2024-07-15 21:51:08.146540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.178 [2024-07-15 21:51:08.146558] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c34780 00:14:53.178 [2024-07-15 21:51:08.146566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.178 [2024-07-15 21:51:08.147754] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.178 [2024-07-15 21:51:08.147783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.178 pt1 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.178 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:53.476 malloc2 00:14:53.476 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.735 [2024-07-15 21:51:08.722517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.735 [2024-07-15 21:51:08.722621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.735 [2024-07-15 21:51:08.722633] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c34c80 00:14:53.735 [2024-07-15 21:51:08.722648] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.735 [2024-07-15 21:51:08.723524] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.735 [2024-07-15 21:51:08.723561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.735 pt2 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_pt+=($bdev_pt) 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.735 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:53.994 malloc3 00:14:53.994 21:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:54.253 [2024-07-15 21:51:09.198522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:54.253 [2024-07-15 21:51:09.198599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.253 [2024-07-15 21:51:09.198627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c35180 00:14:54.253 [2024-07-15 21:51:09.198635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.253 [2024-07-15 21:51:09.199410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.253 [2024-07-15 21:51:09.199462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:54.253 pt3 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.253 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:54.512 malloc4 00:14:54.512 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:54.771 [2024-07-15 21:51:09.734524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:54.771 [2024-07-15 21:51:09.734615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.771 [2024-07-15 21:51:09.734627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c35680 00:14:54.771 [2024-07-15 21:51:09.734635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.771 [2024-07-15 21:51:09.735537] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.771 [2024-07-15 21:51:09.735569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:54.771 pt4 00:14:54.771 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:54.771 21:51:09 
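[Editor's note: annotation, not part of the captured output.] Each base bdev in this test is a 32 MiB, 512 B-block malloc disk (65536 blocks, matching the dumps) wrapped in a passthru bdev with a fixed, predictable UUID. The loop the xtrace above walks through, condensed:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done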
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:54.771 21:51:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:55.030 [2024-07-15 21:51:09.998548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.030 [2024-07-15 21:51:09.999207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.030 [2024-07-15 21:51:09.999222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:55.030 [2024-07-15 21:51:09.999233] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:55.030 [2024-07-15 21:51:09.999290] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x31c858c35900 00:14:55.030 [2024-07-15 21:51:09.999297] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:55.030 [2024-07-15 21:51:09.999381] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31c858c97e20 00:14:55.030 [2024-07-15 21:51:09.999456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x31c858c35900 00:14:55.030 [2024-07-15 21:51:09.999461] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x31c858c35900 00:14:55.030 [2024-07-15 21:51:09.999488] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.030 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.288 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.288 "name": "raid_bdev1", 00:14:55.288 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:14:55.288 "strip_size_kb": 64, 00:14:55.288 "state": "online", 00:14:55.288 "raid_level": "concat", 00:14:55.288 "superblock": true, 00:14:55.288 "num_base_bdevs": 4, 00:14:55.288 "num_base_bdevs_discovered": 4, 00:14:55.288 "num_base_bdevs_operational": 4, 00:14:55.288 "base_bdevs_list": [ 00:14:55.288 { 00:14:55.288 "name": "pt1", 00:14:55.288 "uuid": 
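[Editor's note: annotation, not part of the captured output.] The four passthru bdevs are then assembled into the raid under test; -z sets the strip size in KB (strip_size_kb in the dump), -r the raid level, and -s, the flag this test adds over raid_state_function_test_sb, requests an on-disk superblock (confirmed by "superblock": true in the JSON that follows):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s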
"00000000-0000-0000-0000-000000000001", 00:14:55.288 "is_configured": true, 00:14:55.288 "data_offset": 2048, 00:14:55.288 "data_size": 63488 00:14:55.288 }, 00:14:55.288 { 00:14:55.288 "name": "pt2", 00:14:55.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.288 "is_configured": true, 00:14:55.288 "data_offset": 2048, 00:14:55.288 "data_size": 63488 00:14:55.288 }, 00:14:55.288 { 00:14:55.288 "name": "pt3", 00:14:55.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.288 "is_configured": true, 00:14:55.288 "data_offset": 2048, 00:14:55.288 "data_size": 63488 00:14:55.288 }, 00:14:55.288 { 00:14:55.288 "name": "pt4", 00:14:55.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.288 "is_configured": true, 00:14:55.288 "data_offset": 2048, 00:14:55.288 "data_size": 63488 00:14:55.288 } 00:14:55.288 ] 00:14:55.288 }' 00:14:55.288 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.288 21:51:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:55.547 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:55.807 [2024-07-15 21:51:10.786605] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:55.807 "name": "raid_bdev1", 00:14:55.807 "aliases": [ 00:14:55.807 "58bca1ee-42f4-11ef-9f7f-e9a656123a8b" 00:14:55.807 ], 00:14:55.807 "product_name": "Raid Volume", 00:14:55.807 "block_size": 512, 00:14:55.807 "num_blocks": 253952, 00:14:55.807 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:14:55.807 "assigned_rate_limits": { 00:14:55.807 "rw_ios_per_sec": 0, 00:14:55.807 "rw_mbytes_per_sec": 0, 00:14:55.807 "r_mbytes_per_sec": 0, 00:14:55.807 "w_mbytes_per_sec": 0 00:14:55.807 }, 00:14:55.807 "claimed": false, 00:14:55.807 "zoned": false, 00:14:55.807 "supported_io_types": { 00:14:55.807 "read": true, 00:14:55.807 "write": true, 00:14:55.807 "unmap": true, 00:14:55.807 "flush": true, 00:14:55.807 "reset": true, 00:14:55.807 "nvme_admin": false, 00:14:55.807 "nvme_io": false, 00:14:55.807 "nvme_io_md": false, 00:14:55.807 "write_zeroes": true, 00:14:55.807 "zcopy": false, 00:14:55.807 "get_zone_info": false, 00:14:55.807 "zone_management": false, 00:14:55.807 "zone_append": false, 00:14:55.807 "compare": false, 00:14:55.807 "compare_and_write": false, 00:14:55.807 "abort": false, 00:14:55.807 "seek_hole": false, 00:14:55.807 "seek_data": false, 00:14:55.807 "copy": false, 00:14:55.807 "nvme_iov_md": false 00:14:55.807 }, 00:14:55.807 "memory_domains": [ 00:14:55.807 { 00:14:55.807 "dma_device_id": "system", 00:14:55.807 
"dma_device_type": 1 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.807 "dma_device_type": 2 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "system", 00:14:55.807 "dma_device_type": 1 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.807 "dma_device_type": 2 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "system", 00:14:55.807 "dma_device_type": 1 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.807 "dma_device_type": 2 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "system", 00:14:55.807 "dma_device_type": 1 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.807 "dma_device_type": 2 00:14:55.807 } 00:14:55.807 ], 00:14:55.807 "driver_specific": { 00:14:55.807 "raid": { 00:14:55.807 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:14:55.807 "strip_size_kb": 64, 00:14:55.807 "state": "online", 00:14:55.807 "raid_level": "concat", 00:14:55.807 "superblock": true, 00:14:55.807 "num_base_bdevs": 4, 00:14:55.807 "num_base_bdevs_discovered": 4, 00:14:55.807 "num_base_bdevs_operational": 4, 00:14:55.807 "base_bdevs_list": [ 00:14:55.807 { 00:14:55.807 "name": "pt1", 00:14:55.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.807 "is_configured": true, 00:14:55.807 "data_offset": 2048, 00:14:55.807 "data_size": 63488 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "name": "pt2", 00:14:55.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.807 "is_configured": true, 00:14:55.807 "data_offset": 2048, 00:14:55.807 "data_size": 63488 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "name": "pt3", 00:14:55.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.807 "is_configured": true, 00:14:55.807 "data_offset": 2048, 00:14:55.807 "data_size": 63488 00:14:55.807 }, 00:14:55.807 { 00:14:55.807 "name": "pt4", 00:14:55.807 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.807 "is_configured": true, 00:14:55.807 "data_offset": 2048, 00:14:55.807 "data_size": 63488 00:14:55.807 } 00:14:55.807 ] 00:14:55.807 } 00:14:55.807 } 00:14:55.807 }' 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:55.807 pt2 00:14:55.807 pt3 00:14:55.807 pt4' 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:55.807 21:51:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.066 "name": "pt1", 00:14:56.066 "aliases": [ 00:14:56.066 "00000000-0000-0000-0000-000000000001" 00:14:56.066 ], 00:14:56.066 "product_name": "passthru", 00:14:56.066 "block_size": 512, 00:14:56.066 "num_blocks": 65536, 00:14:56.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.066 "assigned_rate_limits": { 00:14:56.066 "rw_ios_per_sec": 0, 00:14:56.066 "rw_mbytes_per_sec": 0, 00:14:56.066 "r_mbytes_per_sec": 0, 00:14:56.066 "w_mbytes_per_sec": 0 00:14:56.066 }, 00:14:56.066 "claimed": true, 00:14:56.066 
"claim_type": "exclusive_write", 00:14:56.066 "zoned": false, 00:14:56.066 "supported_io_types": { 00:14:56.066 "read": true, 00:14:56.066 "write": true, 00:14:56.066 "unmap": true, 00:14:56.066 "flush": true, 00:14:56.066 "reset": true, 00:14:56.066 "nvme_admin": false, 00:14:56.066 "nvme_io": false, 00:14:56.066 "nvme_io_md": false, 00:14:56.066 "write_zeroes": true, 00:14:56.066 "zcopy": true, 00:14:56.066 "get_zone_info": false, 00:14:56.066 "zone_management": false, 00:14:56.066 "zone_append": false, 00:14:56.066 "compare": false, 00:14:56.066 "compare_and_write": false, 00:14:56.066 "abort": true, 00:14:56.066 "seek_hole": false, 00:14:56.066 "seek_data": false, 00:14:56.066 "copy": true, 00:14:56.066 "nvme_iov_md": false 00:14:56.066 }, 00:14:56.066 "memory_domains": [ 00:14:56.066 { 00:14:56.066 "dma_device_id": "system", 00:14:56.066 "dma_device_type": 1 00:14:56.066 }, 00:14:56.066 { 00:14:56.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.066 "dma_device_type": 2 00:14:56.066 } 00:14:56.066 ], 00:14:56.066 "driver_specific": { 00:14:56.066 "passthru": { 00:14:56.066 "name": "pt1", 00:14:56.066 "base_bdev_name": "malloc1" 00:14:56.066 } 00:14:56.066 } 00:14:56.066 }' 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.066 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.067 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.067 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.067 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.067 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.325 "name": "pt2", 00:14:56.325 "aliases": [ 00:14:56.325 "00000000-0000-0000-0000-000000000002" 00:14:56.325 ], 00:14:56.325 "product_name": "passthru", 00:14:56.325 "block_size": 512, 00:14:56.325 "num_blocks": 65536, 00:14:56.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.325 "assigned_rate_limits": { 00:14:56.325 "rw_ios_per_sec": 0, 00:14:56.325 "rw_mbytes_per_sec": 0, 00:14:56.325 "r_mbytes_per_sec": 0, 00:14:56.325 "w_mbytes_per_sec": 0 00:14:56.325 }, 00:14:56.325 "claimed": true, 00:14:56.325 "claim_type": "exclusive_write", 00:14:56.325 "zoned": false, 00:14:56.325 "supported_io_types": { 00:14:56.325 "read": true, 00:14:56.325 "write": true, 
00:14:56.325 "unmap": true, 00:14:56.325 "flush": true, 00:14:56.325 "reset": true, 00:14:56.325 "nvme_admin": false, 00:14:56.325 "nvme_io": false, 00:14:56.325 "nvme_io_md": false, 00:14:56.325 "write_zeroes": true, 00:14:56.325 "zcopy": true, 00:14:56.325 "get_zone_info": false, 00:14:56.325 "zone_management": false, 00:14:56.325 "zone_append": false, 00:14:56.325 "compare": false, 00:14:56.325 "compare_and_write": false, 00:14:56.325 "abort": true, 00:14:56.325 "seek_hole": false, 00:14:56.325 "seek_data": false, 00:14:56.325 "copy": true, 00:14:56.325 "nvme_iov_md": false 00:14:56.325 }, 00:14:56.325 "memory_domains": [ 00:14:56.325 { 00:14:56.325 "dma_device_id": "system", 00:14:56.325 "dma_device_type": 1 00:14:56.325 }, 00:14:56.325 { 00:14:56.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.325 "dma_device_type": 2 00:14:56.325 } 00:14:56.325 ], 00:14:56.325 "driver_specific": { 00:14:56.325 "passthru": { 00:14:56.325 "name": "pt2", 00:14:56.325 "base_bdev_name": "malloc2" 00:14:56.325 } 00:14:56.325 } 00:14:56.325 }' 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.325 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:56.584 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.843 "name": "pt3", 00:14:56.843 "aliases": [ 00:14:56.843 "00000000-0000-0000-0000-000000000003" 00:14:56.843 ], 00:14:56.843 "product_name": "passthru", 00:14:56.843 "block_size": 512, 00:14:56.843 "num_blocks": 65536, 00:14:56.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.843 "assigned_rate_limits": { 00:14:56.843 "rw_ios_per_sec": 0, 00:14:56.843 "rw_mbytes_per_sec": 0, 00:14:56.843 "r_mbytes_per_sec": 0, 00:14:56.843 "w_mbytes_per_sec": 0 00:14:56.843 }, 00:14:56.843 "claimed": true, 00:14:56.843 "claim_type": "exclusive_write", 00:14:56.843 "zoned": false, 00:14:56.843 "supported_io_types": { 00:14:56.843 "read": true, 00:14:56.843 "write": true, 00:14:56.843 "unmap": true, 00:14:56.843 "flush": true, 00:14:56.843 "reset": true, 00:14:56.843 "nvme_admin": false, 00:14:56.843 "nvme_io": false, 
00:14:56.843 "nvme_io_md": false, 00:14:56.843 "write_zeroes": true, 00:14:56.843 "zcopy": true, 00:14:56.843 "get_zone_info": false, 00:14:56.843 "zone_management": false, 00:14:56.843 "zone_append": false, 00:14:56.843 "compare": false, 00:14:56.843 "compare_and_write": false, 00:14:56.843 "abort": true, 00:14:56.843 "seek_hole": false, 00:14:56.843 "seek_data": false, 00:14:56.843 "copy": true, 00:14:56.843 "nvme_iov_md": false 00:14:56.843 }, 00:14:56.843 "memory_domains": [ 00:14:56.843 { 00:14:56.843 "dma_device_id": "system", 00:14:56.843 "dma_device_type": 1 00:14:56.843 }, 00:14:56.843 { 00:14:56.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.843 "dma_device_type": 2 00:14:56.843 } 00:14:56.843 ], 00:14:56.843 "driver_specific": { 00:14:56.843 "passthru": { 00:14:56.843 "name": "pt3", 00:14:56.843 "base_bdev_name": "malloc3" 00:14:56.843 } 00:14:56.843 } 00:14:56.843 }' 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:56.843 21:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.106 "name": "pt4", 00:14:57.106 "aliases": [ 00:14:57.106 "00000000-0000-0000-0000-000000000004" 00:14:57.106 ], 00:14:57.106 "product_name": "passthru", 00:14:57.106 "block_size": 512, 00:14:57.106 "num_blocks": 65536, 00:14:57.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.106 "assigned_rate_limits": { 00:14:57.106 "rw_ios_per_sec": 0, 00:14:57.106 "rw_mbytes_per_sec": 0, 00:14:57.106 "r_mbytes_per_sec": 0, 00:14:57.106 "w_mbytes_per_sec": 0 00:14:57.106 }, 00:14:57.106 "claimed": true, 00:14:57.106 "claim_type": "exclusive_write", 00:14:57.106 "zoned": false, 00:14:57.106 "supported_io_types": { 00:14:57.106 "read": true, 00:14:57.106 "write": true, 00:14:57.106 "unmap": true, 00:14:57.106 "flush": true, 00:14:57.106 "reset": true, 00:14:57.106 "nvme_admin": false, 00:14:57.106 "nvme_io": false, 00:14:57.106 "nvme_io_md": false, 00:14:57.106 "write_zeroes": true, 00:14:57.106 "zcopy": true, 00:14:57.106 "get_zone_info": false, 00:14:57.106 
"zone_management": false, 00:14:57.106 "zone_append": false, 00:14:57.106 "compare": false, 00:14:57.106 "compare_and_write": false, 00:14:57.106 "abort": true, 00:14:57.106 "seek_hole": false, 00:14:57.106 "seek_data": false, 00:14:57.106 "copy": true, 00:14:57.106 "nvme_iov_md": false 00:14:57.106 }, 00:14:57.106 "memory_domains": [ 00:14:57.106 { 00:14:57.106 "dma_device_id": "system", 00:14:57.106 "dma_device_type": 1 00:14:57.106 }, 00:14:57.106 { 00:14:57.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.106 "dma_device_type": 2 00:14:57.106 } 00:14:57.106 ], 00:14:57.106 "driver_specific": { 00:14:57.106 "passthru": { 00:14:57.106 "name": "pt4", 00:14:57.106 "base_bdev_name": "malloc4" 00:14:57.106 } 00:14:57.106 } 00:14:57.106 }' 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:57.106 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:57.365 [2024-07-15 21:51:12.490665] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.365 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=58bca1ee-42f4-11ef-9f7f-e9a656123a8b 00:14:57.365 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 58bca1ee-42f4-11ef-9f7f-e9a656123a8b ']' 00:14:57.365 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:57.623 [2024-07-15 21:51:12.706635] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.623 [2024-07-15 21:51:12.706656] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.623 [2024-07-15 21:51:12.706693] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.623 [2024-07-15 21:51:12.706709] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.623 [2024-07-15 21:51:12.706713] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31c858c35900 name raid_bdev1, state offline 00:14:57.623 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.623 21:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:57.882 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:57.882 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:57.882 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.882 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:58.141 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:58.141 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:58.400 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:58.400 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:58.659 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:58.659 21:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:58.920 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:58.920 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:14:59.486 [2024-07-15 21:51:14.582699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:59.486 [2024-07-15 21:51:14.583363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:59.486 [2024-07-15 21:51:14.583418] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:59.486 [2024-07-15 21:51:14.583449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:59.486 [2024-07-15 21:51:14.583463] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:59.486 [2024-07-15 21:51:14.583505] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:59.486 [2024-07-15 21:51:14.583516] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:59.486 [2024-07-15 21:51:14.583526] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:59.486 [2024-07-15 21:51:14.583533] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.486 [2024-07-15 21:51:14.583537] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31c858c35680 name raid_bdev1, state configuring 00:14:59.486 request: 00:14:59.486 { 00:14:59.486 "name": "raid_bdev1", 00:14:59.486 "raid_level": "concat", 00:14:59.486 "base_bdevs": [ 00:14:59.486 "malloc1", 00:14:59.486 "malloc2", 00:14:59.486 "malloc3", 00:14:59.486 "malloc4" 00:14:59.486 ], 00:14:59.486 "strip_size_kb": 64, 00:14:59.486 "superblock": false, 00:14:59.486 "method": "bdev_raid_create", 00:14:59.486 "req_id": 1 00:14:59.486 } 00:14:59.486 Got JSON-RPC error response 00:14:59.486 response: 00:14:59.486 { 00:14:59.486 "code": -17, 00:14:59.486 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:59.486 } 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.486 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:59.744 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:59.744 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:59.744 21:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.003 [2024-07-15 21:51:15.070791] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.003 [2024-07-15 21:51:15.070850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.003 [2024-07-15 21:51:15.070879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c35180 00:15:00.003 [2024-07-15 21:51:15.070887] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.003 [2024-07-15 21:51:15.071638] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.003 [2024-07-15 21:51:15.071680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.003 [2024-07-15 21:51:15.071746] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.003 [2024-07-15 21:51:15.071774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.003 pt1 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.003 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.263 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.263 "name": "raid_bdev1", 00:15:00.263 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:15:00.263 "strip_size_kb": 64, 00:15:00.263 "state": "configuring", 00:15:00.263 "raid_level": "concat", 00:15:00.263 "superblock": true, 00:15:00.263 "num_base_bdevs": 4, 00:15:00.263 "num_base_bdevs_discovered": 1, 00:15:00.263 "num_base_bdevs_operational": 4, 00:15:00.263 "base_bdevs_list": [ 00:15:00.263 { 00:15:00.263 "name": "pt1", 00:15:00.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.263 "is_configured": true, 00:15:00.263 "data_offset": 2048, 00:15:00.263 "data_size": 63488 00:15:00.263 }, 00:15:00.263 { 00:15:00.263 "name": null, 00:15:00.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.263 "is_configured": false, 00:15:00.263 "data_offset": 2048, 00:15:00.263 "data_size": 63488 00:15:00.263 }, 00:15:00.263 { 00:15:00.263 "name": null, 00:15:00.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.263 "is_configured": false, 00:15:00.263 "data_offset": 2048, 00:15:00.263 "data_size": 63488 00:15:00.263 }, 00:15:00.263 { 00:15:00.263 "name": null, 
00:15:00.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.263 "is_configured": false, 00:15:00.263 "data_offset": 2048, 00:15:00.263 "data_size": 63488 00:15:00.263 } 00:15:00.263 ] 00:15:00.263 }' 00:15:00.263 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.263 21:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.521 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:00.521 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:00.779 [2024-07-15 21:51:15.762826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:00.779 [2024-07-15 21:51:15.762888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.779 [2024-07-15 21:51:15.762916] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c34780 00:15:00.779 [2024-07-15 21:51:15.762924] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.779 [2024-07-15 21:51:15.763044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.779 [2024-07-15 21:51:15.763087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:00.779 [2024-07-15 21:51:15.763121] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:00.779 [2024-07-15 21:51:15.763130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.779 pt2 00:15:00.779 21:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:01.038 [2024-07-15 21:51:16.066844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.038 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.297 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.297 "name": 
"raid_bdev1", 00:15:01.297 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:15:01.297 "strip_size_kb": 64, 00:15:01.297 "state": "configuring", 00:15:01.297 "raid_level": "concat", 00:15:01.297 "superblock": true, 00:15:01.297 "num_base_bdevs": 4, 00:15:01.297 "num_base_bdevs_discovered": 1, 00:15:01.297 "num_base_bdevs_operational": 4, 00:15:01.297 "base_bdevs_list": [ 00:15:01.297 { 00:15:01.297 "name": "pt1", 00:15:01.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.297 "is_configured": true, 00:15:01.297 "data_offset": 2048, 00:15:01.297 "data_size": 63488 00:15:01.297 }, 00:15:01.297 { 00:15:01.297 "name": null, 00:15:01.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.297 "is_configured": false, 00:15:01.297 "data_offset": 2048, 00:15:01.297 "data_size": 63488 00:15:01.297 }, 00:15:01.297 { 00:15:01.297 "name": null, 00:15:01.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.297 "is_configured": false, 00:15:01.297 "data_offset": 2048, 00:15:01.297 "data_size": 63488 00:15:01.297 }, 00:15:01.297 { 00:15:01.297 "name": null, 00:15:01.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.297 "is_configured": false, 00:15:01.297 "data_offset": 2048, 00:15:01.297 "data_size": 63488 00:15:01.297 } 00:15:01.297 ] 00:15:01.297 }' 00:15:01.297 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.297 21:51:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.580 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:01.580 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:01.580 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.855 [2024-07-15 21:51:16.882870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.855 [2024-07-15 21:51:16.882914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.855 [2024-07-15 21:51:16.882941] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c34780 00:15:01.855 [2024-07-15 21:51:16.882949] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.855 [2024-07-15 21:51:16.883097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.855 [2024-07-15 21:51:16.883116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.855 [2024-07-15 21:51:16.883140] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.855 [2024-07-15 21:51:16.883149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.855 pt2 00:15:01.855 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:01.855 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:01.855 21:51:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.121 [2024-07-15 21:51:17.098866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.121 [2024-07-15 21:51:17.098909] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:02.121 [2024-07-15 21:51:17.098937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c35b80 00:15:02.121 [2024-07-15 21:51:17.098944] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.121 [2024-07-15 21:51:17.099061] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.121 [2024-07-15 21:51:17.099079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.121 [2024-07-15 21:51:17.099103] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:02.121 [2024-07-15 21:51:17.099111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.121 pt3 00:15:02.121 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:02.121 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:02.121 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:02.121 [2024-07-15 21:51:17.294863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.121 [2024-07-15 21:51:17.294888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.121 [2024-07-15 21:51:17.294916] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x31c858c35900 00:15:02.121 [2024-07-15 21:51:17.294923] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.121 [2024-07-15 21:51:17.295011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.121 [2024-07-15 21:51:17.295022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:02.121 [2024-07-15 21:51:17.295055] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:02.121 [2024-07-15 21:51:17.295064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.121 [2024-07-15 21:51:17.295089] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x31c858c34c80 00:15:02.121 [2024-07-15 21:51:17.295094] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:02.121 [2024-07-15 21:51:17.295129] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31c858c97e20 00:15:02.121 [2024-07-15 21:51:17.295206] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x31c858c34c80 00:15:02.121 [2024-07-15 21:51:17.295211] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x31c858c34c80 00:15:02.121 [2024-07-15 21:51:17.295241] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.121 pt4 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.379 
21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.379 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.637 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.637 "name": "raid_bdev1", 00:15:02.637 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:15:02.637 "strip_size_kb": 64, 00:15:02.637 "state": "online", 00:15:02.637 "raid_level": "concat", 00:15:02.637 "superblock": true, 00:15:02.637 "num_base_bdevs": 4, 00:15:02.637 "num_base_bdevs_discovered": 4, 00:15:02.637 "num_base_bdevs_operational": 4, 00:15:02.637 "base_bdevs_list": [ 00:15:02.637 { 00:15:02.637 "name": "pt1", 00:15:02.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.637 "is_configured": true, 00:15:02.637 "data_offset": 2048, 00:15:02.637 "data_size": 63488 00:15:02.637 }, 00:15:02.637 { 00:15:02.637 "name": "pt2", 00:15:02.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.637 "is_configured": true, 00:15:02.637 "data_offset": 2048, 00:15:02.637 "data_size": 63488 00:15:02.637 }, 00:15:02.637 { 00:15:02.637 "name": "pt3", 00:15:02.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.637 "is_configured": true, 00:15:02.637 "data_offset": 2048, 00:15:02.637 "data_size": 63488 00:15:02.637 }, 00:15:02.637 { 00:15:02.637 "name": "pt4", 00:15:02.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.637 "is_configured": true, 00:15:02.637 "data_offset": 2048, 00:15:02.637 "data_size": 63488 00:15:02.637 } 00:15:02.637 ] 00:15:02.637 }' 00:15:02.637 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.637 21:51:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.895 21:51:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq 
'.[]' 00:15:03.152 [2024-07-15 21:51:18.158972] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.152 "name": "raid_bdev1", 00:15:03.152 "aliases": [ 00:15:03.152 "58bca1ee-42f4-11ef-9f7f-e9a656123a8b" 00:15:03.152 ], 00:15:03.152 "product_name": "Raid Volume", 00:15:03.152 "block_size": 512, 00:15:03.152 "num_blocks": 253952, 00:15:03.152 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:15:03.152 "assigned_rate_limits": { 00:15:03.152 "rw_ios_per_sec": 0, 00:15:03.152 "rw_mbytes_per_sec": 0, 00:15:03.152 "r_mbytes_per_sec": 0, 00:15:03.152 "w_mbytes_per_sec": 0 00:15:03.152 }, 00:15:03.152 "claimed": false, 00:15:03.152 "zoned": false, 00:15:03.152 "supported_io_types": { 00:15:03.152 "read": true, 00:15:03.152 "write": true, 00:15:03.152 "unmap": true, 00:15:03.152 "flush": true, 00:15:03.152 "reset": true, 00:15:03.152 "nvme_admin": false, 00:15:03.152 "nvme_io": false, 00:15:03.152 "nvme_io_md": false, 00:15:03.152 "write_zeroes": true, 00:15:03.152 "zcopy": false, 00:15:03.152 "get_zone_info": false, 00:15:03.152 "zone_management": false, 00:15:03.152 "zone_append": false, 00:15:03.152 "compare": false, 00:15:03.152 "compare_and_write": false, 00:15:03.152 "abort": false, 00:15:03.152 "seek_hole": false, 00:15:03.152 "seek_data": false, 00:15:03.152 "copy": false, 00:15:03.152 "nvme_iov_md": false 00:15:03.152 }, 00:15:03.152 "memory_domains": [ 00:15:03.152 { 00:15:03.152 "dma_device_id": "system", 00:15:03.152 "dma_device_type": 1 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.152 "dma_device_type": 2 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "system", 00:15:03.152 "dma_device_type": 1 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.152 "dma_device_type": 2 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "system", 00:15:03.152 "dma_device_type": 1 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.152 "dma_device_type": 2 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "system", 00:15:03.152 "dma_device_type": 1 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.152 "dma_device_type": 2 00:15:03.152 } 00:15:03.152 ], 00:15:03.152 "driver_specific": { 00:15:03.152 "raid": { 00:15:03.152 "uuid": "58bca1ee-42f4-11ef-9f7f-e9a656123a8b", 00:15:03.152 "strip_size_kb": 64, 00:15:03.152 "state": "online", 00:15:03.152 "raid_level": "concat", 00:15:03.152 "superblock": true, 00:15:03.152 "num_base_bdevs": 4, 00:15:03.152 "num_base_bdevs_discovered": 4, 00:15:03.152 "num_base_bdevs_operational": 4, 00:15:03.152 "base_bdevs_list": [ 00:15:03.152 { 00:15:03.152 "name": "pt1", 00:15:03.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.152 "is_configured": true, 00:15:03.152 "data_offset": 2048, 00:15:03.152 "data_size": 63488 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "name": "pt2", 00:15:03.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.152 "is_configured": true, 00:15:03.152 "data_offset": 2048, 00:15:03.152 "data_size": 63488 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "name": "pt3", 00:15:03.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.152 "is_configured": true, 00:15:03.152 "data_offset": 2048, 00:15:03.152 "data_size": 63488 00:15:03.152 }, 00:15:03.152 { 00:15:03.152 "name": "pt4", 
00:15:03.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:03.152 "is_configured": true, 00:15:03.152 "data_offset": 2048, 00:15:03.152 "data_size": 63488 00:15:03.152 } 00:15:03.152 ] 00:15:03.152 } 00:15:03.152 } 00:15:03.152 }' 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:03.152 pt2 00:15:03.152 pt3 00:15:03.152 pt4' 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:03.152 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.410 "name": "pt1", 00:15:03.410 "aliases": [ 00:15:03.410 "00000000-0000-0000-0000-000000000001" 00:15:03.410 ], 00:15:03.410 "product_name": "passthru", 00:15:03.410 "block_size": 512, 00:15:03.410 "num_blocks": 65536, 00:15:03.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.410 "assigned_rate_limits": { 00:15:03.410 "rw_ios_per_sec": 0, 00:15:03.410 "rw_mbytes_per_sec": 0, 00:15:03.410 "r_mbytes_per_sec": 0, 00:15:03.410 "w_mbytes_per_sec": 0 00:15:03.410 }, 00:15:03.410 "claimed": true, 00:15:03.410 "claim_type": "exclusive_write", 00:15:03.410 "zoned": false, 00:15:03.410 "supported_io_types": { 00:15:03.410 "read": true, 00:15:03.410 "write": true, 00:15:03.410 "unmap": true, 00:15:03.410 "flush": true, 00:15:03.410 "reset": true, 00:15:03.410 "nvme_admin": false, 00:15:03.410 "nvme_io": false, 00:15:03.410 "nvme_io_md": false, 00:15:03.410 "write_zeroes": true, 00:15:03.410 "zcopy": true, 00:15:03.410 "get_zone_info": false, 00:15:03.410 "zone_management": false, 00:15:03.410 "zone_append": false, 00:15:03.410 "compare": false, 00:15:03.410 "compare_and_write": false, 00:15:03.410 "abort": true, 00:15:03.410 "seek_hole": false, 00:15:03.410 "seek_data": false, 00:15:03.410 "copy": true, 00:15:03.410 "nvme_iov_md": false 00:15:03.410 }, 00:15:03.410 "memory_domains": [ 00:15:03.410 { 00:15:03.410 "dma_device_id": "system", 00:15:03.410 "dma_device_type": 1 00:15:03.410 }, 00:15:03.410 { 00:15:03.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.410 "dma_device_type": 2 00:15:03.410 } 00:15:03.410 ], 00:15:03.410 "driver_specific": { 00:15:03.410 "passthru": { 00:15:03.410 "name": "pt1", 00:15:03.410 "base_bdev_name": "malloc1" 00:15:03.410 } 00:15:03.410 } 00:15:03.410 }' 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:03.410 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.668 "name": "pt2", 00:15:03.668 "aliases": [ 00:15:03.668 "00000000-0000-0000-0000-000000000002" 00:15:03.668 ], 00:15:03.668 "product_name": "passthru", 00:15:03.668 "block_size": 512, 00:15:03.668 "num_blocks": 65536, 00:15:03.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.668 "assigned_rate_limits": { 00:15:03.668 "rw_ios_per_sec": 0, 00:15:03.668 "rw_mbytes_per_sec": 0, 00:15:03.668 "r_mbytes_per_sec": 0, 00:15:03.668 "w_mbytes_per_sec": 0 00:15:03.668 }, 00:15:03.668 "claimed": true, 00:15:03.668 "claim_type": "exclusive_write", 00:15:03.668 "zoned": false, 00:15:03.668 "supported_io_types": { 00:15:03.668 "read": true, 00:15:03.668 "write": true, 00:15:03.668 "unmap": true, 00:15:03.668 "flush": true, 00:15:03.668 "reset": true, 00:15:03.668 "nvme_admin": false, 00:15:03.668 "nvme_io": false, 00:15:03.668 "nvme_io_md": false, 00:15:03.668 "write_zeroes": true, 00:15:03.668 "zcopy": true, 00:15:03.668 "get_zone_info": false, 00:15:03.668 "zone_management": false, 00:15:03.668 "zone_append": false, 00:15:03.668 "compare": false, 00:15:03.668 "compare_and_write": false, 00:15:03.668 "abort": true, 00:15:03.668 "seek_hole": false, 00:15:03.668 "seek_data": false, 00:15:03.668 "copy": true, 00:15:03.668 "nvme_iov_md": false 00:15:03.668 }, 00:15:03.668 "memory_domains": [ 00:15:03.668 { 00:15:03.668 "dma_device_id": "system", 00:15:03.668 "dma_device_type": 1 00:15:03.668 }, 00:15:03.668 { 00:15:03.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.668 "dma_device_type": 2 00:15:03.668 } 00:15:03.668 ], 00:15:03.668 "driver_specific": { 00:15:03.668 "passthru": { 00:15:03.668 "name": "pt2", 00:15:03.668 "base_bdev_name": "malloc2" 00:15:03.668 } 00:15:03.668 } 00:15:03.668 }' 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.668 21:51:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:03.668 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.926 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.926 "name": "pt3", 00:15:03.926 "aliases": [ 00:15:03.926 "00000000-0000-0000-0000-000000000003" 00:15:03.926 ], 00:15:03.926 "product_name": "passthru", 00:15:03.926 "block_size": 512, 00:15:03.926 "num_blocks": 65536, 00:15:03.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.926 "assigned_rate_limits": { 00:15:03.926 "rw_ios_per_sec": 0, 00:15:03.926 "rw_mbytes_per_sec": 0, 00:15:03.926 "r_mbytes_per_sec": 0, 00:15:03.926 "w_mbytes_per_sec": 0 00:15:03.926 }, 00:15:03.926 "claimed": true, 00:15:03.926 "claim_type": "exclusive_write", 00:15:03.926 "zoned": false, 00:15:03.926 "supported_io_types": { 00:15:03.926 "read": true, 00:15:03.926 "write": true, 00:15:03.926 "unmap": true, 00:15:03.926 "flush": true, 00:15:03.926 "reset": true, 00:15:03.926 "nvme_admin": false, 00:15:03.926 "nvme_io": false, 00:15:03.926 "nvme_io_md": false, 00:15:03.926 "write_zeroes": true, 00:15:03.926 "zcopy": true, 00:15:03.926 "get_zone_info": false, 00:15:03.926 "zone_management": false, 00:15:03.926 "zone_append": false, 00:15:03.926 "compare": false, 00:15:03.926 "compare_and_write": false, 00:15:03.926 "abort": true, 00:15:03.926 "seek_hole": false, 00:15:03.926 "seek_data": false, 00:15:03.926 "copy": true, 00:15:03.926 "nvme_iov_md": false 00:15:03.926 }, 00:15:03.926 "memory_domains": [ 00:15:03.926 { 00:15:03.926 "dma_device_id": "system", 00:15:03.926 "dma_device_type": 1 00:15:03.926 }, 00:15:03.926 { 00:15:03.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.926 "dma_device_type": 2 00:15:03.926 } 00:15:03.926 ], 00:15:03.926 "driver_specific": { 00:15:03.926 "passthru": { 00:15:03.926 "name": "pt3", 00:15:03.926 "base_bdev_name": "malloc3" 00:15:03.926 } 00:15:03.926 } 00:15:03.926 }' 00:15:03.926 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.926 21:51:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.926 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:04.185 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.185 "name": "pt4", 00:15:04.185 "aliases": [ 00:15:04.185 "00000000-0000-0000-0000-000000000004" 00:15:04.185 ], 00:15:04.185 "product_name": "passthru", 00:15:04.185 "block_size": 512, 00:15:04.185 "num_blocks": 65536, 00:15:04.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.185 "assigned_rate_limits": { 00:15:04.185 "rw_ios_per_sec": 0, 00:15:04.185 "rw_mbytes_per_sec": 0, 00:15:04.185 "r_mbytes_per_sec": 0, 00:15:04.185 "w_mbytes_per_sec": 0 00:15:04.185 }, 00:15:04.185 "claimed": true, 00:15:04.185 "claim_type": "exclusive_write", 00:15:04.185 "zoned": false, 00:15:04.185 "supported_io_types": { 00:15:04.185 "read": true, 00:15:04.185 "write": true, 00:15:04.185 "unmap": true, 00:15:04.185 "flush": true, 00:15:04.185 "reset": true, 00:15:04.185 "nvme_admin": false, 00:15:04.185 "nvme_io": false, 00:15:04.185 "nvme_io_md": false, 00:15:04.185 "write_zeroes": true, 00:15:04.185 "zcopy": true, 00:15:04.185 "get_zone_info": false, 00:15:04.185 "zone_management": false, 00:15:04.185 "zone_append": false, 00:15:04.185 "compare": false, 00:15:04.185 "compare_and_write": false, 00:15:04.185 "abort": true, 00:15:04.185 "seek_hole": false, 00:15:04.185 "seek_data": false, 00:15:04.185 "copy": true, 00:15:04.185 "nvme_iov_md": false 00:15:04.185 }, 00:15:04.185 "memory_domains": [ 00:15:04.185 { 00:15:04.185 "dma_device_id": "system", 00:15:04.185 "dma_device_type": 1 00:15:04.185 }, 00:15:04.185 { 00:15:04.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.185 "dma_device_type": 2 00:15:04.185 } 00:15:04.185 ], 00:15:04.185 "driver_specific": { 00:15:04.185 "passthru": { 00:15:04.185 "name": "pt4", 00:15:04.185 "base_bdev_name": "malloc4" 00:15:04.185 } 00:15:04.185 } 00:15:04.185 }' 00:15:04.185 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.185 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.443 21:51:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.443 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:04.701 [2024-07-15 21:51:19.719004] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 58bca1ee-42f4-11ef-9f7f-e9a656123a8b '!=' 58bca1ee-42f4-11ef-9f7f-e9a656123a8b ']' 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62280 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 62280 ']' 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 62280 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 62280 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:15:04.702 killing process with pid 62280 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 62280' 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 62280 00:15:04.702 [2024-07-15 21:51:19.751564] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.702 [2024-07-15 21:51:19.751585] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.702 [2024-07-15 21:51:19.751600] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.702 [2024-07-15 21:51:19.751604] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31c858c34c80 name raid_bdev1, state offline 00:15:04.702 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 62280 00:15:04.702 [2024-07-15 21:51:19.778503] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.960 21:51:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:04.960 00:15:04.960 real 0m13.286s 00:15:04.960 user 0m23.207s 00:15:04.960 sys 0m2.561s 00:15:04.960 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:04.961 21:51:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.961 ************************************ 00:15:04.961 END TEST raid_superblock_test 00:15:04.961 ************************************ 00:15:04.961 21:51:20 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:15:04.961 21:51:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:15:04.961 21:51:20 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:15:04.961 21:51:20 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:04.961 21:51:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.961 ************************************ 00:15:04.961 START TEST raid_read_error_test 00:15:04.961 ************************************ 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 4 read 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ckKNvFWjUi 
00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62681 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62681 /var/tmp/spdk-raid.sock 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 62681 ']' 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:04.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:04.961 21:51:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.961 [2024-07-15 21:51:20.035375] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:04.961 [2024-07-15 21:51:20.035575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:05.896 EAL: TSC is not safe to use in SMP mode 00:15:05.896 EAL: TSC is not invariant 00:15:05.896 [2024-07-15 21:51:20.861146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.896 [2024-07-15 21:51:20.963681] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:05.896 [2024-07-15 21:51:20.966029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.896 [2024-07-15 21:51:20.966936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.896 [2024-07-15 21:51:20.966953] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.896 21:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:05.896 21:51:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:15:05.896 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:05.896 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:06.154 BaseBdev1_malloc 00:15:06.154 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:06.413 true 00:15:06.413 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:06.671 [2024-07-15 21:51:21.774011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:06.671 [2024-07-15 21:51:21.774077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.671 [2024-07-15 21:51:21.774137] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3f71834780 00:15:06.671 [2024-07-15 21:51:21.774147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.671 [2024-07-15 21:51:21.774902] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.671 [2024-07-15 21:51:21.774946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.671 BaseBdev1 00:15:06.671 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:06.671 21:51:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:06.929 BaseBdev2_malloc 00:15:06.929 21:51:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:07.187 true 00:15:07.188 21:51:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:07.446 [2024-07-15 21:51:22.518063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:07.446 [2024-07-15 21:51:22.518150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.446 [2024-07-15 21:51:22.518199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3f71834c80 00:15:07.446 [2024-07-15 21:51:22.518208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.446 [2024-07-15 21:51:22.518972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.446 [2024-07-15 21:51:22.519031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:15:07.446 BaseBdev2 00:15:07.446 21:51:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:07.446 21:51:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:07.705 BaseBdev3_malloc 00:15:07.705 21:51:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:07.963 true 00:15:07.963 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:08.222 [2024-07-15 21:51:23.310083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:08.222 [2024-07-15 21:51:23.310154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.222 [2024-07-15 21:51:23.310198] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3f71835180 00:15:08.222 [2024-07-15 21:51:23.310206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.222 [2024-07-15 21:51:23.311016] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.222 [2024-07-15 21:51:23.311044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:08.222 BaseBdev3 00:15:08.222 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:08.222 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:08.481 BaseBdev4_malloc 00:15:08.481 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:08.739 true 00:15:08.739 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:08.998 [2024-07-15 21:51:23.978109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:08.998 [2024-07-15 21:51:23.978177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.998 [2024-07-15 21:51:23.978218] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3f71835680 00:15:08.998 [2024-07-15 21:51:23.978227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.998 [2024-07-15 21:51:23.978995] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.998 [2024-07-15 21:51:23.979025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:08.998 BaseBdev4 00:15:08.998 21:51:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:09.257 [2024-07-15 21:51:24.234188] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.257 [2024-07-15 21:51:24.234865] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.257 [2024-07-15 21:51:24.234935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.257 [2024-07-15 21:51:24.234950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.257 [2024-07-15 21:51:24.235011] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b3f71835900 00:15:09.257 [2024-07-15 21:51:24.235018] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:09.257 [2024-07-15 21:51:24.235054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3f718a0e20 00:15:09.257 [2024-07-15 21:51:24.235143] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b3f71835900 00:15:09.257 [2024-07-15 21:51:24.235151] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b3f71835900 00:15:09.257 [2024-07-15 21:51:24.235176] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.257 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.516 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.516 "name": "raid_bdev1", 00:15:09.516 "uuid": "6138d158-42f4-11ef-9f7f-e9a656123a8b", 00:15:09.516 "strip_size_kb": 64, 00:15:09.516 "state": "online", 00:15:09.516 "raid_level": "concat", 00:15:09.516 "superblock": true, 00:15:09.516 "num_base_bdevs": 4, 00:15:09.516 "num_base_bdevs_discovered": 4, 00:15:09.516 "num_base_bdevs_operational": 4, 00:15:09.516 "base_bdevs_list": [ 00:15:09.516 { 00:15:09.516 "name": "BaseBdev1", 00:15:09.516 "uuid": "e4aabb71-1d40-745b-a67d-00f8a78e0d78", 00:15:09.516 "is_configured": true, 00:15:09.516 "data_offset": 2048, 00:15:09.516 "data_size": 63488 00:15:09.516 }, 00:15:09.516 { 00:15:09.516 "name": "BaseBdev2", 00:15:09.516 "uuid": "ba5bab40-dc6c-d25b-bc82-4efd0f1de6c7", 00:15:09.516 "is_configured": true, 00:15:09.516 "data_offset": 2048, 00:15:09.516 "data_size": 63488 00:15:09.516 }, 00:15:09.516 { 00:15:09.516 "name": "BaseBdev3", 00:15:09.516 "uuid": 
"bac37b34-8a16-335b-ac99-5cd393b01427", 00:15:09.516 "is_configured": true, 00:15:09.516 "data_offset": 2048, 00:15:09.516 "data_size": 63488 00:15:09.516 }, 00:15:09.516 { 00:15:09.516 "name": "BaseBdev4", 00:15:09.516 "uuid": "ce0517ef-27f2-3f51-b6bb-feea13943896", 00:15:09.516 "is_configured": true, 00:15:09.516 "data_offset": 2048, 00:15:09.516 "data_size": 63488 00:15:09.516 } 00:15:09.516 ] 00:15:09.516 }' 00:15:09.516 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.516 21:51:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.819 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:09.819 21:51:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:09.819 [2024-07-15 21:51:24.882369] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3f718a0ec0 00:15:10.757 21:51:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.016 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.275 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.275 "name": "raid_bdev1", 00:15:11.275 "uuid": "6138d158-42f4-11ef-9f7f-e9a656123a8b", 00:15:11.275 "strip_size_kb": 64, 00:15:11.275 "state": "online", 00:15:11.275 "raid_level": "concat", 00:15:11.275 "superblock": true, 00:15:11.275 "num_base_bdevs": 4, 00:15:11.275 "num_base_bdevs_discovered": 4, 00:15:11.275 "num_base_bdevs_operational": 4, 00:15:11.275 "base_bdevs_list": [ 00:15:11.275 { 00:15:11.275 "name": "BaseBdev1", 00:15:11.275 "uuid": 
"e4aabb71-1d40-745b-a67d-00f8a78e0d78", 00:15:11.275 "is_configured": true, 00:15:11.275 "data_offset": 2048, 00:15:11.275 "data_size": 63488 00:15:11.275 }, 00:15:11.275 { 00:15:11.275 "name": "BaseBdev2", 00:15:11.275 "uuid": "ba5bab40-dc6c-d25b-bc82-4efd0f1de6c7", 00:15:11.275 "is_configured": true, 00:15:11.275 "data_offset": 2048, 00:15:11.275 "data_size": 63488 00:15:11.275 }, 00:15:11.275 { 00:15:11.275 "name": "BaseBdev3", 00:15:11.275 "uuid": "bac37b34-8a16-335b-ac99-5cd393b01427", 00:15:11.275 "is_configured": true, 00:15:11.275 "data_offset": 2048, 00:15:11.275 "data_size": 63488 00:15:11.275 }, 00:15:11.275 { 00:15:11.275 "name": "BaseBdev4", 00:15:11.275 "uuid": "ce0517ef-27f2-3f51-b6bb-feea13943896", 00:15:11.275 "is_configured": true, 00:15:11.275 "data_offset": 2048, 00:15:11.275 "data_size": 63488 00:15:11.275 } 00:15:11.275 ] 00:15:11.275 }' 00:15:11.275 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.275 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.533 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:11.792 [2024-07-15 21:51:26.912027] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.792 [2024-07-15 21:51:26.912058] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.792 [2024-07-15 21:51:26.912407] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.792 [2024-07-15 21:51:26.912418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.792 [2024-07-15 21:51:26.912443] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.792 [2024-07-15 21:51:26.912448] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b3f71835900 name raid_bdev1, state offline 00:15:11.792 0 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62681 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 62681 ']' 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 62681 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 62681 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:15:11.793 killing process with pid 62681 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 62681' 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 62681 00:15:11.793 [2024-07-15 21:51:26.942356] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.793 21:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 62681 00:15:11.793 [2024-07-15 21:51:26.964753] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ckKNvFWjUi 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:15:12.052 00:15:12.052 real 0m7.115s 00:15:12.052 user 0m11.011s 00:15:12.052 sys 0m1.422s 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:12.052 21:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.052 ************************************ 00:15:12.052 END TEST raid_read_error_test 00:15:12.052 ************************************ 00:15:12.052 21:51:27 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:15:12.052 21:51:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:12.052 21:51:27 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:15:12.052 21:51:27 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:12.052 21:51:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.052 ************************************ 00:15:12.052 START TEST raid_write_error_test 00:15:12.052 ************************************ 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test concat 4 write 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.PP6tWdRjiS 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62815 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62815 /var/tmp/spdk-raid.sock 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 62815 ']' 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:12.052 21:51:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.053 [2024-07-15 21:51:27.189319] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:12.053 [2024-07-15 21:51:27.189551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:12.620 EAL: TSC is not safe to use in SMP mode 00:15:12.620 EAL: TSC is not invariant 00:15:12.620 [2024-07-15 21:51:27.713023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.620 [2024-07-15 21:51:27.788902] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
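At this point the write-variant bdevperf is up and the trace repeats the same construction as the read test; the only functional difference is the I/O type handed to the error layer. A sketch of the inject-and-measure sequence both variants run once the array is online, using the commands and log path visible in the xtrace (the final comparison mirrors the test's own check):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    bdevperf_log=/raidtest/tmp.PP6tWdRjiS   # the mktemp -p /raidtest file from the trace
    # arm the error layer: 'write failure' here, 'read failure' in the read test
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # start the 60s randrw job bdevperf was launched with (-z makes it wait for this RPC)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
    # column 6 of the raid_bdev1 result line is the failures-per-second figure;
    # concat has no redundancy, so the test only asserts it is non-zero
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]

In this run the read test measured 0.49 failures per second and the write test 0.51, both satisfying the non-zero check.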
00:15:12.620 [2024-07-15 21:51:27.791186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.620 [2024-07-15 21:51:27.792078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.620 [2024-07-15 21:51:27.792093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.188 21:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:13.188 21:51:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:15:13.188 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:13.188 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.447 BaseBdev1_malloc 00:15:13.447 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:13.706 true 00:15:13.706 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:13.706 [2024-07-15 21:51:28.875908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:13.706 [2024-07-15 21:51:28.875977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.706 [2024-07-15 21:51:28.876011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3139fc234780 00:15:13.706 [2024-07-15 21:51:28.876020] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.706 [2024-07-15 21:51:28.876455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.706 [2024-07-15 21:51:28.876486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.706 BaseBdev1 00:15:13.706 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:13.706 21:51:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.965 BaseBdev2_malloc 00:15:13.965 21:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:14.223 true 00:15:14.223 21:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:14.481 [2024-07-15 21:51:29.647941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:14.481 [2024-07-15 21:51:29.648009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.481 [2024-07-15 21:51:29.648053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3139fc234c80 00:15:14.481 [2024-07-15 21:51:29.648061] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.481 [2024-07-15 21:51:29.648804] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.481 [2024-07-15 21:51:29.648832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:15:14.481 BaseBdev2 00:15:14.481 21:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.481 21:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.741 BaseBdev3_malloc 00:15:14.741 21:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:14.999 true 00:15:14.999 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:15.258 [2024-07-15 21:51:30.287949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:15.258 [2024-07-15 21:51:30.288025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.258 [2024-07-15 21:51:30.288068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3139fc235180 00:15:15.258 [2024-07-15 21:51:30.288076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.258 [2024-07-15 21:51:30.288860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.258 [2024-07-15 21:51:30.288905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.258 BaseBdev3 00:15:15.258 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:15.258 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:15.517 BaseBdev4_malloc 00:15:15.517 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:15.776 true 00:15:15.776 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:15.776 [2024-07-15 21:51:30.915977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:15.776 [2024-07-15 21:51:30.916046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.776 [2024-07-15 21:51:30.916087] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3139fc235680 00:15:15.776 [2024-07-15 21:51:30.916096] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.776 [2024-07-15 21:51:30.916741] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.776 [2024-07-15 21:51:30.916770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:15.776 BaseBdev4 00:15:15.776 21:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:16.035 [2024-07-15 21:51:31.164007] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.035 [2024-07-15 21:51:31.164647] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.035 [2024-07-15 21:51:31.164677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.035 [2024-07-15 21:51:31.164692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:16.035 [2024-07-15 21:51:31.164759] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3139fc235900 00:15:16.035 [2024-07-15 21:51:31.164765] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:16.035 [2024-07-15 21:51:31.164858] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3139fc2a0e20 00:15:16.035 [2024-07-15 21:51:31.164986] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3139fc235900 00:15:16.035 [2024-07-15 21:51:31.164992] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3139fc235900 00:15:16.035 [2024-07-15 21:51:31.165019] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.036 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.295 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.295 "name": "raid_bdev1", 00:15:16.295 "uuid": "655a39b8-42f4-11ef-9f7f-e9a656123a8b", 00:15:16.295 "strip_size_kb": 64, 00:15:16.295 "state": "online", 00:15:16.295 "raid_level": "concat", 00:15:16.295 "superblock": true, 00:15:16.295 "num_base_bdevs": 4, 00:15:16.295 "num_base_bdevs_discovered": 4, 00:15:16.295 "num_base_bdevs_operational": 4, 00:15:16.295 "base_bdevs_list": [ 00:15:16.295 { 00:15:16.295 "name": "BaseBdev1", 00:15:16.295 "uuid": "ccfd092c-9731-0d55-bb68-f101805dac01", 00:15:16.295 "is_configured": true, 00:15:16.295 "data_offset": 2048, 00:15:16.295 "data_size": 63488 00:15:16.295 }, 00:15:16.295 { 00:15:16.295 "name": "BaseBdev2", 00:15:16.295 "uuid": "b77f9664-a74a-0151-8a0f-9441062468fd", 00:15:16.295 "is_configured": true, 00:15:16.295 "data_offset": 2048, 00:15:16.295 "data_size": 63488 00:15:16.295 }, 00:15:16.295 { 00:15:16.295 "name": "BaseBdev3", 00:15:16.295 "uuid": 
"c707af28-6247-a65d-bfe8-0aeb24216ba4", 00:15:16.295 "is_configured": true, 00:15:16.295 "data_offset": 2048, 00:15:16.295 "data_size": 63488 00:15:16.295 }, 00:15:16.295 { 00:15:16.295 "name": "BaseBdev4", 00:15:16.295 "uuid": "e9f9f41c-3cfe-d059-aa9e-050bdfe8d97a", 00:15:16.295 "is_configured": true, 00:15:16.295 "data_offset": 2048, 00:15:16.295 "data_size": 63488 00:15:16.295 } 00:15:16.295 ] 00:15:16.295 }' 00:15:16.295 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.295 21:51:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.554 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:16.554 21:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:16.812 [2024-07-15 21:51:31.776194] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3139fc2a0ec0 00:15:17.748 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.066 21:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.066 "name": "raid_bdev1", 00:15:18.066 "uuid": "655a39b8-42f4-11ef-9f7f-e9a656123a8b", 00:15:18.066 "strip_size_kb": 64, 00:15:18.066 "state": "online", 00:15:18.066 "raid_level": "concat", 00:15:18.066 "superblock": true, 00:15:18.066 "num_base_bdevs": 4, 00:15:18.066 "num_base_bdevs_discovered": 4, 00:15:18.066 "num_base_bdevs_operational": 4, 00:15:18.066 "base_bdevs_list": [ 00:15:18.066 { 00:15:18.066 "name": "BaseBdev1", 00:15:18.066 "uuid": 
"ccfd092c-9731-0d55-bb68-f101805dac01", 00:15:18.066 "is_configured": true, 00:15:18.066 "data_offset": 2048, 00:15:18.066 "data_size": 63488 00:15:18.066 }, 00:15:18.066 { 00:15:18.066 "name": "BaseBdev2", 00:15:18.066 "uuid": "b77f9664-a74a-0151-8a0f-9441062468fd", 00:15:18.066 "is_configured": true, 00:15:18.066 "data_offset": 2048, 00:15:18.066 "data_size": 63488 00:15:18.066 }, 00:15:18.066 { 00:15:18.066 "name": "BaseBdev3", 00:15:18.066 "uuid": "c707af28-6247-a65d-bfe8-0aeb24216ba4", 00:15:18.066 "is_configured": true, 00:15:18.066 "data_offset": 2048, 00:15:18.066 "data_size": 63488 00:15:18.066 }, 00:15:18.066 { 00:15:18.066 "name": "BaseBdev4", 00:15:18.066 "uuid": "e9f9f41c-3cfe-d059-aa9e-050bdfe8d97a", 00:15:18.066 "is_configured": true, 00:15:18.066 "data_offset": 2048, 00:15:18.066 "data_size": 63488 00:15:18.066 } 00:15:18.066 ] 00:15:18.066 }' 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.066 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.651 [2024-07-15 21:51:33.750116] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.651 [2024-07-15 21:51:33.750145] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.651 [2024-07-15 21:51:33.750477] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.651 [2024-07-15 21:51:33.750488] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.651 [2024-07-15 21:51:33.750496] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.651 [2024-07-15 21:51:33.750517] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3139fc235900 name raid_bdev1, state offline 00:15:18.651 0 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62815 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 62815 ']' 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 62815 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 62815 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:15:18.651 killing process with pid 62815 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 62815' 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 62815 00:15:18.651 [2024-07-15 21:51:33.785932] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.651 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 62815 00:15:18.651 [2024-07-15 
21:51:33.808018] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.PP6tWdRjiS 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:15:18.910 ************************************ 00:15:18.910 00:15:18.910 real 0m6.803s 00:15:18.910 user 0m10.669s 00:15:18.910 sys 0m1.155s 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:18.910 21:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.910 END TEST raid_write_error_test 00:15:18.910 ************************************ 00:15:18.910 21:51:34 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:15:18.910 21:51:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:18.910 21:51:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:18.910 21:51:34 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:15:18.910 21:51:34 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:18.910 21:51:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.910 ************************************ 00:15:18.910 START TEST raid_state_function_test 00:15:18.910 ************************************ 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 4 false 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62951 00:15:18.910 Process raid pid: 62951 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62951' 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62951 /var/tmp/spdk-raid.sock 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@823 -- # '[' -z 62951 ']' 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:18.910 21:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.910 [2024-07-15 21:51:34.041789] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
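The state-function test runs against bdev_svc rather than bdevperf, and its first step (traced below) is deliberately backwards: it creates the raid1 array before any base bdev exists. That is legal; the raid simply sits in the "configuring" state with num_base_bdevs_discovered at 0 until bases appear. A minimal sketch, assuming the same rpc.py path and socket as above; note there is no -z (raid1 takes no strip size, hence strip_size_kb 0 in the JSON) and no -s (superblock is false for this test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # all four names are unknown bdevs at this point; each triggers a
    # "doesn't exist now" notice and the array stays in "configuring"
    "$rpc" -s "$sock" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'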
00:15:18.910 [2024-07-15 21:51:34.042069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:19.478 EAL: TSC is not safe to use in SMP mode 00:15:19.478 EAL: TSC is not invariant 00:15:19.478 [2024-07-15 21:51:34.541000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.478 [2024-07-15 21:51:34.619221] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:19.478 [2024-07-15 21:51:34.621722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.478 [2024-07-15 21:51:34.622675] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.478 [2024-07-15 21:51:34.622692] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.046 21:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:20.046 21:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # return 0 00:15:20.046 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:20.305 [2024-07-15 21:51:35.297644] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.305 [2024-07-15 21:51:35.297715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.306 [2024-07-15 21:51:35.297738] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.306 [2024-07-15 21:51:35.297747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.306 [2024-07-15 21:51:35.297751] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.306 [2024-07-15 21:51:35.297759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.306 [2024-07-15 21:51:35.297778] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:20.306 [2024-07-15 21:51:35.297785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.306 21:51:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.306 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.565 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.565 "name": "Existed_Raid", 00:15:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.565 "strip_size_kb": 0, 00:15:20.565 "state": "configuring", 00:15:20.565 "raid_level": "raid1", 00:15:20.565 "superblock": false, 00:15:20.565 "num_base_bdevs": 4, 00:15:20.565 "num_base_bdevs_discovered": 0, 00:15:20.565 "num_base_bdevs_operational": 4, 00:15:20.565 "base_bdevs_list": [ 00:15:20.565 { 00:15:20.565 "name": "BaseBdev1", 00:15:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.565 "is_configured": false, 00:15:20.565 "data_offset": 0, 00:15:20.565 "data_size": 0 00:15:20.565 }, 00:15:20.565 { 00:15:20.565 "name": "BaseBdev2", 00:15:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.565 "is_configured": false, 00:15:20.565 "data_offset": 0, 00:15:20.565 "data_size": 0 00:15:20.565 }, 00:15:20.565 { 00:15:20.565 "name": "BaseBdev3", 00:15:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.565 "is_configured": false, 00:15:20.565 "data_offset": 0, 00:15:20.565 "data_size": 0 00:15:20.565 }, 00:15:20.565 { 00:15:20.565 "name": "BaseBdev4", 00:15:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.565 "is_configured": false, 00:15:20.565 "data_offset": 0, 00:15:20.565 "data_size": 0 00:15:20.565 } 00:15:20.565 ] 00:15:20.565 }' 00:15:20.565 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.565 21:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.824 21:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.083 [2024-07-15 21:51:36.065639] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.083 [2024-07-15 21:51:36.065664] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x49f62434500 name Existed_Raid, state configuring 00:15:21.083 21:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:21.341 [2024-07-15 21:51:36.329651] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.341 [2024-07-15 21:51:36.329698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.341 [2024-07-15 21:51:36.329703] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.341 [2024-07-15 21:51:36.329712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.341 [2024-07-15 21:51:36.329716] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.341 [2024-07-15 21:51:36.329723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.341 [2024-07-15 21:51:36.329727] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:21.341 
[2024-07-15 21:51:36.329735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:21.341 21:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.600 [2024-07-15 21:51:36.542601] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.600 BaseBdev1 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:21.600 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.859 21:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.859 [ 00:15:21.859 { 00:15:21.859 "name": "BaseBdev1", 00:15:21.859 "aliases": [ 00:15:21.859 "688ecaf0-42f4-11ef-9f7f-e9a656123a8b" 00:15:21.859 ], 00:15:21.859 "product_name": "Malloc disk", 00:15:21.859 "block_size": 512, 00:15:21.859 "num_blocks": 65536, 00:15:21.859 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:21.859 "assigned_rate_limits": { 00:15:21.859 "rw_ios_per_sec": 0, 00:15:21.859 "rw_mbytes_per_sec": 0, 00:15:21.859 "r_mbytes_per_sec": 0, 00:15:21.859 "w_mbytes_per_sec": 0 00:15:21.859 }, 00:15:21.859 "claimed": true, 00:15:21.859 "claim_type": "exclusive_write", 00:15:21.859 "zoned": false, 00:15:21.859 "supported_io_types": { 00:15:21.859 "read": true, 00:15:21.859 "write": true, 00:15:21.859 "unmap": true, 00:15:21.859 "flush": true, 00:15:21.859 "reset": true, 00:15:21.859 "nvme_admin": false, 00:15:21.859 "nvme_io": false, 00:15:21.859 "nvme_io_md": false, 00:15:21.859 "write_zeroes": true, 00:15:21.859 "zcopy": true, 00:15:21.859 "get_zone_info": false, 00:15:21.859 "zone_management": false, 00:15:21.859 "zone_append": false, 00:15:21.859 "compare": false, 00:15:21.859 "compare_and_write": false, 00:15:21.859 "abort": true, 00:15:21.859 "seek_hole": false, 00:15:21.859 "seek_data": false, 00:15:21.859 "copy": true, 00:15:21.859 "nvme_iov_md": false 00:15:21.859 }, 00:15:21.859 "memory_domains": [ 00:15:21.859 { 00:15:21.859 "dma_device_id": "system", 00:15:21.859 "dma_device_type": 1 00:15:21.859 }, 00:15:21.859 { 00:15:21.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.859 "dma_device_type": 2 00:15:21.859 } 00:15:21.859 ], 00:15:21.859 "driver_specific": {} 00:15:21.859 } 00:15:21.859 ] 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
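verify_raid_bdev_state, whose expansion the trace walks through around this point, reduces to one RPC plus field checks: fetch all raid bdevs, select the named one with jq, and compare its fields against the expected arguments. A hedged reconstruction follows; the RPC call and jq filter are verbatim from the trace, while the comparison step is an assumption about what the helper does with the locals it declares:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # assumed shape of the helper; argument order follows the call sites in
    # the trace (name, expected state, raid level, strip size, operational count)
    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 ops=$5
        local info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<<"$info") == "$expected_state" ]]
        [[ $(jq -r .raid_level <<<"$info") == "$raid_level" ]]
        [[ $(jq -r .strip_size_kb <<<"$info") == "$strip_size" ]]
        [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$ops" ]]
    }

Called as verify_raid_bdev_state Existed_Raid configuring raid1 0 4 here, and as verify_raid_bdev_state raid_bdev1 online concat 64 4 in the error tests above.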
00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.859 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.118 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.118 "name": "Existed_Raid", 00:15:22.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.118 "strip_size_kb": 0, 00:15:22.118 "state": "configuring", 00:15:22.118 "raid_level": "raid1", 00:15:22.118 "superblock": false, 00:15:22.118 "num_base_bdevs": 4, 00:15:22.118 "num_base_bdevs_discovered": 1, 00:15:22.118 "num_base_bdevs_operational": 4, 00:15:22.118 "base_bdevs_list": [ 00:15:22.118 { 00:15:22.118 "name": "BaseBdev1", 00:15:22.118 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:22.118 "is_configured": true, 00:15:22.118 "data_offset": 0, 00:15:22.118 "data_size": 65536 00:15:22.118 }, 00:15:22.118 { 00:15:22.118 "name": "BaseBdev2", 00:15:22.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.118 "is_configured": false, 00:15:22.118 "data_offset": 0, 00:15:22.118 "data_size": 0 00:15:22.118 }, 00:15:22.118 { 00:15:22.118 "name": "BaseBdev3", 00:15:22.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.118 "is_configured": false, 00:15:22.118 "data_offset": 0, 00:15:22.118 "data_size": 0 00:15:22.118 }, 00:15:22.118 { 00:15:22.118 "name": "BaseBdev4", 00:15:22.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.118 "is_configured": false, 00:15:22.118 "data_offset": 0, 00:15:22.118 "data_size": 0 00:15:22.118 } 00:15:22.118 ] 00:15:22.118 }' 00:15:22.118 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.118 21:51:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.377 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.636 [2024-07-15 21:51:37.745710] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.636 [2024-07-15 21:51:37.745759] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x49f62434500 name Existed_Raid, state configuring 00:15:22.636 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:22.896 
[2024-07-15 21:51:37.965745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.896 [2024-07-15 21:51:37.966749] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.896 [2024-07-15 21:51:37.966804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.896 [2024-07-15 21:51:37.966809] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.896 [2024-07-15 21:51:37.966834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.896 [2024-07-15 21:51:37.966837] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:22.896 [2024-07-15 21:51:37.966844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.896 21:51:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.155 21:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.155 "name": "Existed_Raid", 00:15:23.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.155 "strip_size_kb": 0, 00:15:23.155 "state": "configuring", 00:15:23.155 "raid_level": "raid1", 00:15:23.155 "superblock": false, 00:15:23.155 "num_base_bdevs": 4, 00:15:23.155 "num_base_bdevs_discovered": 1, 00:15:23.155 "num_base_bdevs_operational": 4, 00:15:23.155 "base_bdevs_list": [ 00:15:23.155 { 00:15:23.155 "name": "BaseBdev1", 00:15:23.155 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:23.155 "is_configured": true, 00:15:23.155 "data_offset": 0, 00:15:23.155 "data_size": 65536 00:15:23.155 }, 00:15:23.155 { 00:15:23.155 "name": "BaseBdev2", 00:15:23.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.155 "is_configured": false, 00:15:23.155 "data_offset": 0, 00:15:23.155 "data_size": 0 00:15:23.155 }, 00:15:23.155 { 
00:15:23.155 "name": "BaseBdev3", 00:15:23.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.155 "is_configured": false, 00:15:23.155 "data_offset": 0, 00:15:23.155 "data_size": 0 00:15:23.155 }, 00:15:23.155 { 00:15:23.155 "name": "BaseBdev4", 00:15:23.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.155 "is_configured": false, 00:15:23.155 "data_offset": 0, 00:15:23.155 "data_size": 0 00:15:23.155 } 00:15:23.155 ] 00:15:23.155 }' 00:15:23.155 21:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.155 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.415 21:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.673 [2024-07-15 21:51:38.685933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.673 BaseBdev2 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:23.674 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.932 21:51:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.193 [ 00:15:24.193 { 00:15:24.193 "name": "BaseBdev2", 00:15:24.193 "aliases": [ 00:15:24.193 "69d5f57c-42f4-11ef-9f7f-e9a656123a8b" 00:15:24.193 ], 00:15:24.193 "product_name": "Malloc disk", 00:15:24.193 "block_size": 512, 00:15:24.193 "num_blocks": 65536, 00:15:24.193 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:24.193 "assigned_rate_limits": { 00:15:24.193 "rw_ios_per_sec": 0, 00:15:24.193 "rw_mbytes_per_sec": 0, 00:15:24.193 "r_mbytes_per_sec": 0, 00:15:24.193 "w_mbytes_per_sec": 0 00:15:24.193 }, 00:15:24.193 "claimed": true, 00:15:24.193 "claim_type": "exclusive_write", 00:15:24.193 "zoned": false, 00:15:24.193 "supported_io_types": { 00:15:24.193 "read": true, 00:15:24.193 "write": true, 00:15:24.193 "unmap": true, 00:15:24.193 "flush": true, 00:15:24.193 "reset": true, 00:15:24.193 "nvme_admin": false, 00:15:24.193 "nvme_io": false, 00:15:24.193 "nvme_io_md": false, 00:15:24.193 "write_zeroes": true, 00:15:24.193 "zcopy": true, 00:15:24.193 "get_zone_info": false, 00:15:24.193 "zone_management": false, 00:15:24.193 "zone_append": false, 00:15:24.193 "compare": false, 00:15:24.193 "compare_and_write": false, 00:15:24.193 "abort": true, 00:15:24.193 "seek_hole": false, 00:15:24.193 "seek_data": false, 00:15:24.193 "copy": true, 00:15:24.193 "nvme_iov_md": false 00:15:24.193 }, 00:15:24.193 "memory_domains": [ 00:15:24.193 { 00:15:24.193 "dma_device_id": "system", 00:15:24.193 "dma_device_type": 1 00:15:24.193 }, 00:15:24.193 { 00:15:24.193 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.193 "dma_device_type": 2 00:15:24.193 } 00:15:24.193 ], 00:15:24.193 "driver_specific": {} 00:15:24.193 } 00:15:24.193 ] 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.193 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.453 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.453 "name": "Existed_Raid", 00:15:24.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.453 "strip_size_kb": 0, 00:15:24.453 "state": "configuring", 00:15:24.453 "raid_level": "raid1", 00:15:24.453 "superblock": false, 00:15:24.453 "num_base_bdevs": 4, 00:15:24.453 "num_base_bdevs_discovered": 2, 00:15:24.453 "num_base_bdevs_operational": 4, 00:15:24.453 "base_bdevs_list": [ 00:15:24.453 { 00:15:24.453 "name": "BaseBdev1", 00:15:24.453 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:24.453 "is_configured": true, 00:15:24.453 "data_offset": 0, 00:15:24.453 "data_size": 65536 00:15:24.453 }, 00:15:24.453 { 00:15:24.453 "name": "BaseBdev2", 00:15:24.453 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:24.453 "is_configured": true, 00:15:24.453 "data_offset": 0, 00:15:24.453 "data_size": 65536 00:15:24.453 }, 00:15:24.453 { 00:15:24.453 "name": "BaseBdev3", 00:15:24.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.453 "is_configured": false, 00:15:24.453 "data_offset": 0, 00:15:24.453 "data_size": 0 00:15:24.453 }, 00:15:24.453 { 00:15:24.453 "name": "BaseBdev4", 00:15:24.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.453 "is_configured": false, 00:15:24.453 "data_offset": 0, 00:15:24.453 "data_size": 0 00:15:24.453 } 00:15:24.453 ] 00:15:24.453 }' 00:15:24.453 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.453 21:51:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.712 [2024-07-15 21:51:39.838026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.712 BaseBdev3 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:24.712 21:51:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.971 21:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.230 [ 00:15:25.230 { 00:15:25.230 "name": "BaseBdev3", 00:15:25.230 "aliases": [ 00:15:25.230 "6a85c263-42f4-11ef-9f7f-e9a656123a8b" 00:15:25.230 ], 00:15:25.230 "product_name": "Malloc disk", 00:15:25.230 "block_size": 512, 00:15:25.230 "num_blocks": 65536, 00:15:25.230 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:25.230 "assigned_rate_limits": { 00:15:25.230 "rw_ios_per_sec": 0, 00:15:25.230 "rw_mbytes_per_sec": 0, 00:15:25.230 "r_mbytes_per_sec": 0, 00:15:25.230 "w_mbytes_per_sec": 0 00:15:25.230 }, 00:15:25.230 "claimed": true, 00:15:25.230 "claim_type": "exclusive_write", 00:15:25.230 "zoned": false, 00:15:25.230 "supported_io_types": { 00:15:25.230 "read": true, 00:15:25.230 "write": true, 00:15:25.230 "unmap": true, 00:15:25.230 "flush": true, 00:15:25.230 "reset": true, 00:15:25.230 "nvme_admin": false, 00:15:25.230 "nvme_io": false, 00:15:25.230 "nvme_io_md": false, 00:15:25.230 "write_zeroes": true, 00:15:25.230 "zcopy": true, 00:15:25.230 "get_zone_info": false, 00:15:25.230 "zone_management": false, 00:15:25.230 "zone_append": false, 00:15:25.230 "compare": false, 00:15:25.230 "compare_and_write": false, 00:15:25.230 "abort": true, 00:15:25.230 "seek_hole": false, 00:15:25.230 "seek_data": false, 00:15:25.230 "copy": true, 00:15:25.230 "nvme_iov_md": false 00:15:25.230 }, 00:15:25.230 "memory_domains": [ 00:15:25.230 { 00:15:25.230 "dma_device_id": "system", 00:15:25.230 "dma_device_type": 1 00:15:25.230 }, 00:15:25.230 { 00:15:25.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.230 "dma_device_type": 2 00:15:25.230 } 00:15:25.230 ], 00:15:25.230 "driver_specific": {} 00:15:25.230 } 00:15:25.230 ] 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.230 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.489 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.489 "name": "Existed_Raid", 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.489 "strip_size_kb": 0, 00:15:25.489 "state": "configuring", 00:15:25.489 "raid_level": "raid1", 00:15:25.489 "superblock": false, 00:15:25.489 "num_base_bdevs": 4, 00:15:25.489 "num_base_bdevs_discovered": 3, 00:15:25.489 "num_base_bdevs_operational": 4, 00:15:25.489 "base_bdevs_list": [ 00:15:25.489 { 00:15:25.489 "name": "BaseBdev1", 00:15:25.489 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 0, 00:15:25.489 "data_size": 65536 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "BaseBdev2", 00:15:25.489 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 0, 00:15:25.489 "data_size": 65536 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "BaseBdev3", 00:15:25.489 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:25.489 "is_configured": true, 00:15:25.489 "data_offset": 0, 00:15:25.489 "data_size": 65536 00:15:25.489 }, 00:15:25.489 { 00:15:25.489 "name": "BaseBdev4", 00:15:25.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.489 "is_configured": false, 00:15:25.489 "data_offset": 0, 00:15:25.489 "data_size": 0 00:15:25.489 } 00:15:25.489 ] 00:15:25.489 }' 00:15:25.489 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.489 21:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.748 21:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.007 [2024-07-15 21:51:41.046052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.007 [2024-07-15 21:51:41.046077] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x49f62434a00 00:15:26.007 [2024-07-15 21:51:41.046098] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:26.007 
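The raid_bdev_configure_cont lines around this point show the array being assembled once its fourth base bdev is claimed; the verify_raid_bdev_state step that follows reduces to one jq select over the raid dump. A sketch under the same socket assumption, which prints "configuring" until every base bdev is configured and "online" afterwards:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Same filter the harness uses, narrowed to the state field.
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'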
[2024-07-15 21:51:41.046127] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x49f62497e20 00:15:26.007 [2024-07-15 21:51:41.046216] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x49f62434a00 00:15:26.007 [2024-07-15 21:51:41.046221] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x49f62434a00 00:15:26.007 [2024-07-15 21:51:41.046257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.007 BaseBdev4 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:26.007 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.266 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.569 [ 00:15:26.569 { 00:15:26.569 "name": "BaseBdev4", 00:15:26.569 "aliases": [ 00:15:26.569 "6b3e171f-42f4-11ef-9f7f-e9a656123a8b" 00:15:26.569 ], 00:15:26.569 "product_name": "Malloc disk", 00:15:26.569 "block_size": 512, 00:15:26.569 "num_blocks": 65536, 00:15:26.569 "uuid": "6b3e171f-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.569 "assigned_rate_limits": { 00:15:26.569 "rw_ios_per_sec": 0, 00:15:26.569 "rw_mbytes_per_sec": 0, 00:15:26.569 "r_mbytes_per_sec": 0, 00:15:26.569 "w_mbytes_per_sec": 0 00:15:26.569 }, 00:15:26.569 "claimed": true, 00:15:26.569 "claim_type": "exclusive_write", 00:15:26.569 "zoned": false, 00:15:26.569 "supported_io_types": { 00:15:26.569 "read": true, 00:15:26.569 "write": true, 00:15:26.569 "unmap": true, 00:15:26.569 "flush": true, 00:15:26.569 "reset": true, 00:15:26.569 "nvme_admin": false, 00:15:26.569 "nvme_io": false, 00:15:26.569 "nvme_io_md": false, 00:15:26.569 "write_zeroes": true, 00:15:26.569 "zcopy": true, 00:15:26.569 "get_zone_info": false, 00:15:26.569 "zone_management": false, 00:15:26.569 "zone_append": false, 00:15:26.569 "compare": false, 00:15:26.569 "compare_and_write": false, 00:15:26.569 "abort": true, 00:15:26.569 "seek_hole": false, 00:15:26.569 "seek_data": false, 00:15:26.569 "copy": true, 00:15:26.569 "nvme_iov_md": false 00:15:26.569 }, 00:15:26.569 "memory_domains": [ 00:15:26.569 { 00:15:26.569 "dma_device_id": "system", 00:15:26.569 "dma_device_type": 1 00:15:26.569 }, 00:15:26.569 { 00:15:26.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.569 "dma_device_type": 2 00:15:26.569 } 00:15:26.569 ], 00:15:26.569 "driver_specific": {} 00:15:26.569 } 00:15:26.569 ] 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.569 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.828 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.828 "name": "Existed_Raid", 00:15:26.828 "uuid": "6b3e1da6-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.828 "strip_size_kb": 0, 00:15:26.828 "state": "online", 00:15:26.828 "raid_level": "raid1", 00:15:26.828 "superblock": false, 00:15:26.828 "num_base_bdevs": 4, 00:15:26.828 "num_base_bdevs_discovered": 4, 00:15:26.828 "num_base_bdevs_operational": 4, 00:15:26.828 "base_bdevs_list": [ 00:15:26.828 { 00:15:26.828 "name": "BaseBdev1", 00:15:26.828 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 0, 00:15:26.828 "data_size": 65536 00:15:26.828 }, 00:15:26.828 { 00:15:26.828 "name": "BaseBdev2", 00:15:26.828 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 0, 00:15:26.828 "data_size": 65536 00:15:26.828 }, 00:15:26.828 { 00:15:26.828 "name": "BaseBdev3", 00:15:26.828 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 0, 00:15:26.828 "data_size": 65536 00:15:26.828 }, 00:15:26.828 { 00:15:26.828 "name": "BaseBdev4", 00:15:26.828 "uuid": "6b3e171f-42f4-11ef-9f7f-e9a656123a8b", 00:15:26.828 "is_configured": true, 00:15:26.828 "data_offset": 0, 00:15:26.828 "data_size": 65536 00:15:26.828 } 00:15:26.828 ] 00:15:26.828 }' 00:15:26.828 21:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.828 21:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:27.088 [2024-07-15 21:51:42.258044] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.088 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:27.088 "name": "Existed_Raid", 00:15:27.088 "aliases": [ 00:15:27.088 "6b3e1da6-42f4-11ef-9f7f-e9a656123a8b" 00:15:27.088 ], 00:15:27.088 "product_name": "Raid Volume", 00:15:27.088 "block_size": 512, 00:15:27.088 "num_blocks": 65536, 00:15:27.088 "uuid": "6b3e1da6-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "assigned_rate_limits": { 00:15:27.088 "rw_ios_per_sec": 0, 00:15:27.088 "rw_mbytes_per_sec": 0, 00:15:27.088 "r_mbytes_per_sec": 0, 00:15:27.088 "w_mbytes_per_sec": 0 00:15:27.088 }, 00:15:27.088 "claimed": false, 00:15:27.088 "zoned": false, 00:15:27.088 "supported_io_types": { 00:15:27.088 "read": true, 00:15:27.088 "write": true, 00:15:27.088 "unmap": false, 00:15:27.088 "flush": false, 00:15:27.088 "reset": true, 00:15:27.088 "nvme_admin": false, 00:15:27.088 "nvme_io": false, 00:15:27.088 "nvme_io_md": false, 00:15:27.088 "write_zeroes": true, 00:15:27.088 "zcopy": false, 00:15:27.088 "get_zone_info": false, 00:15:27.088 "zone_management": false, 00:15:27.088 "zone_append": false, 00:15:27.088 "compare": false, 00:15:27.088 "compare_and_write": false, 00:15:27.088 "abort": false, 00:15:27.088 "seek_hole": false, 00:15:27.088 "seek_data": false, 00:15:27.088 "copy": false, 00:15:27.088 "nvme_iov_md": false 00:15:27.088 }, 00:15:27.088 "memory_domains": [ 00:15:27.088 { 00:15:27.088 "dma_device_id": "system", 00:15:27.088 "dma_device_type": 1 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.088 "dma_device_type": 2 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "system", 00:15:27.088 "dma_device_type": 1 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.088 "dma_device_type": 2 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "system", 00:15:27.088 "dma_device_type": 1 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.088 "dma_device_type": 2 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "system", 00:15:27.088 "dma_device_type": 1 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.088 "dma_device_type": 2 00:15:27.088 } 00:15:27.088 ], 00:15:27.088 "driver_specific": { 00:15:27.088 "raid": { 00:15:27.088 "uuid": "6b3e1da6-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "strip_size_kb": 0, 00:15:27.088 "state": "online", 00:15:27.088 "raid_level": "raid1", 00:15:27.088 "superblock": false, 00:15:27.088 "num_base_bdevs": 4, 00:15:27.088 "num_base_bdevs_discovered": 4, 00:15:27.088 "num_base_bdevs_operational": 4, 00:15:27.088 "base_bdevs_list": [ 00:15:27.088 { 00:15:27.088 "name": "BaseBdev1", 00:15:27.088 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "is_configured": true, 00:15:27.088 "data_offset": 0, 00:15:27.088 
"data_size": 65536 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "name": "BaseBdev2", 00:15:27.088 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "is_configured": true, 00:15:27.088 "data_offset": 0, 00:15:27.088 "data_size": 65536 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "name": "BaseBdev3", 00:15:27.088 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "is_configured": true, 00:15:27.088 "data_offset": 0, 00:15:27.088 "data_size": 65536 00:15:27.088 }, 00:15:27.088 { 00:15:27.088 "name": "BaseBdev4", 00:15:27.088 "uuid": "6b3e171f-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.088 "is_configured": true, 00:15:27.088 "data_offset": 0, 00:15:27.088 "data_size": 65536 00:15:27.088 } 00:15:27.088 ] 00:15:27.088 } 00:15:27.088 } 00:15:27.088 }' 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:27.346 BaseBdev2 00:15:27.346 BaseBdev3 00:15:27.346 BaseBdev4' 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.346 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.346 "name": "BaseBdev1", 00:15:27.346 "aliases": [ 00:15:27.346 "688ecaf0-42f4-11ef-9f7f-e9a656123a8b" 00:15:27.346 ], 00:15:27.346 "product_name": "Malloc disk", 00:15:27.346 "block_size": 512, 00:15:27.346 "num_blocks": 65536, 00:15:27.346 "uuid": "688ecaf0-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.346 "assigned_rate_limits": { 00:15:27.346 "rw_ios_per_sec": 0, 00:15:27.346 "rw_mbytes_per_sec": 0, 00:15:27.346 "r_mbytes_per_sec": 0, 00:15:27.346 "w_mbytes_per_sec": 0 00:15:27.346 }, 00:15:27.346 "claimed": true, 00:15:27.346 "claim_type": "exclusive_write", 00:15:27.346 "zoned": false, 00:15:27.346 "supported_io_types": { 00:15:27.346 "read": true, 00:15:27.346 "write": true, 00:15:27.346 "unmap": true, 00:15:27.346 "flush": true, 00:15:27.346 "reset": true, 00:15:27.346 "nvme_admin": false, 00:15:27.346 "nvme_io": false, 00:15:27.346 "nvme_io_md": false, 00:15:27.346 "write_zeroes": true, 00:15:27.346 "zcopy": true, 00:15:27.346 "get_zone_info": false, 00:15:27.346 "zone_management": false, 00:15:27.346 "zone_append": false, 00:15:27.346 "compare": false, 00:15:27.346 "compare_and_write": false, 00:15:27.347 "abort": true, 00:15:27.347 "seek_hole": false, 00:15:27.347 "seek_data": false, 00:15:27.347 "copy": true, 00:15:27.347 "nvme_iov_md": false 00:15:27.347 }, 00:15:27.347 "memory_domains": [ 00:15:27.347 { 00:15:27.347 "dma_device_id": "system", 00:15:27.347 "dma_device_type": 1 00:15:27.347 }, 00:15:27.347 { 00:15:27.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.347 "dma_device_type": 2 00:15:27.347 } 00:15:27.347 ], 00:15:27.347 "driver_specific": {} 00:15:27.347 }' 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.347 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.606 "name": "BaseBdev2", 00:15:27.606 "aliases": [ 00:15:27.606 "69d5f57c-42f4-11ef-9f7f-e9a656123a8b" 00:15:27.606 ], 00:15:27.606 "product_name": "Malloc disk", 00:15:27.606 "block_size": 512, 00:15:27.606 "num_blocks": 65536, 00:15:27.606 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:27.606 "assigned_rate_limits": { 00:15:27.606 "rw_ios_per_sec": 0, 00:15:27.606 "rw_mbytes_per_sec": 0, 00:15:27.606 "r_mbytes_per_sec": 0, 00:15:27.606 "w_mbytes_per_sec": 0 00:15:27.606 }, 00:15:27.606 "claimed": true, 00:15:27.606 "claim_type": "exclusive_write", 00:15:27.606 "zoned": false, 00:15:27.606 "supported_io_types": { 00:15:27.606 "read": true, 00:15:27.606 "write": true, 00:15:27.606 "unmap": true, 00:15:27.606 "flush": true, 00:15:27.606 "reset": true, 00:15:27.606 "nvme_admin": false, 00:15:27.606 "nvme_io": false, 00:15:27.606 "nvme_io_md": false, 00:15:27.606 "write_zeroes": true, 00:15:27.606 "zcopy": true, 00:15:27.606 "get_zone_info": false, 00:15:27.606 "zone_management": false, 00:15:27.606 "zone_append": false, 00:15:27.606 "compare": false, 00:15:27.606 "compare_and_write": false, 00:15:27.606 "abort": true, 00:15:27.606 "seek_hole": false, 00:15:27.606 "seek_data": false, 00:15:27.606 "copy": true, 00:15:27.606 "nvme_iov_md": false 00:15:27.606 }, 00:15:27.606 "memory_domains": [ 00:15:27.606 { 00:15:27.606 "dma_device_id": "system", 00:15:27.606 "dma_device_type": 1 00:15:27.606 }, 00:15:27.606 { 00:15:27.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.606 "dma_device_type": 2 00:15:27.606 } 00:15:27.606 ], 00:15:27.606 "driver_specific": {} 00:15:27.606 }' 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.606 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
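The md_size, md_interleave and dif_type probes in this stretch all compare against null: the malloc disks in this test carry no metadata or DIF, so those keys are absent from the bdev_get_bdevs output and jq prints null for each. Equivalent one-liners, same assumptions as above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # All three print "null" for a plain malloc disk.
    $RPC bdev_get_bdevs -b BaseBdev2 | jq '.[].md_size'
    $RPC bdev_get_bdevs -b BaseBdev2 | jq '.[].md_interleave'
    $RPC bdev_get_bdevs -b BaseBdev2 | jq '.[].dif_type'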
00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:27.865 21:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.124 "name": "BaseBdev3", 00:15:28.124 "aliases": [ 00:15:28.124 "6a85c263-42f4-11ef-9f7f-e9a656123a8b" 00:15:28.124 ], 00:15:28.124 "product_name": "Malloc disk", 00:15:28.124 "block_size": 512, 00:15:28.124 "num_blocks": 65536, 00:15:28.124 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.124 "assigned_rate_limits": { 00:15:28.124 "rw_ios_per_sec": 0, 00:15:28.124 "rw_mbytes_per_sec": 0, 00:15:28.124 "r_mbytes_per_sec": 0, 00:15:28.124 "w_mbytes_per_sec": 0 00:15:28.124 }, 00:15:28.124 "claimed": true, 00:15:28.124 "claim_type": "exclusive_write", 00:15:28.124 "zoned": false, 00:15:28.124 "supported_io_types": { 00:15:28.124 "read": true, 00:15:28.124 "write": true, 00:15:28.124 "unmap": true, 00:15:28.124 "flush": true, 00:15:28.124 "reset": true, 00:15:28.124 "nvme_admin": false, 00:15:28.124 "nvme_io": false, 00:15:28.124 "nvme_io_md": false, 00:15:28.124 "write_zeroes": true, 00:15:28.124 "zcopy": true, 00:15:28.124 "get_zone_info": false, 00:15:28.124 "zone_management": false, 00:15:28.124 "zone_append": false, 00:15:28.124 "compare": false, 00:15:28.124 "compare_and_write": false, 00:15:28.124 "abort": true, 00:15:28.124 "seek_hole": false, 00:15:28.124 "seek_data": false, 00:15:28.124 "copy": true, 00:15:28.124 "nvme_iov_md": false 00:15:28.124 }, 00:15:28.124 "memory_domains": [ 00:15:28.124 { 00:15:28.124 "dma_device_id": "system", 00:15:28.124 "dma_device_type": 1 00:15:28.124 }, 00:15:28.124 { 00:15:28.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.124 "dma_device_type": 2 00:15:28.124 } 00:15:28.124 ], 00:15:28.124 "driver_specific": {} 00:15:28.124 }' 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.124 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:28.382 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.382 "name": "BaseBdev4", 00:15:28.382 "aliases": [ 00:15:28.382 "6b3e171f-42f4-11ef-9f7f-e9a656123a8b" 00:15:28.382 ], 00:15:28.382 "product_name": "Malloc disk", 00:15:28.382 "block_size": 512, 00:15:28.382 "num_blocks": 65536, 00:15:28.382 "uuid": "6b3e171f-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.382 "assigned_rate_limits": { 00:15:28.382 "rw_ios_per_sec": 0, 00:15:28.382 "rw_mbytes_per_sec": 0, 00:15:28.382 "r_mbytes_per_sec": 0, 00:15:28.382 "w_mbytes_per_sec": 0 00:15:28.382 }, 00:15:28.382 "claimed": true, 00:15:28.382 "claim_type": "exclusive_write", 00:15:28.382 "zoned": false, 00:15:28.382 "supported_io_types": { 00:15:28.382 "read": true, 00:15:28.382 "write": true, 00:15:28.382 "unmap": true, 00:15:28.382 "flush": true, 00:15:28.382 "reset": true, 00:15:28.382 "nvme_admin": false, 00:15:28.382 "nvme_io": false, 00:15:28.382 "nvme_io_md": false, 00:15:28.382 "write_zeroes": true, 00:15:28.382 "zcopy": true, 00:15:28.382 "get_zone_info": false, 00:15:28.382 "zone_management": false, 00:15:28.382 "zone_append": false, 00:15:28.382 "compare": false, 00:15:28.382 "compare_and_write": false, 00:15:28.382 "abort": true, 00:15:28.382 "seek_hole": false, 00:15:28.382 "seek_data": false, 00:15:28.382 "copy": true, 00:15:28.382 "nvme_iov_md": false 00:15:28.382 }, 00:15:28.382 "memory_domains": [ 00:15:28.382 { 00:15:28.382 "dma_device_id": "system", 00:15:28.382 "dma_device_type": 1 00:15:28.382 }, 00:15:28.382 { 00:15:28.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.382 "dma_device_type": 2 00:15:28.382 } 00:15:28.382 ], 00:15:28.383 "driver_specific": {} 00:15:28.383 }' 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.383 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:28.640 [2024-07-15 21:51:43.662047] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.640 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.641 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.899 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.899 "name": "Existed_Raid", 00:15:28.899 "uuid": "6b3e1da6-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.899 "strip_size_kb": 0, 00:15:28.899 "state": "online", 00:15:28.899 "raid_level": "raid1", 00:15:28.899 "superblock": false, 00:15:28.899 "num_base_bdevs": 4, 00:15:28.899 "num_base_bdevs_discovered": 3, 00:15:28.899 "num_base_bdevs_operational": 3, 00:15:28.899 "base_bdevs_list": [ 00:15:28.899 { 00:15:28.899 "name": null, 00:15:28.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.899 "is_configured": false, 00:15:28.899 "data_offset": 0, 00:15:28.899 "data_size": 65536 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "name": "BaseBdev2", 00:15:28.899 "uuid": "69d5f57c-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.899 "is_configured": true, 00:15:28.899 "data_offset": 0, 00:15:28.899 "data_size": 65536 
00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "name": "BaseBdev3", 00:15:28.899 "uuid": "6a85c263-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.899 "is_configured": true, 00:15:28.899 "data_offset": 0, 00:15:28.899 "data_size": 65536 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "name": "BaseBdev4", 00:15:28.899 "uuid": "6b3e171f-42f4-11ef-9f7f-e9a656123a8b", 00:15:28.899 "is_configured": true, 00:15:28.899 "data_offset": 0, 00:15:28.899 "data_size": 65536 00:15:28.899 } 00:15:28.899 ] 00:15:28.899 }' 00:15:28.899 21:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.899 21:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:29.157 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:29.157 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.157 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:29.415 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:29.415 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.415 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:29.673 [2024-07-15 21:51:44.619914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.673 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:29.673 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:29.673 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:29.673 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.931 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:29.931 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.931 21:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:30.189 [2024-07-15 21:51:45.137763] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.189 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:30.189 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:30.189 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.189 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:30.447 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:30.447 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:30.447 21:51:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:30.705 [2024-07-15 21:51:45.651478] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:30.705 [2024-07-15 21:51:45.651532] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.705 [2024-07-15 21:51:45.657444] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.705 [2024-07-15 21:51:45.657502] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.705 [2024-07-15 21:51:45.657523] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x49f62434a00 name Existed_Raid, state offline 00:15:30.705 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:30.705 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:30.705 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.705 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.963 21:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.963 BaseBdev2 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:30.963 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.222 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.481 [ 00:15:31.481 { 00:15:31.481 "name": "BaseBdev2", 00:15:31.481 "aliases": [ 00:15:31.481 "6e421bba-42f4-11ef-9f7f-e9a656123a8b" 00:15:31.481 ], 00:15:31.481 "product_name": "Malloc disk", 00:15:31.481 "block_size": 512, 00:15:31.481 "num_blocks": 65536, 00:15:31.481 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:31.481 "assigned_rate_limits": { 00:15:31.481 "rw_ios_per_sec": 0, 00:15:31.481 "rw_mbytes_per_sec": 0, 00:15:31.481 
"r_mbytes_per_sec": 0, 00:15:31.481 "w_mbytes_per_sec": 0 00:15:31.481 }, 00:15:31.481 "claimed": false, 00:15:31.481 "zoned": false, 00:15:31.481 "supported_io_types": { 00:15:31.481 "read": true, 00:15:31.481 "write": true, 00:15:31.481 "unmap": true, 00:15:31.481 "flush": true, 00:15:31.481 "reset": true, 00:15:31.481 "nvme_admin": false, 00:15:31.481 "nvme_io": false, 00:15:31.481 "nvme_io_md": false, 00:15:31.481 "write_zeroes": true, 00:15:31.481 "zcopy": true, 00:15:31.481 "get_zone_info": false, 00:15:31.481 "zone_management": false, 00:15:31.481 "zone_append": false, 00:15:31.481 "compare": false, 00:15:31.481 "compare_and_write": false, 00:15:31.481 "abort": true, 00:15:31.481 "seek_hole": false, 00:15:31.481 "seek_data": false, 00:15:31.481 "copy": true, 00:15:31.481 "nvme_iov_md": false 00:15:31.481 }, 00:15:31.481 "memory_domains": [ 00:15:31.481 { 00:15:31.481 "dma_device_id": "system", 00:15:31.481 "dma_device_type": 1 00:15:31.481 }, 00:15:31.481 { 00:15:31.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.481 "dma_device_type": 2 00:15:31.481 } 00:15:31.481 ], 00:15:31.481 "driver_specific": {} 00:15:31.481 } 00:15:31.481 ] 00:15:31.481 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:31.481 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:31.481 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:31.481 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.739 BaseBdev3 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:31.739 21:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.998 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:32.258 [ 00:15:32.258 { 00:15:32.258 "name": "BaseBdev3", 00:15:32.258 "aliases": [ 00:15:32.258 "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b" 00:15:32.258 ], 00:15:32.258 "product_name": "Malloc disk", 00:15:32.258 "block_size": 512, 00:15:32.258 "num_blocks": 65536, 00:15:32.258 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:32.258 "assigned_rate_limits": { 00:15:32.258 "rw_ios_per_sec": 0, 00:15:32.258 "rw_mbytes_per_sec": 0, 00:15:32.258 "r_mbytes_per_sec": 0, 00:15:32.258 "w_mbytes_per_sec": 0 00:15:32.258 }, 00:15:32.258 "claimed": false, 00:15:32.258 "zoned": false, 00:15:32.258 "supported_io_types": { 00:15:32.258 "read": true, 00:15:32.258 "write": true, 00:15:32.258 "unmap": true, 00:15:32.258 "flush": true, 00:15:32.258 "reset": true, 00:15:32.258 "nvme_admin": false, 
00:15:32.258 "nvme_io": false, 00:15:32.258 "nvme_io_md": false, 00:15:32.258 "write_zeroes": true, 00:15:32.258 "zcopy": true, 00:15:32.258 "get_zone_info": false, 00:15:32.258 "zone_management": false, 00:15:32.258 "zone_append": false, 00:15:32.258 "compare": false, 00:15:32.258 "compare_and_write": false, 00:15:32.258 "abort": true, 00:15:32.258 "seek_hole": false, 00:15:32.258 "seek_data": false, 00:15:32.258 "copy": true, 00:15:32.258 "nvme_iov_md": false 00:15:32.258 }, 00:15:32.258 "memory_domains": [ 00:15:32.258 { 00:15:32.258 "dma_device_id": "system", 00:15:32.258 "dma_device_type": 1 00:15:32.258 }, 00:15:32.258 { 00:15:32.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.258 "dma_device_type": 2 00:15:32.258 } 00:15:32.258 ], 00:15:32.258 "driver_specific": {} 00:15:32.258 } 00:15:32.258 ] 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:32.258 BaseBdev4 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:32.258 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.517 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:32.777 [ 00:15:32.777 { 00:15:32.777 "name": "BaseBdev4", 00:15:32.777 "aliases": [ 00:15:32.777 "6f091628-42f4-11ef-9f7f-e9a656123a8b" 00:15:32.777 ], 00:15:32.777 "product_name": "Malloc disk", 00:15:32.777 "block_size": 512, 00:15:32.777 "num_blocks": 65536, 00:15:32.777 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:32.777 "assigned_rate_limits": { 00:15:32.777 "rw_ios_per_sec": 0, 00:15:32.777 "rw_mbytes_per_sec": 0, 00:15:32.777 "r_mbytes_per_sec": 0, 00:15:32.777 "w_mbytes_per_sec": 0 00:15:32.777 }, 00:15:32.777 "claimed": false, 00:15:32.777 "zoned": false, 00:15:32.777 "supported_io_types": { 00:15:32.777 "read": true, 00:15:32.777 "write": true, 00:15:32.777 "unmap": true, 00:15:32.777 "flush": true, 00:15:32.777 "reset": true, 00:15:32.777 "nvme_admin": false, 00:15:32.777 "nvme_io": false, 00:15:32.777 "nvme_io_md": false, 00:15:32.777 "write_zeroes": true, 00:15:32.777 "zcopy": true, 00:15:32.777 "get_zone_info": false, 00:15:32.777 "zone_management": false, 00:15:32.777 "zone_append": false, 00:15:32.777 "compare": false, 00:15:32.777 "compare_and_write": false, 00:15:32.777 "abort": true, 
00:15:32.777 "seek_hole": false, 00:15:32.777 "seek_data": false, 00:15:32.777 "copy": true, 00:15:32.777 "nvme_iov_md": false 00:15:32.777 }, 00:15:32.777 "memory_domains": [ 00:15:32.777 { 00:15:32.777 "dma_device_id": "system", 00:15:32.777 "dma_device_type": 1 00:15:32.777 }, 00:15:32.777 { 00:15:32.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.777 "dma_device_type": 2 00:15:32.777 } 00:15:32.777 ], 00:15:32.777 "driver_specific": {} 00:15:32.777 } 00:15:32.777 ] 00:15:32.777 21:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:32.777 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:32.777 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:32.777 21:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:33.037 [2024-07-15 21:51:48.085501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.037 [2024-07-15 21:51:48.085569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.037 [2024-07-15 21:51:48.085594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.037 [2024-07-15 21:51:48.086252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.037 [2024-07-15 21:51:48.086275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.037 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.296 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.296 "name": "Existed_Raid", 00:15:33.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.296 "strip_size_kb": 0, 00:15:33.296 "state": "configuring", 00:15:33.296 "raid_level": "raid1", 00:15:33.296 "superblock": false, 00:15:33.296 "num_base_bdevs": 4, 00:15:33.296 
"num_base_bdevs_discovered": 3, 00:15:33.296 "num_base_bdevs_operational": 4, 00:15:33.296 "base_bdevs_list": [ 00:15:33.296 { 00:15:33.296 "name": "BaseBdev1", 00:15:33.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.296 "is_configured": false, 00:15:33.296 "data_offset": 0, 00:15:33.296 "data_size": 0 00:15:33.296 }, 00:15:33.296 { 00:15:33.296 "name": "BaseBdev2", 00:15:33.296 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:33.296 "is_configured": true, 00:15:33.296 "data_offset": 0, 00:15:33.296 "data_size": 65536 00:15:33.296 }, 00:15:33.296 { 00:15:33.296 "name": "BaseBdev3", 00:15:33.296 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:33.296 "is_configured": true, 00:15:33.296 "data_offset": 0, 00:15:33.296 "data_size": 65536 00:15:33.296 }, 00:15:33.296 { 00:15:33.296 "name": "BaseBdev4", 00:15:33.296 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:33.296 "is_configured": true, 00:15:33.296 "data_offset": 0, 00:15:33.296 "data_size": 65536 00:15:33.296 } 00:15:33.296 ] 00:15:33.296 }' 00:15:33.296 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.296 21:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.555 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:33.813 [2024-07-15 21:51:48.801552] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.813 21:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.072 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.072 "name": "Existed_Raid", 00:15:34.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.072 "strip_size_kb": 0, 00:15:34.072 "state": "configuring", 00:15:34.072 "raid_level": "raid1", 00:15:34.072 "superblock": false, 00:15:34.072 "num_base_bdevs": 4, 00:15:34.072 "num_base_bdevs_discovered": 2, 00:15:34.072 "num_base_bdevs_operational": 4, 00:15:34.072 "base_bdevs_list": [ 00:15:34.072 { 00:15:34.072 "name": 
"BaseBdev1", 00:15:34.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.072 "is_configured": false, 00:15:34.072 "data_offset": 0, 00:15:34.072 "data_size": 0 00:15:34.072 }, 00:15:34.072 { 00:15:34.072 "name": null, 00:15:34.072 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:34.072 "is_configured": false, 00:15:34.072 "data_offset": 0, 00:15:34.072 "data_size": 65536 00:15:34.072 }, 00:15:34.072 { 00:15:34.073 "name": "BaseBdev3", 00:15:34.073 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:34.073 "is_configured": true, 00:15:34.073 "data_offset": 0, 00:15:34.073 "data_size": 65536 00:15:34.073 }, 00:15:34.073 { 00:15:34.073 "name": "BaseBdev4", 00:15:34.073 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:34.073 "is_configured": true, 00:15:34.073 "data_offset": 0, 00:15:34.073 "data_size": 65536 00:15:34.073 } 00:15:34.073 ] 00:15:34.073 }' 00:15:34.073 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.073 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.331 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.331 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:34.331 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:34.331 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.590 [2024-07-15 21:51:49.697723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.590 BaseBdev1 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:34.590 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.891 21:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.158 [ 00:15:35.158 { 00:15:35.158 "name": "BaseBdev1", 00:15:35.158 "aliases": [ 00:15:35.158 "70663a5c-42f4-11ef-9f7f-e9a656123a8b" 00:15:35.158 ], 00:15:35.158 "product_name": "Malloc disk", 00:15:35.158 "block_size": 512, 00:15:35.158 "num_blocks": 65536, 00:15:35.158 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:35.158 "assigned_rate_limits": { 00:15:35.158 "rw_ios_per_sec": 0, 00:15:35.158 "rw_mbytes_per_sec": 0, 00:15:35.158 "r_mbytes_per_sec": 0, 00:15:35.158 "w_mbytes_per_sec": 0 00:15:35.158 }, 00:15:35.158 "claimed": true, 00:15:35.158 "claim_type": "exclusive_write", 00:15:35.158 "zoned": false, 
00:15:35.158 "supported_io_types": { 00:15:35.158 "read": true, 00:15:35.158 "write": true, 00:15:35.158 "unmap": true, 00:15:35.158 "flush": true, 00:15:35.158 "reset": true, 00:15:35.158 "nvme_admin": false, 00:15:35.158 "nvme_io": false, 00:15:35.158 "nvme_io_md": false, 00:15:35.158 "write_zeroes": true, 00:15:35.158 "zcopy": true, 00:15:35.158 "get_zone_info": false, 00:15:35.158 "zone_management": false, 00:15:35.158 "zone_append": false, 00:15:35.158 "compare": false, 00:15:35.158 "compare_and_write": false, 00:15:35.158 "abort": true, 00:15:35.158 "seek_hole": false, 00:15:35.158 "seek_data": false, 00:15:35.158 "copy": true, 00:15:35.158 "nvme_iov_md": false 00:15:35.158 }, 00:15:35.158 "memory_domains": [ 00:15:35.158 { 00:15:35.158 "dma_device_id": "system", 00:15:35.158 "dma_device_type": 1 00:15:35.158 }, 00:15:35.158 { 00:15:35.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.158 "dma_device_type": 2 00:15:35.158 } 00:15:35.158 ], 00:15:35.158 "driver_specific": {} 00:15:35.158 } 00:15:35.158 ] 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.158 "name": "Existed_Raid", 00:15:35.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.158 "strip_size_kb": 0, 00:15:35.158 "state": "configuring", 00:15:35.158 "raid_level": "raid1", 00:15:35.158 "superblock": false, 00:15:35.158 "num_base_bdevs": 4, 00:15:35.158 "num_base_bdevs_discovered": 3, 00:15:35.158 "num_base_bdevs_operational": 4, 00:15:35.158 "base_bdevs_list": [ 00:15:35.158 { 00:15:35.158 "name": "BaseBdev1", 00:15:35.158 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:35.158 "is_configured": true, 00:15:35.158 "data_offset": 0, 00:15:35.158 "data_size": 65536 00:15:35.158 }, 00:15:35.158 { 00:15:35.158 "name": null, 00:15:35.158 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:35.158 "is_configured": false, 00:15:35.158 "data_offset": 0, 00:15:35.158 "data_size": 65536 00:15:35.158 }, 
00:15:35.158 { 00:15:35.158 "name": "BaseBdev3", 00:15:35.158 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:35.158 "is_configured": true, 00:15:35.158 "data_offset": 0, 00:15:35.158 "data_size": 65536 00:15:35.158 }, 00:15:35.158 { 00:15:35.158 "name": "BaseBdev4", 00:15:35.158 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:35.158 "is_configured": true, 00:15:35.158 "data_offset": 0, 00:15:35.158 "data_size": 65536 00:15:35.158 } 00:15:35.158 ] 00:15:35.158 }' 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.158 21:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.416 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.416 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.675 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:35.675 21:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:35.934 [2024-07-15 21:51:51.009628] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.934 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.194 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.194 "name": "Existed_Raid", 00:15:36.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.194 "strip_size_kb": 0, 00:15:36.194 "state": "configuring", 00:15:36.194 "raid_level": "raid1", 00:15:36.194 "superblock": false, 00:15:36.194 "num_base_bdevs": 4, 00:15:36.194 "num_base_bdevs_discovered": 2, 00:15:36.194 "num_base_bdevs_operational": 4, 00:15:36.194 "base_bdevs_list": [ 00:15:36.194 { 00:15:36.194 "name": "BaseBdev1", 00:15:36.194 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:36.194 "is_configured": true, 00:15:36.194 "data_offset": 
0, 00:15:36.194 "data_size": 65536 00:15:36.194 }, 00:15:36.194 { 00:15:36.194 "name": null, 00:15:36.194 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:36.194 "is_configured": false, 00:15:36.194 "data_offset": 0, 00:15:36.194 "data_size": 65536 00:15:36.194 }, 00:15:36.194 { 00:15:36.194 "name": null, 00:15:36.194 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:36.194 "is_configured": false, 00:15:36.194 "data_offset": 0, 00:15:36.194 "data_size": 65536 00:15:36.194 }, 00:15:36.194 { 00:15:36.194 "name": "BaseBdev4", 00:15:36.194 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:36.194 "is_configured": true, 00:15:36.194 "data_offset": 0, 00:15:36.194 "data_size": 65536 00:15:36.194 } 00:15:36.194 ] 00:15:36.194 }' 00:15:36.194 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.194 21:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.454 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.454 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:36.713 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:36.713 21:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:36.972 [2024-07-15 21:51:51.989702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.972 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.231 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.231 "name": "Existed_Raid", 00:15:37.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.231 "strip_size_kb": 0, 00:15:37.231 "state": "configuring", 00:15:37.231 "raid_level": "raid1", 00:15:37.231 "superblock": false, 00:15:37.231 "num_base_bdevs": 4, 
00:15:37.231 "num_base_bdevs_discovered": 3, 00:15:37.231 "num_base_bdevs_operational": 4, 00:15:37.231 "base_bdevs_list": [ 00:15:37.231 { 00:15:37.231 "name": "BaseBdev1", 00:15:37.231 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:37.231 "is_configured": true, 00:15:37.231 "data_offset": 0, 00:15:37.231 "data_size": 65536 00:15:37.231 }, 00:15:37.231 { 00:15:37.231 "name": null, 00:15:37.231 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:37.231 "is_configured": false, 00:15:37.231 "data_offset": 0, 00:15:37.231 "data_size": 65536 00:15:37.231 }, 00:15:37.231 { 00:15:37.231 "name": "BaseBdev3", 00:15:37.231 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:37.231 "is_configured": true, 00:15:37.231 "data_offset": 0, 00:15:37.231 "data_size": 65536 00:15:37.231 }, 00:15:37.231 { 00:15:37.231 "name": "BaseBdev4", 00:15:37.231 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:37.231 "is_configured": true, 00:15:37.231 "data_offset": 0, 00:15:37.231 "data_size": 65536 00:15:37.231 } 00:15:37.231 ] 00:15:37.231 }' 00:15:37.231 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.231 21:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.490 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.490 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:37.749 [2024-07-15 21:51:52.893767] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.749 21:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.009 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:38.009 "name": "Existed_Raid", 00:15:38.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.009 "strip_size_kb": 0, 00:15:38.009 "state": "configuring", 00:15:38.009 "raid_level": "raid1", 00:15:38.009 "superblock": false, 00:15:38.009 "num_base_bdevs": 4, 00:15:38.009 "num_base_bdevs_discovered": 2, 00:15:38.009 "num_base_bdevs_operational": 4, 00:15:38.009 "base_bdevs_list": [ 00:15:38.009 { 00:15:38.009 "name": null, 00:15:38.009 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:38.009 "is_configured": false, 00:15:38.009 "data_offset": 0, 00:15:38.009 "data_size": 65536 00:15:38.009 }, 00:15:38.009 { 00:15:38.009 "name": null, 00:15:38.009 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:38.009 "is_configured": false, 00:15:38.009 "data_offset": 0, 00:15:38.009 "data_size": 65536 00:15:38.009 }, 00:15:38.009 { 00:15:38.009 "name": "BaseBdev3", 00:15:38.009 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:38.009 "is_configured": true, 00:15:38.009 "data_offset": 0, 00:15:38.009 "data_size": 65536 00:15:38.009 }, 00:15:38.009 { 00:15:38.009 "name": "BaseBdev4", 00:15:38.009 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:38.009 "is_configured": true, 00:15:38.009 "data_offset": 0, 00:15:38.009 "data_size": 65536 00:15:38.009 } 00:15:38.009 ] 00:15:38.009 }' 00:15:38.009 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.009 21:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.268 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.268 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.525 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:38.525 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:38.781 [2024-07-15 21:51:53.892596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:38.781 21:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.038 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.038 "name": "Existed_Raid", 00:15:39.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.038 "strip_size_kb": 0, 00:15:39.038 "state": "configuring", 00:15:39.038 "raid_level": "raid1", 00:15:39.038 "superblock": false, 00:15:39.038 "num_base_bdevs": 4, 00:15:39.038 "num_base_bdevs_discovered": 3, 00:15:39.038 "num_base_bdevs_operational": 4, 00:15:39.038 "base_bdevs_list": [ 00:15:39.038 { 00:15:39.038 "name": null, 00:15:39.038 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:39.038 "is_configured": false, 00:15:39.038 "data_offset": 0, 00:15:39.038 "data_size": 65536 00:15:39.038 }, 00:15:39.038 { 00:15:39.038 "name": "BaseBdev2", 00:15:39.038 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:39.038 "is_configured": true, 00:15:39.038 "data_offset": 0, 00:15:39.038 "data_size": 65536 00:15:39.038 }, 00:15:39.038 { 00:15:39.038 "name": "BaseBdev3", 00:15:39.038 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:39.038 "is_configured": true, 00:15:39.038 "data_offset": 0, 00:15:39.038 "data_size": 65536 00:15:39.038 }, 00:15:39.038 { 00:15:39.038 "name": "BaseBdev4", 00:15:39.038 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:39.038 "is_configured": true, 00:15:39.038 "data_offset": 0, 00:15:39.038 "data_size": 65536 00:15:39.038 } 00:15:39.038 ] 00:15:39.038 }' 00:15:39.038 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.038 21:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.295 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.295 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.554 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:39.554 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.554 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:39.812 21:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 70663a5c-42f4-11ef-9f7f-e9a656123a8b 00:15:40.070 [2024-07-15 21:51:55.076842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:40.070 [2024-07-15 21:51:55.076871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x49f62434f00 00:15:40.070 [2024-07-15 21:51:55.076892] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:40.070 [2024-07-15 21:51:55.076914] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x49f62497e20 00:15:40.070 [2024-07-15 21:51:55.076993] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x49f62434f00 00:15:40.070 [2024-07-15 21:51:55.076998] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x49f62434f00 00:15:40.070 [2024-07-15 21:51:55.077030] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.070 NewBaseBdev 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@893 -- # local i 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:40.070 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.328 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:40.587 [ 00:15:40.587 { 00:15:40.587 "name": "NewBaseBdev", 00:15:40.587 "aliases": [ 00:15:40.587 "70663a5c-42f4-11ef-9f7f-e9a656123a8b" 00:15:40.587 ], 00:15:40.587 "product_name": "Malloc disk", 00:15:40.587 "block_size": 512, 00:15:40.587 "num_blocks": 65536, 00:15:40.587 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "assigned_rate_limits": { 00:15:40.587 "rw_ios_per_sec": 0, 00:15:40.587 "rw_mbytes_per_sec": 0, 00:15:40.587 "r_mbytes_per_sec": 0, 00:15:40.587 "w_mbytes_per_sec": 0 00:15:40.587 }, 00:15:40.587 "claimed": true, 00:15:40.587 "claim_type": "exclusive_write", 00:15:40.587 "zoned": false, 00:15:40.587 "supported_io_types": { 00:15:40.587 "read": true, 00:15:40.587 "write": true, 00:15:40.587 "unmap": true, 00:15:40.587 "flush": true, 00:15:40.587 "reset": true, 00:15:40.587 "nvme_admin": false, 00:15:40.587 "nvme_io": false, 00:15:40.587 "nvme_io_md": false, 00:15:40.587 "write_zeroes": true, 00:15:40.587 "zcopy": true, 00:15:40.587 "get_zone_info": false, 00:15:40.587 "zone_management": false, 00:15:40.587 "zone_append": false, 00:15:40.587 "compare": false, 00:15:40.587 "compare_and_write": false, 00:15:40.587 "abort": true, 00:15:40.587 "seek_hole": false, 00:15:40.587 "seek_data": false, 00:15:40.587 "copy": true, 00:15:40.587 "nvme_iov_md": false 00:15:40.587 }, 00:15:40.587 "memory_domains": [ 00:15:40.587 { 00:15:40.587 "dma_device_id": "system", 00:15:40.587 "dma_device_type": 1 00:15:40.587 }, 00:15:40.587 { 00:15:40.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.587 "dma_device_type": 2 00:15:40.587 } 00:15:40.587 ], 00:15:40.587 "driver_specific": {} 00:15:40.587 } 00:15:40.587 ] 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # return 0 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.587 "name": "Existed_Raid", 00:15:40.587 "uuid": "739b0b54-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "strip_size_kb": 0, 00:15:40.587 "state": "online", 00:15:40.587 "raid_level": "raid1", 00:15:40.587 "superblock": false, 00:15:40.587 "num_base_bdevs": 4, 00:15:40.587 "num_base_bdevs_discovered": 4, 00:15:40.587 "num_base_bdevs_operational": 4, 00:15:40.587 "base_bdevs_list": [ 00:15:40.587 { 00:15:40.587 "name": "NewBaseBdev", 00:15:40.587 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "is_configured": true, 00:15:40.587 "data_offset": 0, 00:15:40.587 "data_size": 65536 00:15:40.587 }, 00:15:40.587 { 00:15:40.587 "name": "BaseBdev2", 00:15:40.587 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "is_configured": true, 00:15:40.587 "data_offset": 0, 00:15:40.587 "data_size": 65536 00:15:40.587 }, 00:15:40.587 { 00:15:40.587 "name": "BaseBdev3", 00:15:40.587 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "is_configured": true, 00:15:40.587 "data_offset": 0, 00:15:40.587 "data_size": 65536 00:15:40.587 }, 00:15:40.587 { 00:15:40.587 "name": "BaseBdev4", 00:15:40.587 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:40.587 "is_configured": true, 00:15:40.587 "data_offset": 0, 00:15:40.587 "data_size": 65536 00:15:40.587 } 00:15:40.587 ] 00:15:40.587 }' 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.587 21:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:40.846 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:41.104 [2024-07-15 21:51:56.228847] 
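Worth calling out from the step just completed: the array was brought back to four discovered members by recreating the removed disk under the raid member's recorded UUID (bdev_malloc_create 32 512 -b NewBaseBdev -u 70663a5c-42f4-11ef-9f7f-e9a656123a8b), which lets the raid module claim the new bdev during examine and move Existed_Raid from configuring to online. A minimal sketch of that recovery pattern, assuming the same rpc.py client and RPC socket used throughout this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Read the UUID recorded for the unconfigured first slot, then recreate the
  # malloc bdev under that UUID so the raid module re-claims it on examine.
  old_uuid=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
  $rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u "$old_uuid"
  $rpc -s $sock bdev_wait_for_examine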
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.104 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:41.104 "name": "Existed_Raid", 00:15:41.104 "aliases": [ 00:15:41.104 "739b0b54-42f4-11ef-9f7f-e9a656123a8b" 00:15:41.104 ], 00:15:41.104 "product_name": "Raid Volume", 00:15:41.104 "block_size": 512, 00:15:41.104 "num_blocks": 65536, 00:15:41.104 "uuid": "739b0b54-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.104 "assigned_rate_limits": { 00:15:41.104 "rw_ios_per_sec": 0, 00:15:41.104 "rw_mbytes_per_sec": 0, 00:15:41.104 "r_mbytes_per_sec": 0, 00:15:41.104 "w_mbytes_per_sec": 0 00:15:41.104 }, 00:15:41.104 "claimed": false, 00:15:41.104 "zoned": false, 00:15:41.104 "supported_io_types": { 00:15:41.104 "read": true, 00:15:41.104 "write": true, 00:15:41.104 "unmap": false, 00:15:41.104 "flush": false, 00:15:41.104 "reset": true, 00:15:41.104 "nvme_admin": false, 00:15:41.104 "nvme_io": false, 00:15:41.104 "nvme_io_md": false, 00:15:41.104 "write_zeroes": true, 00:15:41.104 "zcopy": false, 00:15:41.104 "get_zone_info": false, 00:15:41.104 "zone_management": false, 00:15:41.104 "zone_append": false, 00:15:41.104 "compare": false, 00:15:41.104 "compare_and_write": false, 00:15:41.104 "abort": false, 00:15:41.104 "seek_hole": false, 00:15:41.104 "seek_data": false, 00:15:41.104 "copy": false, 00:15:41.104 "nvme_iov_md": false 00:15:41.104 }, 00:15:41.104 "memory_domains": [ 00:15:41.104 { 00:15:41.104 "dma_device_id": "system", 00:15:41.104 "dma_device_type": 1 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.104 "dma_device_type": 2 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "system", 00:15:41.104 "dma_device_type": 1 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.104 "dma_device_type": 2 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "system", 00:15:41.104 "dma_device_type": 1 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.104 "dma_device_type": 2 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "system", 00:15:41.104 "dma_device_type": 1 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.104 "dma_device_type": 2 00:15:41.104 } 00:15:41.104 ], 00:15:41.104 "driver_specific": { 00:15:41.104 "raid": { 00:15:41.104 "uuid": "739b0b54-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.104 "strip_size_kb": 0, 00:15:41.104 "state": "online", 00:15:41.104 "raid_level": "raid1", 00:15:41.104 "superblock": false, 00:15:41.104 "num_base_bdevs": 4, 00:15:41.104 "num_base_bdevs_discovered": 4, 00:15:41.104 "num_base_bdevs_operational": 4, 00:15:41.104 "base_bdevs_list": [ 00:15:41.104 { 00:15:41.104 "name": "NewBaseBdev", 00:15:41.104 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.104 "is_configured": true, 00:15:41.104 "data_offset": 0, 00:15:41.104 "data_size": 65536 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "name": "BaseBdev2", 00:15:41.104 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.104 "is_configured": true, 00:15:41.104 "data_offset": 0, 00:15:41.104 "data_size": 65536 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "name": "BaseBdev3", 00:15:41.104 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.104 "is_configured": true, 00:15:41.104 "data_offset": 0, 00:15:41.104 "data_size": 65536 00:15:41.104 }, 00:15:41.104 { 00:15:41.104 "name": "BaseBdev4", 00:15:41.105 "uuid": 
"6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.105 "is_configured": true, 00:15:41.105 "data_offset": 0, 00:15:41.105 "data_size": 65536 00:15:41.105 } 00:15:41.105 ] 00:15:41.105 } 00:15:41.105 } 00:15:41.105 }' 00:15:41.105 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.105 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:41.105 BaseBdev2 00:15:41.105 BaseBdev3 00:15:41.105 BaseBdev4' 00:15:41.105 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.105 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:41.105 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.363 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.363 "name": "NewBaseBdev", 00:15:41.363 "aliases": [ 00:15:41.363 "70663a5c-42f4-11ef-9f7f-e9a656123a8b" 00:15:41.363 ], 00:15:41.363 "product_name": "Malloc disk", 00:15:41.363 "block_size": 512, 00:15:41.363 "num_blocks": 65536, 00:15:41.363 "uuid": "70663a5c-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.363 "assigned_rate_limits": { 00:15:41.363 "rw_ios_per_sec": 0, 00:15:41.363 "rw_mbytes_per_sec": 0, 00:15:41.363 "r_mbytes_per_sec": 0, 00:15:41.363 "w_mbytes_per_sec": 0 00:15:41.363 }, 00:15:41.363 "claimed": true, 00:15:41.363 "claim_type": "exclusive_write", 00:15:41.363 "zoned": false, 00:15:41.363 "supported_io_types": { 00:15:41.363 "read": true, 00:15:41.363 "write": true, 00:15:41.363 "unmap": true, 00:15:41.363 "flush": true, 00:15:41.363 "reset": true, 00:15:41.363 "nvme_admin": false, 00:15:41.363 "nvme_io": false, 00:15:41.363 "nvme_io_md": false, 00:15:41.363 "write_zeroes": true, 00:15:41.363 "zcopy": true, 00:15:41.363 "get_zone_info": false, 00:15:41.363 "zone_management": false, 00:15:41.363 "zone_append": false, 00:15:41.363 "compare": false, 00:15:41.363 "compare_and_write": false, 00:15:41.363 "abort": true, 00:15:41.363 "seek_hole": false, 00:15:41.363 "seek_data": false, 00:15:41.363 "copy": true, 00:15:41.363 "nvme_iov_md": false 00:15:41.363 }, 00:15:41.363 "memory_domains": [ 00:15:41.363 { 00:15:41.363 "dma_device_id": "system", 00:15:41.363 "dma_device_type": 1 00:15:41.363 }, 00:15:41.363 { 00:15:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.363 "dma_device_type": 2 00:15:41.363 } 00:15:41.363 ], 00:15:41.363 "driver_specific": {} 00:15:41.363 }' 00:15:41.363 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.363 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.363 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.364 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.364 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.364 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.364 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.364 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.622 21:51:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.622 "name": "BaseBdev2", 00:15:41.622 "aliases": [ 00:15:41.622 "6e421bba-42f4-11ef-9f7f-e9a656123a8b" 00:15:41.622 ], 00:15:41.622 "product_name": "Malloc disk", 00:15:41.622 "block_size": 512, 00:15:41.622 "num_blocks": 65536, 00:15:41.622 "uuid": "6e421bba-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.622 "assigned_rate_limits": { 00:15:41.622 "rw_ios_per_sec": 0, 00:15:41.622 "rw_mbytes_per_sec": 0, 00:15:41.622 "r_mbytes_per_sec": 0, 00:15:41.622 "w_mbytes_per_sec": 0 00:15:41.622 }, 00:15:41.622 "claimed": true, 00:15:41.622 "claim_type": "exclusive_write", 00:15:41.622 "zoned": false, 00:15:41.622 "supported_io_types": { 00:15:41.622 "read": true, 00:15:41.622 "write": true, 00:15:41.622 "unmap": true, 00:15:41.622 "flush": true, 00:15:41.622 "reset": true, 00:15:41.622 "nvme_admin": false, 00:15:41.622 "nvme_io": false, 00:15:41.622 "nvme_io_md": false, 00:15:41.622 "write_zeroes": true, 00:15:41.622 "zcopy": true, 00:15:41.622 "get_zone_info": false, 00:15:41.622 "zone_management": false, 00:15:41.622 "zone_append": false, 00:15:41.622 "compare": false, 00:15:41.622 "compare_and_write": false, 00:15:41.622 "abort": true, 00:15:41.622 "seek_hole": false, 00:15:41.622 "seek_data": false, 00:15:41.622 "copy": true, 00:15:41.622 "nvme_iov_md": false 00:15:41.622 }, 00:15:41.622 "memory_domains": [ 00:15:41.622 { 00:15:41.622 "dma_device_id": "system", 00:15:41.622 "dma_device_type": 1 00:15:41.622 }, 00:15:41.622 { 00:15:41.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.622 "dma_device_type": 2 00:15:41.622 } 00:15:41.622 ], 00:15:41.622 "driver_specific": {} 00:15:41.622 }' 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.622 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.881 
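For context on the checks above: verify_raid_bdev_properties first dumps the Raid Volume, then derives base_bdev_names by filtering driver_specific.raid.base_bdevs_list for entries with is_configured == true, and finally loops over each member. A hedged one-pipeline equivalent of that extraction, composing the two jq invocations shown in the log (same client and socket as this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Names of all configured members, one per line; here this yields
  # NewBaseBdev, BaseBdev2, BaseBdev3 and BaseBdev4.
  $rpc -s $sock bdev_get_bdevs -b Existed_Raid \
      | jq -r '.[].driver_specific.raid.base_bdevs_list[]
               | select(.is_configured == true).name'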
21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:41.881 21:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.881 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.881 "name": "BaseBdev3", 00:15:41.881 "aliases": [ 00:15:41.881 "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b" 00:15:41.881 ], 00:15:41.881 "product_name": "Malloc disk", 00:15:41.881 "block_size": 512, 00:15:41.881 "num_blocks": 65536, 00:15:41.881 "uuid": "6ea28bcc-42f4-11ef-9f7f-e9a656123a8b", 00:15:41.881 "assigned_rate_limits": { 00:15:41.881 "rw_ios_per_sec": 0, 00:15:41.881 "rw_mbytes_per_sec": 0, 00:15:41.881 "r_mbytes_per_sec": 0, 00:15:41.881 "w_mbytes_per_sec": 0 00:15:41.881 }, 00:15:41.881 "claimed": true, 00:15:41.881 "claim_type": "exclusive_write", 00:15:41.881 "zoned": false, 00:15:41.881 "supported_io_types": { 00:15:41.881 "read": true, 00:15:41.881 "write": true, 00:15:41.881 "unmap": true, 00:15:41.881 "flush": true, 00:15:41.881 "reset": true, 00:15:41.881 "nvme_admin": false, 00:15:41.881 "nvme_io": false, 00:15:41.881 "nvme_io_md": false, 00:15:41.881 "write_zeroes": true, 00:15:41.881 "zcopy": true, 00:15:41.881 "get_zone_info": false, 00:15:41.881 "zone_management": false, 00:15:41.881 "zone_append": false, 00:15:41.881 "compare": false, 00:15:41.881 "compare_and_write": false, 00:15:41.881 "abort": true, 00:15:41.881 "seek_hole": false, 00:15:41.881 "seek_data": false, 00:15:41.881 "copy": true, 00:15:41.881 "nvme_iov_md": false 00:15:41.881 }, 00:15:41.881 "memory_domains": [ 00:15:41.881 { 00:15:41.881 "dma_device_id": "system", 00:15:41.881 "dma_device_type": 1 00:15:41.881 }, 00:15:41.881 { 00:15:41.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.881 "dma_device_type": 2 00:15:41.881 } 00:15:41.881 ], 00:15:41.881 "driver_specific": {} 00:15:41.881 }' 00:15:41.881 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.881 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
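Each iteration of that loop runs the same four assertions seen here: block_size must be 512, while md_size, md_interleave and dif_type must all come back null, since Malloc disks carry neither separate metadata nor DIF. A compact sketch of one such check; check_base_bdev is an illustrative name, not the suite's helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Fetch one bdev's JSON and assert the properties the log verifies:
  # 512-byte blocks, no metadata region, no interleaving, no DIF.
  check_base_bdev() {
      local info
      info=$($rpc -s $sock bdev_get_bdevs -b "$1" | jq '.[]')
      [[ $(jq .block_size <<< "$info") == 512 ]] &&
      [[ $(jq .md_size <<< "$info") == null ]] &&
      [[ $(jq .md_interleave <<< "$info") == null ]] &&
      [[ $(jq .dif_type <<< "$info") == null ]]
  }

  check_base_bdev BaseBdev4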
00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:42.140 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:42.399 "name": "BaseBdev4", 00:15:42.399 "aliases": [ 00:15:42.399 "6f091628-42f4-11ef-9f7f-e9a656123a8b" 00:15:42.399 ], 00:15:42.399 "product_name": "Malloc disk", 00:15:42.399 "block_size": 512, 00:15:42.399 "num_blocks": 65536, 00:15:42.399 "uuid": "6f091628-42f4-11ef-9f7f-e9a656123a8b", 00:15:42.399 "assigned_rate_limits": { 00:15:42.399 "rw_ios_per_sec": 0, 00:15:42.399 "rw_mbytes_per_sec": 0, 00:15:42.399 "r_mbytes_per_sec": 0, 00:15:42.399 "w_mbytes_per_sec": 0 00:15:42.399 }, 00:15:42.399 "claimed": true, 00:15:42.399 "claim_type": "exclusive_write", 00:15:42.399 "zoned": false, 00:15:42.399 "supported_io_types": { 00:15:42.399 "read": true, 00:15:42.399 "write": true, 00:15:42.399 "unmap": true, 00:15:42.399 "flush": true, 00:15:42.399 "reset": true, 00:15:42.399 "nvme_admin": false, 00:15:42.399 "nvme_io": false, 00:15:42.399 "nvme_io_md": false, 00:15:42.399 "write_zeroes": true, 00:15:42.399 "zcopy": true, 00:15:42.399 "get_zone_info": false, 00:15:42.399 "zone_management": false, 00:15:42.399 "zone_append": false, 00:15:42.399 "compare": false, 00:15:42.399 "compare_and_write": false, 00:15:42.399 "abort": true, 00:15:42.399 "seek_hole": false, 00:15:42.399 "seek_data": false, 00:15:42.399 "copy": true, 00:15:42.399 "nvme_iov_md": false 00:15:42.399 }, 00:15:42.399 "memory_domains": [ 00:15:42.399 { 00:15:42.399 "dma_device_id": "system", 00:15:42.399 "dma_device_type": 1 00:15:42.399 }, 00:15:42.399 { 00:15:42.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.399 "dma_device_type": 2 00:15:42.399 } 00:15:42.399 ], 00:15:42.399 "driver_specific": {} 00:15:42.399 }' 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:42.399 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:42.657 
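The bdev_raid_delete call whose debug output follows drives the raid bdev from online to offline, releases its base bdevs, and frees the io device. A minimal teardown sketch under the same socket assumptions, with a follow-up check that the raid is really gone:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  $rpc -s $sock bdev_raid_delete Existed_Raid
  # jq -e sets its exit status by whether a matching entry was produced,
  # so the branch only fires if Existed_Raid is still listed.
  if $rpc -s $sock bdev_raid_get_bdevs all \
          | jq -e '.[] | select(.name == "Existed_Raid")' > /dev/null; then
      echo "Existed_Raid still present after delete" >&2
  fi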
[2024-07-15 21:51:57.660877] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.657 [2024-07-15 21:51:57.660900] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.657 [2024-07-15 21:51:57.660937] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.657 [2024-07-15 21:51:57.660998] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.657 [2024-07-15 21:51:57.661002] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x49f62434f00 name Existed_Raid, state offline 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62951 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@942 -- # '[' -z 62951 ']' 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # kill -0 62951 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # uname 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # ps -c -o command 62951 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # tail -1 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:15:42.657 killing process with pid 62951 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 62951' 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # kill 62951 00:15:42.657 [2024-07-15 21:51:57.685386] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.657 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # wait 62951 00:15:42.657 [2024-07-15 21:51:57.708971] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:42.974 00:15:42.974 real 0m23.857s 00:15:42.974 user 0m43.411s 00:15:42.974 sys 0m3.433s 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:42.974 ************************************ 00:15:42.974 END TEST raid_state_function_test 00:15:42.974 ************************************ 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.974 21:51:57 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:15:42.974 21:51:57 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:42.974 21:51:57 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:15:42.974 21:51:57 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:42.974 21:51:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.974 ************************************ 00:15:42.974 START TEST raid_state_function_test_sb 00:15:42.974 ************************************ 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1117 -- # 
raid_state_function_test raid1 4 true 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:42.974 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63758 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63758' 00:15:42.975 Process raid pid: 63758 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63758 /var/tmp/spdk-raid.sock 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@823 -- # '[' -z 63758 ']' 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:42.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:42.975 21:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.975 [2024-07-15 21:51:57.948768] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:42.975 [2024-07-15 21:51:57.948966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:43.566 EAL: TSC is not safe to use in SMP mode 00:15:43.566 EAL: TSC is not invariant 00:15:43.566 [2024-07-15 21:51:58.482199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.566 [2024-07-15 21:51:58.560728] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:43.566 [2024-07-15 21:51:58.562998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.566 [2024-07-15 21:51:58.563952] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.566 [2024-07-15 21:51:58.563966] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # return 0 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:44.131 [2024-07-15 21:51:59.217220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.131 [2024-07-15 21:51:59.217285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.131 [2024-07-15 21:51:59.217306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.131 [2024-07-15 21:51:59.217315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.131 [2024-07-15 21:51:59.217326] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.131 [2024-07-15 21:51:59.217334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.131 [2024-07-15 21:51:59.217337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:44.131 [2024-07-15 21:51:59.217344] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.131 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.132 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.132 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.389 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.389 "name": "Existed_Raid", 00:15:44.389 "uuid": "7612ce6b-42f4-11ef-9f7f-e9a656123a8b", 00:15:44.389 "strip_size_kb": 0, 00:15:44.389 "state": "configuring", 00:15:44.389 "raid_level": "raid1", 00:15:44.389 "superblock": true, 00:15:44.389 "num_base_bdevs": 4, 00:15:44.389 "num_base_bdevs_discovered": 0, 00:15:44.389 "num_base_bdevs_operational": 4, 00:15:44.389 "base_bdevs_list": [ 00:15:44.389 { 00:15:44.389 "name": "BaseBdev1", 00:15:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.389 "is_configured": false, 00:15:44.389 "data_offset": 0, 00:15:44.389 "data_size": 0 00:15:44.389 }, 00:15:44.389 { 00:15:44.389 "name": "BaseBdev2", 00:15:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.389 "is_configured": false, 00:15:44.389 "data_offset": 0, 00:15:44.389 "data_size": 0 00:15:44.389 }, 00:15:44.389 { 00:15:44.389 "name": "BaseBdev3", 00:15:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.389 "is_configured": false, 00:15:44.389 "data_offset": 0, 00:15:44.389 "data_size": 0 00:15:44.389 }, 00:15:44.389 { 00:15:44.389 "name": "BaseBdev4", 00:15:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.389 "is_configured": false, 00:15:44.389 "data_offset": 0, 00:15:44.389 "data_size": 0 00:15:44.389 } 00:15:44.389 ] 00:15:44.389 }' 00:15:44.389 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.389 21:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.648 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.905 [2024-07-15 21:51:59.985235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.905 [2024-07-15 
21:51:59.985285] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed371e34500 name Existed_Raid, state configuring 00:15:44.905 21:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:45.163 [2024-07-15 21:52:00.185240] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.163 [2024-07-15 21:52:00.185312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.163 [2024-07-15 21:52:00.185318] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.163 [2024-07-15 21:52:00.185343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.163 [2024-07-15 21:52:00.185346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.163 [2024-07-15 21:52:00.185353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.163 [2024-07-15 21:52:00.185356] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.163 [2024-07-15 21:52:00.185380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.163 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.420 [2024-07-15 21:52:00.402425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.420 BaseBdev1 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:45.420 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.678 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.936 [ 00:15:45.936 { 00:15:45.936 "name": "BaseBdev1", 00:15:45.936 "aliases": [ 00:15:45.936 "76c779e8-42f4-11ef-9f7f-e9a656123a8b" 00:15:45.936 ], 00:15:45.936 "product_name": "Malloc disk", 00:15:45.936 "block_size": 512, 00:15:45.936 "num_blocks": 65536, 00:15:45.936 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:45.936 "assigned_rate_limits": { 00:15:45.936 "rw_ios_per_sec": 0, 00:15:45.936 "rw_mbytes_per_sec": 0, 00:15:45.936 "r_mbytes_per_sec": 0, 00:15:45.936 "w_mbytes_per_sec": 0 00:15:45.936 }, 00:15:45.936 "claimed": true, 00:15:45.936 "claim_type": "exclusive_write", 00:15:45.936 "zoned": false, 00:15:45.936 "supported_io_types": { 00:15:45.936 "read": 
true, 00:15:45.936 "write": true, 00:15:45.936 "unmap": true, 00:15:45.936 "flush": true, 00:15:45.936 "reset": true, 00:15:45.936 "nvme_admin": false, 00:15:45.936 "nvme_io": false, 00:15:45.936 "nvme_io_md": false, 00:15:45.936 "write_zeroes": true, 00:15:45.936 "zcopy": true, 00:15:45.936 "get_zone_info": false, 00:15:45.936 "zone_management": false, 00:15:45.936 "zone_append": false, 00:15:45.936 "compare": false, 00:15:45.936 "compare_and_write": false, 00:15:45.936 "abort": true, 00:15:45.936 "seek_hole": false, 00:15:45.936 "seek_data": false, 00:15:45.936 "copy": true, 00:15:45.936 "nvme_iov_md": false 00:15:45.936 }, 00:15:45.936 "memory_domains": [ 00:15:45.936 { 00:15:45.936 "dma_device_id": "system", 00:15:45.936 "dma_device_type": 1 00:15:45.936 }, 00:15:45.936 { 00:15:45.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.936 "dma_device_type": 2 00:15:45.936 } 00:15:45.936 ], 00:15:45.936 "driver_specific": {} 00:15:45.936 } 00:15:45.936 ] 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.936 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.937 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.937 21:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.937 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.937 "name": "Existed_Raid", 00:15:45.937 "uuid": "76a683c0-42f4-11ef-9f7f-e9a656123a8b", 00:15:45.937 "strip_size_kb": 0, 00:15:45.937 "state": "configuring", 00:15:45.937 "raid_level": "raid1", 00:15:45.937 "superblock": true, 00:15:45.937 "num_base_bdevs": 4, 00:15:45.937 "num_base_bdevs_discovered": 1, 00:15:45.937 "num_base_bdevs_operational": 4, 00:15:45.937 "base_bdevs_list": [ 00:15:45.937 { 00:15:45.937 "name": "BaseBdev1", 00:15:45.937 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:45.937 "is_configured": true, 00:15:45.937 "data_offset": 2048, 00:15:45.937 "data_size": 63488 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "name": "BaseBdev2", 00:15:45.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.937 "is_configured": false, 00:15:45.937 "data_offset": 0, 00:15:45.937 "data_size": 0 00:15:45.937 }, 00:15:45.937 { 
00:15:45.937 "name": "BaseBdev3", 00:15:45.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.937 "is_configured": false, 00:15:45.937 "data_offset": 0, 00:15:45.937 "data_size": 0 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "name": "BaseBdev4", 00:15:45.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.937 "is_configured": false, 00:15:45.937 "data_offset": 0, 00:15:45.937 "data_size": 0 00:15:45.937 } 00:15:45.937 ] 00:15:45.937 }' 00:15:45.937 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.937 21:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.195 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.452 [2024-07-15 21:52:01.625322] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.452 [2024-07-15 21:52:01.625376] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed371e34500 name Existed_Raid, state configuring 00:15:46.710 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:46.710 [2024-07-15 21:52:01.885353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.710 [2024-07-15 21:52:01.886297] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.710 [2024-07-15 21:52:01.886375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.710 [2024-07-15 21:52:01.886397] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.710 [2024-07-15 21:52:01.886421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.710 [2024-07-15 21:52:01.886425] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.710 [2024-07-15 21:52:01.886448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.969 21:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.969 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.969 "name": "Existed_Raid", 00:15:46.969 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:46.969 "strip_size_kb": 0, 00:15:46.969 "state": "configuring", 00:15:46.969 "raid_level": "raid1", 00:15:46.969 "superblock": true, 00:15:46.969 "num_base_bdevs": 4, 00:15:46.969 "num_base_bdevs_discovered": 1, 00:15:46.969 "num_base_bdevs_operational": 4, 00:15:46.969 "base_bdevs_list": [ 00:15:46.969 { 00:15:46.969 "name": "BaseBdev1", 00:15:46.969 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:46.969 "is_configured": true, 00:15:46.969 "data_offset": 2048, 00:15:46.969 "data_size": 63488 00:15:46.969 }, 00:15:46.969 { 00:15:46.969 "name": "BaseBdev2", 00:15:46.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.969 "is_configured": false, 00:15:46.969 "data_offset": 0, 00:15:46.969 "data_size": 0 00:15:46.969 }, 00:15:46.969 { 00:15:46.969 "name": "BaseBdev3", 00:15:46.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.969 "is_configured": false, 00:15:46.969 "data_offset": 0, 00:15:46.969 "data_size": 0 00:15:46.969 }, 00:15:46.969 { 00:15:46.969 "name": "BaseBdev4", 00:15:46.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.969 "is_configured": false, 00:15:46.969 "data_offset": 0, 00:15:46.969 "data_size": 0 00:15:46.969 } 00:15:46.969 ] 00:15:46.969 }' 00:15:46.969 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.969 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.251 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.510 [2024-07-15 21:52:02.561547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.510 BaseBdev2 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:47.510 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.768 21:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
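
The pattern being traced here repeats once per base bdev: bdev_malloc_create a 32 MB / 512-byte-block disk (65536 blocks), bdev_wait_for_examine so the raid module can claim it, then dump it with bdev_get_bdevs and assert block_size/md_size/md_interleave/dif_type via jq. A rough standalone sketch of the same RPC sequence (a sketch, not the harness itself), assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and using only commands visible in this trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # With -s (superblock) the raid1 volume sits in "configuring" until all
    # four base bdevs exist and are claimed, then transitions to "online".
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MB total, 512 B blocks -> 65536 blocks
        $RPC bdev_wait_for_examine                       # let the raid module examine and claim it
    done
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: online

The bdev_get_bdevs dump that follows is that per-bdev check for BaseBdev2.
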
00:15:48.027 [ 00:15:48.027 { 00:15:48.027 "name": "BaseBdev2", 00:15:48.027 "aliases": [ 00:15:48.027 "7811160e-42f4-11ef-9f7f-e9a656123a8b" 00:15:48.027 ], 00:15:48.027 "product_name": "Malloc disk", 00:15:48.027 "block_size": 512, 00:15:48.027 "num_blocks": 65536, 00:15:48.027 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:48.027 "assigned_rate_limits": { 00:15:48.027 "rw_ios_per_sec": 0, 00:15:48.027 "rw_mbytes_per_sec": 0, 00:15:48.027 "r_mbytes_per_sec": 0, 00:15:48.027 "w_mbytes_per_sec": 0 00:15:48.027 }, 00:15:48.027 "claimed": true, 00:15:48.027 "claim_type": "exclusive_write", 00:15:48.027 "zoned": false, 00:15:48.027 "supported_io_types": { 00:15:48.027 "read": true, 00:15:48.027 "write": true, 00:15:48.027 "unmap": true, 00:15:48.027 "flush": true, 00:15:48.027 "reset": true, 00:15:48.027 "nvme_admin": false, 00:15:48.027 "nvme_io": false, 00:15:48.027 "nvme_io_md": false, 00:15:48.028 "write_zeroes": true, 00:15:48.028 "zcopy": true, 00:15:48.028 "get_zone_info": false, 00:15:48.028 "zone_management": false, 00:15:48.028 "zone_append": false, 00:15:48.028 "compare": false, 00:15:48.028 "compare_and_write": false, 00:15:48.028 "abort": true, 00:15:48.028 "seek_hole": false, 00:15:48.028 "seek_data": false, 00:15:48.028 "copy": true, 00:15:48.028 "nvme_iov_md": false 00:15:48.028 }, 00:15:48.028 "memory_domains": [ 00:15:48.028 { 00:15:48.028 "dma_device_id": "system", 00:15:48.028 "dma_device_type": 1 00:15:48.028 }, 00:15:48.028 { 00:15:48.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.028 "dma_device_type": 2 00:15:48.028 } 00:15:48.028 ], 00:15:48.028 "driver_specific": {} 00:15:48.028 } 00:15:48.028 ] 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.028 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.286 21:52:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.286 "name": "Existed_Raid", 00:15:48.286 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:48.286 "strip_size_kb": 0, 00:15:48.286 "state": "configuring", 00:15:48.286 "raid_level": "raid1", 00:15:48.286 "superblock": true, 00:15:48.286 "num_base_bdevs": 4, 00:15:48.286 "num_base_bdevs_discovered": 2, 00:15:48.286 "num_base_bdevs_operational": 4, 00:15:48.286 "base_bdevs_list": [ 00:15:48.286 { 00:15:48.286 "name": "BaseBdev1", 00:15:48.286 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:48.286 "is_configured": true, 00:15:48.286 "data_offset": 2048, 00:15:48.286 "data_size": 63488 00:15:48.286 }, 00:15:48.286 { 00:15:48.286 "name": "BaseBdev2", 00:15:48.286 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:48.286 "is_configured": true, 00:15:48.286 "data_offset": 2048, 00:15:48.286 "data_size": 63488 00:15:48.286 }, 00:15:48.286 { 00:15:48.286 "name": "BaseBdev3", 00:15:48.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.286 "is_configured": false, 00:15:48.286 "data_offset": 0, 00:15:48.286 "data_size": 0 00:15:48.286 }, 00:15:48.286 { 00:15:48.286 "name": "BaseBdev4", 00:15:48.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.286 "is_configured": false, 00:15:48.286 "data_offset": 0, 00:15:48.286 "data_size": 0 00:15:48.286 } 00:15:48.286 ] 00:15:48.286 }' 00:15:48.286 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.286 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.545 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.545 [2024-07-15 21:52:03.729557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.804 BaseBdev3 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:48.804 21:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:49.064 [ 00:15:49.064 { 00:15:49.064 "name": "BaseBdev3", 00:15:49.064 "aliases": [ 00:15:49.064 "78c3504a-42f4-11ef-9f7f-e9a656123a8b" 00:15:49.064 ], 00:15:49.064 "product_name": "Malloc disk", 00:15:49.064 "block_size": 512, 00:15:49.064 "num_blocks": 65536, 00:15:49.064 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:49.064 "assigned_rate_limits": { 00:15:49.064 "rw_ios_per_sec": 0, 00:15:49.064 "rw_mbytes_per_sec": 0, 00:15:49.064 "r_mbytes_per_sec": 0, 00:15:49.064 "w_mbytes_per_sec": 0 00:15:49.064 }, 00:15:49.064 
"claimed": true, 00:15:49.064 "claim_type": "exclusive_write", 00:15:49.064 "zoned": false, 00:15:49.064 "supported_io_types": { 00:15:49.064 "read": true, 00:15:49.064 "write": true, 00:15:49.064 "unmap": true, 00:15:49.064 "flush": true, 00:15:49.064 "reset": true, 00:15:49.064 "nvme_admin": false, 00:15:49.064 "nvme_io": false, 00:15:49.064 "nvme_io_md": false, 00:15:49.064 "write_zeroes": true, 00:15:49.064 "zcopy": true, 00:15:49.064 "get_zone_info": false, 00:15:49.064 "zone_management": false, 00:15:49.064 "zone_append": false, 00:15:49.064 "compare": false, 00:15:49.064 "compare_and_write": false, 00:15:49.064 "abort": true, 00:15:49.064 "seek_hole": false, 00:15:49.064 "seek_data": false, 00:15:49.064 "copy": true, 00:15:49.064 "nvme_iov_md": false 00:15:49.064 }, 00:15:49.064 "memory_domains": [ 00:15:49.064 { 00:15:49.064 "dma_device_id": "system", 00:15:49.064 "dma_device_type": 1 00:15:49.064 }, 00:15:49.064 { 00:15:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.064 "dma_device_type": 2 00:15:49.064 } 00:15:49.064 ], 00:15:49.064 "driver_specific": {} 00:15:49.064 } 00:15:49.064 ] 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.064 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.323 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.323 "name": "Existed_Raid", 00:15:49.323 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:49.323 "strip_size_kb": 0, 00:15:49.323 "state": "configuring", 00:15:49.323 "raid_level": "raid1", 00:15:49.323 "superblock": true, 00:15:49.323 "num_base_bdevs": 4, 00:15:49.323 "num_base_bdevs_discovered": 3, 00:15:49.323 "num_base_bdevs_operational": 4, 00:15:49.323 "base_bdevs_list": [ 00:15:49.323 { 00:15:49.323 "name": "BaseBdev1", 00:15:49.323 "uuid": 
"76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:49.323 "is_configured": true, 00:15:49.323 "data_offset": 2048, 00:15:49.323 "data_size": 63488 00:15:49.323 }, 00:15:49.323 { 00:15:49.323 "name": "BaseBdev2", 00:15:49.323 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:49.323 "is_configured": true, 00:15:49.323 "data_offset": 2048, 00:15:49.323 "data_size": 63488 00:15:49.323 }, 00:15:49.323 { 00:15:49.323 "name": "BaseBdev3", 00:15:49.323 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:49.323 "is_configured": true, 00:15:49.323 "data_offset": 2048, 00:15:49.323 "data_size": 63488 00:15:49.323 }, 00:15:49.323 { 00:15:49.323 "name": "BaseBdev4", 00:15:49.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.323 "is_configured": false, 00:15:49.323 "data_offset": 0, 00:15:49.323 "data_size": 0 00:15:49.323 } 00:15:49.323 ] 00:15:49.323 }' 00:15:49.323 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.323 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.581 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:49.840 [2024-07-15 21:52:04.981631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.840 [2024-07-15 21:52:04.981719] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ed371e34a00 00:15:49.840 [2024-07-15 21:52:04.981726] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.840 [2024-07-15 21:52:04.981748] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ed371e97e20 00:15:49.840 [2024-07-15 21:52:04.981826] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ed371e34a00 00:15:49.840 [2024-07-15 21:52:04.981832] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ed371e34a00 00:15:49.840 [2024-07-15 21:52:04.981857] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.840 BaseBdev4 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:49.840 21:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.098 21:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:50.358 [ 00:15:50.358 { 00:15:50.358 "name": "BaseBdev4", 00:15:50.358 "aliases": [ 00:15:50.358 "79825d0c-42f4-11ef-9f7f-e9a656123a8b" 00:15:50.358 ], 00:15:50.358 "product_name": "Malloc disk", 00:15:50.358 "block_size": 512, 00:15:50.358 "num_blocks": 65536, 
00:15:50.358 "uuid": "79825d0c-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.358 "assigned_rate_limits": { 00:15:50.358 "rw_ios_per_sec": 0, 00:15:50.358 "rw_mbytes_per_sec": 0, 00:15:50.358 "r_mbytes_per_sec": 0, 00:15:50.358 "w_mbytes_per_sec": 0 00:15:50.358 }, 00:15:50.358 "claimed": true, 00:15:50.358 "claim_type": "exclusive_write", 00:15:50.358 "zoned": false, 00:15:50.358 "supported_io_types": { 00:15:50.358 "read": true, 00:15:50.358 "write": true, 00:15:50.358 "unmap": true, 00:15:50.358 "flush": true, 00:15:50.358 "reset": true, 00:15:50.358 "nvme_admin": false, 00:15:50.358 "nvme_io": false, 00:15:50.358 "nvme_io_md": false, 00:15:50.358 "write_zeroes": true, 00:15:50.358 "zcopy": true, 00:15:50.358 "get_zone_info": false, 00:15:50.358 "zone_management": false, 00:15:50.358 "zone_append": false, 00:15:50.358 "compare": false, 00:15:50.358 "compare_and_write": false, 00:15:50.358 "abort": true, 00:15:50.358 "seek_hole": false, 00:15:50.358 "seek_data": false, 00:15:50.358 "copy": true, 00:15:50.358 "nvme_iov_md": false 00:15:50.359 }, 00:15:50.359 "memory_domains": [ 00:15:50.359 { 00:15:50.359 "dma_device_id": "system", 00:15:50.359 "dma_device_type": 1 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.359 "dma_device_type": 2 00:15:50.359 } 00:15:50.359 ], 00:15:50.359 "driver_specific": {} 00:15:50.359 } 00:15:50.359 ] 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.359 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.618 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.618 "name": "Existed_Raid", 00:15:50.618 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.618 "strip_size_kb": 0, 00:15:50.618 "state": "online", 00:15:50.618 "raid_level": "raid1", 00:15:50.618 "superblock": true, 
00:15:50.618 "num_base_bdevs": 4, 00:15:50.618 "num_base_bdevs_discovered": 4, 00:15:50.618 "num_base_bdevs_operational": 4, 00:15:50.618 "base_bdevs_list": [ 00:15:50.618 { 00:15:50.618 "name": "BaseBdev1", 00:15:50.618 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.618 "is_configured": true, 00:15:50.618 "data_offset": 2048, 00:15:50.618 "data_size": 63488 00:15:50.618 }, 00:15:50.618 { 00:15:50.618 "name": "BaseBdev2", 00:15:50.618 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.618 "is_configured": true, 00:15:50.618 "data_offset": 2048, 00:15:50.618 "data_size": 63488 00:15:50.618 }, 00:15:50.618 { 00:15:50.618 "name": "BaseBdev3", 00:15:50.618 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.618 "is_configured": true, 00:15:50.618 "data_offset": 2048, 00:15:50.618 "data_size": 63488 00:15:50.618 }, 00:15:50.618 { 00:15:50.618 "name": "BaseBdev4", 00:15:50.618 "uuid": "79825d0c-42f4-11ef-9f7f-e9a656123a8b", 00:15:50.618 "is_configured": true, 00:15:50.618 "data_offset": 2048, 00:15:50.618 "data_size": 63488 00:15:50.618 } 00:15:50.618 ] 00:15:50.618 }' 00:15:50.618 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.618 21:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.877 21:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:51.136 [2024-07-15 21:52:06.153527] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.136 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:51.136 "name": "Existed_Raid", 00:15:51.136 "aliases": [ 00:15:51.136 "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b" 00:15:51.136 ], 00:15:51.136 "product_name": "Raid Volume", 00:15:51.136 "block_size": 512, 00:15:51.136 "num_blocks": 63488, 00:15:51.136 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.136 "assigned_rate_limits": { 00:15:51.136 "rw_ios_per_sec": 0, 00:15:51.136 "rw_mbytes_per_sec": 0, 00:15:51.136 "r_mbytes_per_sec": 0, 00:15:51.136 "w_mbytes_per_sec": 0 00:15:51.136 }, 00:15:51.136 "claimed": false, 00:15:51.136 "zoned": false, 00:15:51.136 "supported_io_types": { 00:15:51.136 "read": true, 00:15:51.136 "write": true, 00:15:51.136 "unmap": false, 00:15:51.136 "flush": false, 00:15:51.136 "reset": true, 00:15:51.136 "nvme_admin": false, 00:15:51.136 "nvme_io": false, 00:15:51.136 "nvme_io_md": false, 00:15:51.136 "write_zeroes": true, 00:15:51.136 "zcopy": false, 00:15:51.136 "get_zone_info": false, 00:15:51.136 "zone_management": false, 00:15:51.136 "zone_append": false, 00:15:51.136 "compare": 
false, 00:15:51.136 "compare_and_write": false, 00:15:51.136 "abort": false, 00:15:51.136 "seek_hole": false, 00:15:51.136 "seek_data": false, 00:15:51.136 "copy": false, 00:15:51.136 "nvme_iov_md": false 00:15:51.136 }, 00:15:51.136 "memory_domains": [ 00:15:51.136 { 00:15:51.136 "dma_device_id": "system", 00:15:51.136 "dma_device_type": 1 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.136 "dma_device_type": 2 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "system", 00:15:51.136 "dma_device_type": 1 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.136 "dma_device_type": 2 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "system", 00:15:51.136 "dma_device_type": 1 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.136 "dma_device_type": 2 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "system", 00:15:51.136 "dma_device_type": 1 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.136 "dma_device_type": 2 00:15:51.136 } 00:15:51.136 ], 00:15:51.136 "driver_specific": { 00:15:51.136 "raid": { 00:15:51.136 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.136 "strip_size_kb": 0, 00:15:51.136 "state": "online", 00:15:51.136 "raid_level": "raid1", 00:15:51.136 "superblock": true, 00:15:51.136 "num_base_bdevs": 4, 00:15:51.136 "num_base_bdevs_discovered": 4, 00:15:51.136 "num_base_bdevs_operational": 4, 00:15:51.136 "base_bdevs_list": [ 00:15:51.136 { 00:15:51.136 "name": "BaseBdev1", 00:15:51.136 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.136 "is_configured": true, 00:15:51.136 "data_offset": 2048, 00:15:51.136 "data_size": 63488 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "name": "BaseBdev2", 00:15:51.136 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.136 "is_configured": true, 00:15:51.136 "data_offset": 2048, 00:15:51.136 "data_size": 63488 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "name": "BaseBdev3", 00:15:51.136 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.136 "is_configured": true, 00:15:51.136 "data_offset": 2048, 00:15:51.136 "data_size": 63488 00:15:51.136 }, 00:15:51.136 { 00:15:51.136 "name": "BaseBdev4", 00:15:51.136 "uuid": "79825d0c-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.137 "is_configured": true, 00:15:51.137 "data_offset": 2048, 00:15:51.137 "data_size": 63488 00:15:51.137 } 00:15:51.137 ] 00:15:51.137 } 00:15:51.137 } 00:15:51.137 }' 00:15:51.137 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.137 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:51.137 BaseBdev2 00:15:51.137 BaseBdev3 00:15:51.137 BaseBdev4' 00:15:51.137 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.137 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:51.137 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.397 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.397 "name": "BaseBdev1", 00:15:51.397 "aliases": [ 00:15:51.397 "76c779e8-42f4-11ef-9f7f-e9a656123a8b" 00:15:51.397 
], 00:15:51.397 "product_name": "Malloc disk", 00:15:51.397 "block_size": 512, 00:15:51.397 "num_blocks": 65536, 00:15:51.397 "uuid": "76c779e8-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.398 "assigned_rate_limits": { 00:15:51.398 "rw_ios_per_sec": 0, 00:15:51.398 "rw_mbytes_per_sec": 0, 00:15:51.398 "r_mbytes_per_sec": 0, 00:15:51.398 "w_mbytes_per_sec": 0 00:15:51.398 }, 00:15:51.398 "claimed": true, 00:15:51.398 "claim_type": "exclusive_write", 00:15:51.398 "zoned": false, 00:15:51.398 "supported_io_types": { 00:15:51.398 "read": true, 00:15:51.398 "write": true, 00:15:51.398 "unmap": true, 00:15:51.398 "flush": true, 00:15:51.398 "reset": true, 00:15:51.398 "nvme_admin": false, 00:15:51.398 "nvme_io": false, 00:15:51.398 "nvme_io_md": false, 00:15:51.398 "write_zeroes": true, 00:15:51.398 "zcopy": true, 00:15:51.398 "get_zone_info": false, 00:15:51.398 "zone_management": false, 00:15:51.398 "zone_append": false, 00:15:51.398 "compare": false, 00:15:51.398 "compare_and_write": false, 00:15:51.398 "abort": true, 00:15:51.398 "seek_hole": false, 00:15:51.398 "seek_data": false, 00:15:51.398 "copy": true, 00:15:51.398 "nvme_iov_md": false 00:15:51.398 }, 00:15:51.398 "memory_domains": [ 00:15:51.398 { 00:15:51.398 "dma_device_id": "system", 00:15:51.398 "dma_device_type": 1 00:15:51.398 }, 00:15:51.398 { 00:15:51.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.398 "dma_device_type": 2 00:15:51.398 } 00:15:51.398 ], 00:15:51.398 "driver_specific": {} 00:15:51.398 }' 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:51.398 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.656 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.656 "name": "BaseBdev2", 00:15:51.656 "aliases": [ 00:15:51.656 "7811160e-42f4-11ef-9f7f-e9a656123a8b" 00:15:51.656 ], 00:15:51.656 "product_name": "Malloc disk", 00:15:51.656 "block_size": 512, 00:15:51.656 "num_blocks": 65536, 00:15:51.656 "uuid": 
"7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.656 "assigned_rate_limits": { 00:15:51.656 "rw_ios_per_sec": 0, 00:15:51.656 "rw_mbytes_per_sec": 0, 00:15:51.656 "r_mbytes_per_sec": 0, 00:15:51.656 "w_mbytes_per_sec": 0 00:15:51.656 }, 00:15:51.656 "claimed": true, 00:15:51.656 "claim_type": "exclusive_write", 00:15:51.656 "zoned": false, 00:15:51.656 "supported_io_types": { 00:15:51.656 "read": true, 00:15:51.656 "write": true, 00:15:51.656 "unmap": true, 00:15:51.656 "flush": true, 00:15:51.656 "reset": true, 00:15:51.656 "nvme_admin": false, 00:15:51.656 "nvme_io": false, 00:15:51.656 "nvme_io_md": false, 00:15:51.656 "write_zeroes": true, 00:15:51.656 "zcopy": true, 00:15:51.656 "get_zone_info": false, 00:15:51.656 "zone_management": false, 00:15:51.656 "zone_append": false, 00:15:51.656 "compare": false, 00:15:51.656 "compare_and_write": false, 00:15:51.656 "abort": true, 00:15:51.656 "seek_hole": false, 00:15:51.656 "seek_data": false, 00:15:51.656 "copy": true, 00:15:51.656 "nvme_iov_md": false 00:15:51.656 }, 00:15:51.656 "memory_domains": [ 00:15:51.656 { 00:15:51.656 "dma_device_id": "system", 00:15:51.656 "dma_device_type": 1 00:15:51.656 }, 00:15:51.656 { 00:15:51.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.656 "dma_device_type": 2 00:15:51.656 } 00:15:51.656 ], 00:15:51.656 "driver_specific": {} 00:15:51.656 }' 00:15:51.656 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.656 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:51.657 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.916 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.916 "name": "BaseBdev3", 00:15:51.916 "aliases": [ 00:15:51.916 "78c3504a-42f4-11ef-9f7f-e9a656123a8b" 00:15:51.916 ], 00:15:51.916 "product_name": "Malloc disk", 00:15:51.916 "block_size": 512, 00:15:51.916 "num_blocks": 65536, 00:15:51.916 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:51.916 "assigned_rate_limits": { 00:15:51.916 "rw_ios_per_sec": 0, 00:15:51.916 "rw_mbytes_per_sec": 0, 
00:15:51.916 "r_mbytes_per_sec": 0, 00:15:51.916 "w_mbytes_per_sec": 0 00:15:51.916 }, 00:15:51.916 "claimed": true, 00:15:51.916 "claim_type": "exclusive_write", 00:15:51.916 "zoned": false, 00:15:51.916 "supported_io_types": { 00:15:51.916 "read": true, 00:15:51.916 "write": true, 00:15:51.916 "unmap": true, 00:15:51.916 "flush": true, 00:15:51.916 "reset": true, 00:15:51.916 "nvme_admin": false, 00:15:51.916 "nvme_io": false, 00:15:51.916 "nvme_io_md": false, 00:15:51.916 "write_zeroes": true, 00:15:51.916 "zcopy": true, 00:15:51.916 "get_zone_info": false, 00:15:51.916 "zone_management": false, 00:15:51.916 "zone_append": false, 00:15:51.916 "compare": false, 00:15:51.916 "compare_and_write": false, 00:15:51.916 "abort": true, 00:15:51.916 "seek_hole": false, 00:15:51.916 "seek_data": false, 00:15:51.916 "copy": true, 00:15:51.916 "nvme_iov_md": false 00:15:51.916 }, 00:15:51.916 "memory_domains": [ 00:15:51.916 { 00:15:51.916 "dma_device_id": "system", 00:15:51.916 "dma_device_type": 1 00:15:51.916 }, 00:15:51.916 { 00:15:51.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.916 "dma_device_type": 2 00:15:51.916 } 00:15:51.916 ], 00:15:51.916 "driver_specific": {} 00:15:51.916 }' 00:15:51.916 21:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:51.916 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.175 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.175 "name": "BaseBdev4", 00:15:52.175 "aliases": [ 00:15:52.175 "79825d0c-42f4-11ef-9f7f-e9a656123a8b" 00:15:52.175 ], 00:15:52.175 "product_name": "Malloc disk", 00:15:52.175 "block_size": 512, 00:15:52.175 "num_blocks": 65536, 00:15:52.175 "uuid": "79825d0c-42f4-11ef-9f7f-e9a656123a8b", 00:15:52.175 "assigned_rate_limits": { 00:15:52.175 "rw_ios_per_sec": 0, 00:15:52.175 "rw_mbytes_per_sec": 0, 00:15:52.175 "r_mbytes_per_sec": 0, 00:15:52.175 "w_mbytes_per_sec": 0 00:15:52.175 }, 00:15:52.175 "claimed": true, 00:15:52.175 "claim_type": 
"exclusive_write", 00:15:52.175 "zoned": false, 00:15:52.175 "supported_io_types": { 00:15:52.175 "read": true, 00:15:52.175 "write": true, 00:15:52.175 "unmap": true, 00:15:52.175 "flush": true, 00:15:52.175 "reset": true, 00:15:52.175 "nvme_admin": false, 00:15:52.175 "nvme_io": false, 00:15:52.175 "nvme_io_md": false, 00:15:52.176 "write_zeroes": true, 00:15:52.176 "zcopy": true, 00:15:52.176 "get_zone_info": false, 00:15:52.176 "zone_management": false, 00:15:52.176 "zone_append": false, 00:15:52.176 "compare": false, 00:15:52.176 "compare_and_write": false, 00:15:52.176 "abort": true, 00:15:52.176 "seek_hole": false, 00:15:52.176 "seek_data": false, 00:15:52.176 "copy": true, 00:15:52.176 "nvme_iov_md": false 00:15:52.176 }, 00:15:52.176 "memory_domains": [ 00:15:52.176 { 00:15:52.176 "dma_device_id": "system", 00:15:52.176 "dma_device_type": 1 00:15:52.176 }, 00:15:52.176 { 00:15:52.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.176 "dma_device_type": 2 00:15:52.176 } 00:15:52.176 ], 00:15:52.176 "driver_specific": {} 00:15:52.176 }' 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.176 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.435 [2024-07-15 21:52:07.589643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.435 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.004 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.004 "name": "Existed_Raid", 00:15:53.004 "uuid": "77a9ee3e-42f4-11ef-9f7f-e9a656123a8b", 00:15:53.004 "strip_size_kb": 0, 00:15:53.004 "state": "online", 00:15:53.004 "raid_level": "raid1", 00:15:53.004 "superblock": true, 00:15:53.004 "num_base_bdevs": 4, 00:15:53.004 "num_base_bdevs_discovered": 3, 00:15:53.004 "num_base_bdevs_operational": 3, 00:15:53.004 "base_bdevs_list": [ 00:15:53.004 { 00:15:53.004 "name": null, 00:15:53.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.004 "is_configured": false, 00:15:53.004 "data_offset": 2048, 00:15:53.005 "data_size": 63488 00:15:53.005 }, 00:15:53.005 { 00:15:53.005 "name": "BaseBdev2", 00:15:53.005 "uuid": "7811160e-42f4-11ef-9f7f-e9a656123a8b", 00:15:53.005 "is_configured": true, 00:15:53.005 "data_offset": 2048, 00:15:53.005 "data_size": 63488 00:15:53.005 }, 00:15:53.005 { 00:15:53.005 "name": "BaseBdev3", 00:15:53.005 "uuid": "78c3504a-42f4-11ef-9f7f-e9a656123a8b", 00:15:53.005 "is_configured": true, 00:15:53.005 "data_offset": 2048, 00:15:53.005 "data_size": 63488 00:15:53.005 }, 00:15:53.005 { 00:15:53.005 "name": "BaseBdev4", 00:15:53.005 "uuid": "79825d0c-42f4-11ef-9f7f-e9a656123a8b", 00:15:53.005 "is_configured": true, 00:15:53.005 "data_offset": 2048, 00:15:53.005 "data_size": 63488 00:15:53.005 } 00:15:53.005 ] 00:15:53.005 }' 00:15:53.005 21:52:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.005 21:52:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.005 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:53.005 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:53.005 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.005 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:53.265 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:53.265 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.265 21:52:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:53.524 [2024-07-15 21:52:08.644465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.524 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:53.524 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:53.524 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.524 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:53.783 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:53.783 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.783 21:52:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:54.042 [2024-07-15 21:52:09.082805] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.042 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.042 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.042 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.042 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:54.301 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:54.301 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.301 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:54.560 [2024-07-15 21:52:09.524993] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:54.560 [2024-07-15 21:52:09.525053] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.560 [2024-07-15 21:52:09.531301] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.560 [2024-07-15 21:52:09.531323] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.560 [2024-07-15 21:52:09.531327] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed371e34a00 name Existed_Raid, state offline 00:15:54.560 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.560 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.560 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.560 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
raid_bdev= 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:54.819 21:52:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:54.819 BaseBdev2 00:15:54.819 21:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:54.819 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.078 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:55.336 [ 00:15:55.336 { 00:15:55.336 "name": "BaseBdev2", 00:15:55.336 "aliases": [ 00:15:55.336 "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b" 00:15:55.336 ], 00:15:55.336 "product_name": "Malloc disk", 00:15:55.336 "block_size": 512, 00:15:55.336 "num_blocks": 65536, 00:15:55.336 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:15:55.336 "assigned_rate_limits": { 00:15:55.336 "rw_ios_per_sec": 0, 00:15:55.336 "rw_mbytes_per_sec": 0, 00:15:55.336 "r_mbytes_per_sec": 0, 00:15:55.336 "w_mbytes_per_sec": 0 00:15:55.336 }, 00:15:55.336 "claimed": false, 00:15:55.336 "zoned": false, 00:15:55.336 "supported_io_types": { 00:15:55.336 "read": true, 00:15:55.336 "write": true, 00:15:55.336 "unmap": true, 00:15:55.336 "flush": true, 00:15:55.336 "reset": true, 00:15:55.336 "nvme_admin": false, 00:15:55.336 "nvme_io": false, 00:15:55.336 "nvme_io_md": false, 00:15:55.336 "write_zeroes": true, 00:15:55.336 "zcopy": true, 00:15:55.336 "get_zone_info": false, 00:15:55.336 "zone_management": false, 00:15:55.336 "zone_append": false, 00:15:55.336 "compare": false, 00:15:55.336 "compare_and_write": false, 00:15:55.336 "abort": true, 00:15:55.336 "seek_hole": false, 00:15:55.336 "seek_data": false, 00:15:55.336 "copy": true, 00:15:55.336 "nvme_iov_md": false 00:15:55.336 }, 00:15:55.337 "memory_domains": [ 00:15:55.337 { 00:15:55.337 "dma_device_id": "system", 00:15:55.337 "dma_device_type": 1 00:15:55.337 }, 00:15:55.337 { 00:15:55.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.337 "dma_device_type": 2 00:15:55.337 } 00:15:55.337 ], 00:15:55.337 "driver_specific": {} 00:15:55.337 } 00:15:55.337 ] 00:15:55.337 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:55.337 21:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- 
# (( i++ )) 00:15:55.337 21:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:55.337 21:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:55.595 BaseBdev3 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev3 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:55.595 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.854 21:52:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.114 [ 00:15:56.114 { 00:15:56.114 "name": "BaseBdev3", 00:15:56.114 "aliases": [ 00:15:56.114 "7ceb759b-42f4-11ef-9f7f-e9a656123a8b" 00:15:56.114 ], 00:15:56.114 "product_name": "Malloc disk", 00:15:56.114 "block_size": 512, 00:15:56.114 "num_blocks": 65536, 00:15:56.114 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:15:56.114 "assigned_rate_limits": { 00:15:56.114 "rw_ios_per_sec": 0, 00:15:56.114 "rw_mbytes_per_sec": 0, 00:15:56.114 "r_mbytes_per_sec": 0, 00:15:56.114 "w_mbytes_per_sec": 0 00:15:56.114 }, 00:15:56.114 "claimed": false, 00:15:56.114 "zoned": false, 00:15:56.114 "supported_io_types": { 00:15:56.114 "read": true, 00:15:56.114 "write": true, 00:15:56.114 "unmap": true, 00:15:56.114 "flush": true, 00:15:56.114 "reset": true, 00:15:56.114 "nvme_admin": false, 00:15:56.114 "nvme_io": false, 00:15:56.114 "nvme_io_md": false, 00:15:56.114 "write_zeroes": true, 00:15:56.114 "zcopy": true, 00:15:56.114 "get_zone_info": false, 00:15:56.114 "zone_management": false, 00:15:56.114 "zone_append": false, 00:15:56.114 "compare": false, 00:15:56.114 "compare_and_write": false, 00:15:56.114 "abort": true, 00:15:56.114 "seek_hole": false, 00:15:56.114 "seek_data": false, 00:15:56.114 "copy": true, 00:15:56.114 "nvme_iov_md": false 00:15:56.114 }, 00:15:56.114 "memory_domains": [ 00:15:56.114 { 00:15:56.114 "dma_device_id": "system", 00:15:56.114 "dma_device_type": 1 00:15:56.114 }, 00:15:56.114 { 00:15:56.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.114 "dma_device_type": 2 00:15:56.114 } 00:15:56.114 ], 00:15:56.114 "driver_specific": {} 00:15:56.114 } 00:15:56.114 ] 00:15:56.114 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:56.114 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:56.114 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:56.114 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4 00:15:56.373 BaseBdev4 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev4 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:56.373 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.632 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.891 [ 00:15:56.891 { 00:15:56.891 "name": "BaseBdev4", 00:15:56.891 "aliases": [ 00:15:56.891 "7d533881-42f4-11ef-9f7f-e9a656123a8b" 00:15:56.891 ], 00:15:56.891 "product_name": "Malloc disk", 00:15:56.891 "block_size": 512, 00:15:56.891 "num_blocks": 65536, 00:15:56.891 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:15:56.891 "assigned_rate_limits": { 00:15:56.891 "rw_ios_per_sec": 0, 00:15:56.891 "rw_mbytes_per_sec": 0, 00:15:56.891 "r_mbytes_per_sec": 0, 00:15:56.891 "w_mbytes_per_sec": 0 00:15:56.891 }, 00:15:56.891 "claimed": false, 00:15:56.891 "zoned": false, 00:15:56.891 "supported_io_types": { 00:15:56.891 "read": true, 00:15:56.891 "write": true, 00:15:56.891 "unmap": true, 00:15:56.891 "flush": true, 00:15:56.891 "reset": true, 00:15:56.891 "nvme_admin": false, 00:15:56.891 "nvme_io": false, 00:15:56.891 "nvme_io_md": false, 00:15:56.891 "write_zeroes": true, 00:15:56.891 "zcopy": true, 00:15:56.891 "get_zone_info": false, 00:15:56.891 "zone_management": false, 00:15:56.891 "zone_append": false, 00:15:56.891 "compare": false, 00:15:56.891 "compare_and_write": false, 00:15:56.891 "abort": true, 00:15:56.891 "seek_hole": false, 00:15:56.891 "seek_data": false, 00:15:56.891 "copy": true, 00:15:56.891 "nvme_iov_md": false 00:15:56.891 }, 00:15:56.891 "memory_domains": [ 00:15:56.891 { 00:15:56.891 "dma_device_id": "system", 00:15:56.891 "dma_device_type": 1 00:15:56.891 }, 00:15:56.891 { 00:15:56.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.891 "dma_device_type": 2 00:15:56.891 } 00:15:56.891 ], 00:15:56.891 "driver_specific": {} 00:15:56.891 } 00:15:56.891 ] 00:15:56.891 21:52:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:56.891 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:56.891 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:56.891 21:52:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:56.891 [2024-07-15 21:52:12.063426] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.891 [2024-07-15 21:52:12.063492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:15:56.891 [2024-07-15 21:52:12.063502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.891 [2024-07-15 21:52:12.064177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.891 [2024-07-15 21:52:12.064201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.167 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.168 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.168 "name": "Existed_Raid", 00:15:57.168 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.168 "strip_size_kb": 0, 00:15:57.168 "state": "configuring", 00:15:57.168 "raid_level": "raid1", 00:15:57.168 "superblock": true, 00:15:57.168 "num_base_bdevs": 4, 00:15:57.168 "num_base_bdevs_discovered": 3, 00:15:57.168 "num_base_bdevs_operational": 4, 00:15:57.168 "base_bdevs_list": [ 00:15:57.168 { 00:15:57.168 "name": "BaseBdev1", 00:15:57.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.168 "is_configured": false, 00:15:57.168 "data_offset": 0, 00:15:57.168 "data_size": 0 00:15:57.168 }, 00:15:57.168 { 00:15:57.168 "name": "BaseBdev2", 00:15:57.168 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.168 "is_configured": true, 00:15:57.168 "data_offset": 2048, 00:15:57.168 "data_size": 63488 00:15:57.168 }, 00:15:57.168 { 00:15:57.168 "name": "BaseBdev3", 00:15:57.168 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.168 "is_configured": true, 00:15:57.168 "data_offset": 2048, 00:15:57.168 "data_size": 63488 00:15:57.168 }, 00:15:57.168 { 00:15:57.168 "name": "BaseBdev4", 00:15:57.168 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.168 "is_configured": true, 00:15:57.169 "data_offset": 2048, 00:15:57.169 "data_size": 63488 00:15:57.169 } 00:15:57.169 ] 00:15:57.169 }' 00:15:57.169 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.169 21:52:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
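The create-and-verify step traced above reduces to one RPC round-trip plus a jq filter: bdev_raid_get_bdevs all is issued against the test's dedicated socket and the Existed_Raid entry is selected from the result. A minimal sketch of that state check follows, using only the rpc.py path, socket, RPC method, and jq filter visible in the trace; it is a simplification, not the suite's verify_raid_bdev_state helper, which per the locals recorded above also tracks the expected raid_level, strip_size, and base-bdev counts.

  # Hedged sketch, not the suite's helper; paths and calls are from the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the full raid bdev list and keep only the Existed_Raid entry's state.
  state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid").state')
  # bdev_raid_create ran while BaseBdev1 did not exist yet, so the raid bdev
  # should still be assembling: "configuring", as asserted above.
  [[ "$state" == configuring ]] || echo "unexpected raid state: $state" >&2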
00:15:57.433 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:57.690 [2024-07-15 21:52:12.803436] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.690 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.691 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.691 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.691 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.691 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.691 21:52:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.948 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.948 "name": "Existed_Raid", 00:15:57.948 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.948 "strip_size_kb": 0, 00:15:57.948 "state": "configuring", 00:15:57.948 "raid_level": "raid1", 00:15:57.948 "superblock": true, 00:15:57.948 "num_base_bdevs": 4, 00:15:57.948 "num_base_bdevs_discovered": 2, 00:15:57.948 "num_base_bdevs_operational": 4, 00:15:57.948 "base_bdevs_list": [ 00:15:57.948 { 00:15:57.948 "name": "BaseBdev1", 00:15:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.948 "is_configured": false, 00:15:57.948 "data_offset": 0, 00:15:57.948 "data_size": 0 00:15:57.948 }, 00:15:57.948 { 00:15:57.948 "name": null, 00:15:57.948 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.948 "is_configured": false, 00:15:57.948 "data_offset": 2048, 00:15:57.948 "data_size": 63488 00:15:57.948 }, 00:15:57.948 { 00:15:57.948 "name": "BaseBdev3", 00:15:57.948 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.948 "is_configured": true, 00:15:57.948 "data_offset": 2048, 00:15:57.948 "data_size": 63488 00:15:57.948 }, 00:15:57.948 { 00:15:57.948 "name": "BaseBdev4", 00:15:57.948 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:15:57.948 "is_configured": true, 00:15:57.948 "data_offset": 2048, 00:15:57.948 "data_size": 63488 00:15:57.948 } 00:15:57.948 ] 00:15:57.948 }' 00:15:57.948 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.948 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.206 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.206 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:58.464 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:58.464 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.723 [2024-07-15 21:52:13.759596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.723 BaseBdev1 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:58.723 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.982 21:52:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.240 [ 00:15:59.240 { 00:15:59.240 "name": "BaseBdev1", 00:15:59.240 "aliases": [ 00:15:59.240 "7ebdc702-42f4-11ef-9f7f-e9a656123a8b" 00:15:59.240 ], 00:15:59.240 "product_name": "Malloc disk", 00:15:59.240 "block_size": 512, 00:15:59.240 "num_blocks": 65536, 00:15:59.240 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.240 "assigned_rate_limits": { 00:15:59.240 "rw_ios_per_sec": 0, 00:15:59.240 "rw_mbytes_per_sec": 0, 00:15:59.240 "r_mbytes_per_sec": 0, 00:15:59.240 "w_mbytes_per_sec": 0 00:15:59.240 }, 00:15:59.240 "claimed": true, 00:15:59.240 "claim_type": "exclusive_write", 00:15:59.240 "zoned": false, 00:15:59.240 "supported_io_types": { 00:15:59.240 "read": true, 00:15:59.240 "write": true, 00:15:59.240 "unmap": true, 00:15:59.240 "flush": true, 00:15:59.240 "reset": true, 00:15:59.240 "nvme_admin": false, 00:15:59.240 "nvme_io": false, 00:15:59.240 "nvme_io_md": false, 00:15:59.240 "write_zeroes": true, 00:15:59.240 "zcopy": true, 00:15:59.240 "get_zone_info": false, 00:15:59.240 "zone_management": false, 00:15:59.240 "zone_append": false, 00:15:59.240 "compare": false, 00:15:59.240 "compare_and_write": false, 00:15:59.240 "abort": true, 00:15:59.240 "seek_hole": false, 00:15:59.240 "seek_data": false, 00:15:59.240 "copy": true, 00:15:59.240 "nvme_iov_md": false 00:15:59.240 }, 00:15:59.240 "memory_domains": [ 00:15:59.240 { 00:15:59.240 "dma_device_id": "system", 00:15:59.240 "dma_device_type": 1 00:15:59.240 }, 00:15:59.240 { 00:15:59.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.240 "dma_device_type": 2 00:15:59.240 } 00:15:59.240 ], 00:15:59.240 "driver_specific": {} 00:15:59.240 } 00:15:59.240 ] 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:15:59.240 
21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.240 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.241 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.514 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.514 "name": "Existed_Raid", 00:15:59.514 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.514 "strip_size_kb": 0, 00:15:59.514 "state": "configuring", 00:15:59.514 "raid_level": "raid1", 00:15:59.514 "superblock": true, 00:15:59.514 "num_base_bdevs": 4, 00:15:59.514 "num_base_bdevs_discovered": 3, 00:15:59.514 "num_base_bdevs_operational": 4, 00:15:59.514 "base_bdevs_list": [ 00:15:59.514 { 00:15:59.514 "name": "BaseBdev1", 00:15:59.514 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.514 "is_configured": true, 00:15:59.514 "data_offset": 2048, 00:15:59.514 "data_size": 63488 00:15:59.514 }, 00:15:59.515 { 00:15:59.515 "name": null, 00:15:59.515 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.515 "is_configured": false, 00:15:59.515 "data_offset": 2048, 00:15:59.515 "data_size": 63488 00:15:59.515 }, 00:15:59.515 { 00:15:59.515 "name": "BaseBdev3", 00:15:59.515 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.515 "is_configured": true, 00:15:59.515 "data_offset": 2048, 00:15:59.515 "data_size": 63488 00:15:59.515 }, 00:15:59.515 { 00:15:59.515 "name": "BaseBdev4", 00:15:59.515 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:15:59.515 "is_configured": true, 00:15:59.515 "data_offset": 2048, 00:15:59.515 "data_size": 63488 00:15:59.515 } 00:15:59.515 ] 00:15:59.515 }' 00:15:59.515 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.515 21:52:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.772 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.772 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:59.772 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ 
true == \t\r\u\e ]] 00:15:59.773 21:52:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:00.030 [2024-07-15 21:52:15.211510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.288 "name": "Existed_Raid", 00:16:00.288 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:00.288 "strip_size_kb": 0, 00:16:00.288 "state": "configuring", 00:16:00.288 "raid_level": "raid1", 00:16:00.288 "superblock": true, 00:16:00.288 "num_base_bdevs": 4, 00:16:00.288 "num_base_bdevs_discovered": 2, 00:16:00.288 "num_base_bdevs_operational": 4, 00:16:00.288 "base_bdevs_list": [ 00:16:00.288 { 00:16:00.288 "name": "BaseBdev1", 00:16:00.288 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:00.288 "is_configured": true, 00:16:00.288 "data_offset": 2048, 00:16:00.288 "data_size": 63488 00:16:00.288 }, 00:16:00.288 { 00:16:00.288 "name": null, 00:16:00.288 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:00.288 "is_configured": false, 00:16:00.288 "data_offset": 2048, 00:16:00.288 "data_size": 63488 00:16:00.288 }, 00:16:00.288 { 00:16:00.288 "name": null, 00:16:00.288 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:00.288 "is_configured": false, 00:16:00.288 "data_offset": 2048, 00:16:00.288 "data_size": 63488 00:16:00.288 }, 00:16:00.288 { 00:16:00.288 "name": "BaseBdev4", 00:16:00.288 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:00.288 "is_configured": true, 00:16:00.288 "data_offset": 2048, 00:16:00.288 "data_size": 63488 00:16:00.288 } 00:16:00.288 ] 00:16:00.288 }' 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.288 21:52:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.546 21:52:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.546 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:00.805 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:00.805 21:52:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:01.064 [2024-07-15 21:52:16.147523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:01.064 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.065 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.323 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.324 "name": "Existed_Raid", 00:16:01.324 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:01.324 "strip_size_kb": 0, 00:16:01.324 "state": "configuring", 00:16:01.324 "raid_level": "raid1", 00:16:01.324 "superblock": true, 00:16:01.324 "num_base_bdevs": 4, 00:16:01.324 "num_base_bdevs_discovered": 3, 00:16:01.324 "num_base_bdevs_operational": 4, 00:16:01.324 "base_bdevs_list": [ 00:16:01.324 { 00:16:01.324 "name": "BaseBdev1", 00:16:01.324 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:01.324 "is_configured": true, 00:16:01.324 "data_offset": 2048, 00:16:01.324 "data_size": 63488 00:16:01.324 }, 00:16:01.324 { 00:16:01.324 "name": null, 00:16:01.324 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:01.324 "is_configured": false, 00:16:01.324 "data_offset": 2048, 00:16:01.324 "data_size": 63488 00:16:01.324 }, 00:16:01.324 { 00:16:01.324 "name": "BaseBdev3", 00:16:01.324 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:01.324 "is_configured": true, 00:16:01.324 "data_offset": 2048, 00:16:01.324 "data_size": 63488 00:16:01.324 }, 00:16:01.324 { 00:16:01.324 "name": "BaseBdev4", 00:16:01.324 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:01.324 "is_configured": true, 00:16:01.324 
"data_offset": 2048, 00:16:01.324 "data_size": 63488 00:16:01.324 } 00:16:01.324 ] 00:16:01.324 }' 00:16:01.324 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.324 21:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.582 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.583 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.841 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:01.841 21:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:02.101 [2024-07-15 21:52:17.191560] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.101 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.359 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.359 "name": "Existed_Raid", 00:16:02.359 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:02.359 "strip_size_kb": 0, 00:16:02.359 "state": "configuring", 00:16:02.359 "raid_level": "raid1", 00:16:02.359 "superblock": true, 00:16:02.359 "num_base_bdevs": 4, 00:16:02.359 "num_base_bdevs_discovered": 2, 00:16:02.359 "num_base_bdevs_operational": 4, 00:16:02.359 "base_bdevs_list": [ 00:16:02.359 { 00:16:02.359 "name": null, 00:16:02.359 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:02.359 "is_configured": false, 00:16:02.359 "data_offset": 2048, 00:16:02.359 "data_size": 63488 00:16:02.359 }, 00:16:02.359 { 00:16:02.359 "name": null, 00:16:02.359 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:02.359 "is_configured": false, 00:16:02.359 "data_offset": 2048, 00:16:02.359 "data_size": 63488 00:16:02.359 }, 00:16:02.359 { 00:16:02.359 "name": "BaseBdev3", 00:16:02.359 
"uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:02.359 "is_configured": true, 00:16:02.359 "data_offset": 2048, 00:16:02.359 "data_size": 63488 00:16:02.359 }, 00:16:02.359 { 00:16:02.359 "name": "BaseBdev4", 00:16:02.359 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:02.359 "is_configured": true, 00:16:02.359 "data_offset": 2048, 00:16:02.359 "data_size": 63488 00:16:02.359 } 00:16:02.359 ] 00:16:02.359 }' 00:16:02.359 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.359 21:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.618 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.618 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.876 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:02.876 21:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:03.135 [2024-07-15 21:52:18.158106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.135 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.393 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.393 "name": "Existed_Raid", 00:16:03.393 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:03.393 "strip_size_kb": 0, 00:16:03.393 "state": "configuring", 00:16:03.393 "raid_level": "raid1", 00:16:03.393 "superblock": true, 00:16:03.393 "num_base_bdevs": 4, 00:16:03.393 "num_base_bdevs_discovered": 3, 00:16:03.393 "num_base_bdevs_operational": 4, 00:16:03.393 "base_bdevs_list": [ 00:16:03.393 { 00:16:03.393 "name": null, 00:16:03.393 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:03.393 "is_configured": false, 
00:16:03.393 "data_offset": 2048, 00:16:03.393 "data_size": 63488 00:16:03.393 }, 00:16:03.393 { 00:16:03.393 "name": "BaseBdev2", 00:16:03.393 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:03.393 "is_configured": true, 00:16:03.393 "data_offset": 2048, 00:16:03.393 "data_size": 63488 00:16:03.393 }, 00:16:03.393 { 00:16:03.393 "name": "BaseBdev3", 00:16:03.393 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:03.393 "is_configured": true, 00:16:03.393 "data_offset": 2048, 00:16:03.393 "data_size": 63488 00:16:03.393 }, 00:16:03.393 { 00:16:03.393 "name": "BaseBdev4", 00:16:03.393 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:03.393 "is_configured": true, 00:16:03.393 "data_offset": 2048, 00:16:03.393 "data_size": 63488 00:16:03.393 } 00:16:03.393 ] 00:16:03.393 }' 00:16:03.393 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.393 21:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.652 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.652 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.911 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:03.911 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.911 21:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:04.169 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7ebdc702-42f4-11ef-9f7f-e9a656123a8b 00:16:04.427 [2024-07-15 21:52:19.442255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:04.427 [2024-07-15 21:52:19.442337] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ed371e34f00 00:16:04.427 [2024-07-15 21:52:19.442342] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.427 [2024-07-15 21:52:19.442361] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ed371e97e20 00:16:04.427 [2024-07-15 21:52:19.442408] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ed371e34f00 00:16:04.427 [2024-07-15 21:52:19.442413] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ed371e34f00 00:16:04.427 [2024-07-15 21:52:19.442432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.427 NewBaseBdev 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@891 -- # local bdev_name=NewBaseBdev 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@893 -- # local i 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:16:04.427 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.686 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:04.686 [ 00:16:04.686 { 00:16:04.686 "name": "NewBaseBdev", 00:16:04.686 "aliases": [ 00:16:04.686 "7ebdc702-42f4-11ef-9f7f-e9a656123a8b" 00:16:04.686 ], 00:16:04.686 "product_name": "Malloc disk", 00:16:04.686 "block_size": 512, 00:16:04.686 "num_blocks": 65536, 00:16:04.686 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.686 "assigned_rate_limits": { 00:16:04.686 "rw_ios_per_sec": 0, 00:16:04.686 "rw_mbytes_per_sec": 0, 00:16:04.686 "r_mbytes_per_sec": 0, 00:16:04.686 "w_mbytes_per_sec": 0 00:16:04.686 }, 00:16:04.686 "claimed": true, 00:16:04.686 "claim_type": "exclusive_write", 00:16:04.686 "zoned": false, 00:16:04.686 "supported_io_types": { 00:16:04.686 "read": true, 00:16:04.686 "write": true, 00:16:04.686 "unmap": true, 00:16:04.686 "flush": true, 00:16:04.686 "reset": true, 00:16:04.686 "nvme_admin": false, 00:16:04.686 "nvme_io": false, 00:16:04.686 "nvme_io_md": false, 00:16:04.686 "write_zeroes": true, 00:16:04.686 "zcopy": true, 00:16:04.686 "get_zone_info": false, 00:16:04.686 "zone_management": false, 00:16:04.686 "zone_append": false, 00:16:04.686 "compare": false, 00:16:04.686 "compare_and_write": false, 00:16:04.686 "abort": true, 00:16:04.686 "seek_hole": false, 00:16:04.686 "seek_data": false, 00:16:04.686 "copy": true, 00:16:04.686 "nvme_iov_md": false 00:16:04.686 }, 00:16:04.686 "memory_domains": [ 00:16:04.686 { 00:16:04.686 "dma_device_id": "system", 00:16:04.686 "dma_device_type": 1 00:16:04.686 }, 00:16:04.686 { 00:16:04.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.686 "dma_device_type": 2 00:16:04.686 } 00:16:04.686 ], 00:16:04.686 "driver_specific": {} 00:16:04.686 } 00:16:04.686 ] 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # return 0 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.944 21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.944 
21:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.944 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.944 "name": "Existed_Raid", 00:16:04.944 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.944 "strip_size_kb": 0, 00:16:04.944 "state": "online", 00:16:04.944 "raid_level": "raid1", 00:16:04.944 "superblock": true, 00:16:04.944 "num_base_bdevs": 4, 00:16:04.944 "num_base_bdevs_discovered": 4, 00:16:04.944 "num_base_bdevs_operational": 4, 00:16:04.944 "base_bdevs_list": [ 00:16:04.944 { 00:16:04.944 "name": "NewBaseBdev", 00:16:04.944 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.944 "is_configured": true, 00:16:04.944 "data_offset": 2048, 00:16:04.944 "data_size": 63488 00:16:04.944 }, 00:16:04.944 { 00:16:04.944 "name": "BaseBdev2", 00:16:04.944 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.944 "is_configured": true, 00:16:04.944 "data_offset": 2048, 00:16:04.944 "data_size": 63488 00:16:04.944 }, 00:16:04.944 { 00:16:04.944 "name": "BaseBdev3", 00:16:04.944 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.944 "is_configured": true, 00:16:04.944 "data_offset": 2048, 00:16:04.944 "data_size": 63488 00:16:04.944 }, 00:16:04.944 { 00:16:04.944 "name": "BaseBdev4", 00:16:04.944 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:04.944 "is_configured": true, 00:16:04.944 "data_offset": 2048, 00:16:04.944 "data_size": 63488 00:16:04.944 } 00:16:04.944 ] 00:16:04.944 }' 00:16:04.944 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.944 21:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:05.511 [2024-07-15 21:52:20.598203] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:05.511 "name": "Existed_Raid", 00:16:05.511 "aliases": [ 00:16:05.511 "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b" 00:16:05.511 ], 00:16:05.511 "product_name": "Raid Volume", 00:16:05.511 "block_size": 512, 00:16:05.511 "num_blocks": 63488, 00:16:05.511 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "assigned_rate_limits": { 00:16:05.511 "rw_ios_per_sec": 0, 00:16:05.511 "rw_mbytes_per_sec": 0, 00:16:05.511 "r_mbytes_per_sec": 0, 00:16:05.511 "w_mbytes_per_sec": 0 00:16:05.511 }, 00:16:05.511 
"claimed": false, 00:16:05.511 "zoned": false, 00:16:05.511 "supported_io_types": { 00:16:05.511 "read": true, 00:16:05.511 "write": true, 00:16:05.511 "unmap": false, 00:16:05.511 "flush": false, 00:16:05.511 "reset": true, 00:16:05.511 "nvme_admin": false, 00:16:05.511 "nvme_io": false, 00:16:05.511 "nvme_io_md": false, 00:16:05.511 "write_zeroes": true, 00:16:05.511 "zcopy": false, 00:16:05.511 "get_zone_info": false, 00:16:05.511 "zone_management": false, 00:16:05.511 "zone_append": false, 00:16:05.511 "compare": false, 00:16:05.511 "compare_and_write": false, 00:16:05.511 "abort": false, 00:16:05.511 "seek_hole": false, 00:16:05.511 "seek_data": false, 00:16:05.511 "copy": false, 00:16:05.511 "nvme_iov_md": false 00:16:05.511 }, 00:16:05.511 "memory_domains": [ 00:16:05.511 { 00:16:05.511 "dma_device_id": "system", 00:16:05.511 "dma_device_type": 1 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.511 "dma_device_type": 2 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "system", 00:16:05.511 "dma_device_type": 1 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.511 "dma_device_type": 2 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "system", 00:16:05.511 "dma_device_type": 1 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.511 "dma_device_type": 2 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "system", 00:16:05.511 "dma_device_type": 1 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.511 "dma_device_type": 2 00:16:05.511 } 00:16:05.511 ], 00:16:05.511 "driver_specific": { 00:16:05.511 "raid": { 00:16:05.511 "uuid": "7dbafb4b-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "strip_size_kb": 0, 00:16:05.511 "state": "online", 00:16:05.511 "raid_level": "raid1", 00:16:05.511 "superblock": true, 00:16:05.511 "num_base_bdevs": 4, 00:16:05.511 "num_base_bdevs_discovered": 4, 00:16:05.511 "num_base_bdevs_operational": 4, 00:16:05.511 "base_bdevs_list": [ 00:16:05.511 { 00:16:05.511 "name": "NewBaseBdev", 00:16:05.511 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "is_configured": true, 00:16:05.511 "data_offset": 2048, 00:16:05.511 "data_size": 63488 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "name": "BaseBdev2", 00:16:05.511 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "is_configured": true, 00:16:05.511 "data_offset": 2048, 00:16:05.511 "data_size": 63488 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "name": "BaseBdev3", 00:16:05.511 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "is_configured": true, 00:16:05.511 "data_offset": 2048, 00:16:05.511 "data_size": 63488 00:16:05.511 }, 00:16:05.511 { 00:16:05.511 "name": "BaseBdev4", 00:16:05.511 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.511 "is_configured": true, 00:16:05.511 "data_offset": 2048, 00:16:05.511 "data_size": 63488 00:16:05.511 } 00:16:05.511 ] 00:16:05.511 } 00:16:05.511 } 00:16:05.511 }' 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:05.511 BaseBdev2 00:16:05.511 BaseBdev3 00:16:05.511 BaseBdev4' 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:05.511 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.770 "name": "NewBaseBdev", 00:16:05.770 "aliases": [ 00:16:05.770 "7ebdc702-42f4-11ef-9f7f-e9a656123a8b" 00:16:05.770 ], 00:16:05.770 "product_name": "Malloc disk", 00:16:05.770 "block_size": 512, 00:16:05.770 "num_blocks": 65536, 00:16:05.770 "uuid": "7ebdc702-42f4-11ef-9f7f-e9a656123a8b", 00:16:05.770 "assigned_rate_limits": { 00:16:05.770 "rw_ios_per_sec": 0, 00:16:05.770 "rw_mbytes_per_sec": 0, 00:16:05.770 "r_mbytes_per_sec": 0, 00:16:05.770 "w_mbytes_per_sec": 0 00:16:05.770 }, 00:16:05.770 "claimed": true, 00:16:05.770 "claim_type": "exclusive_write", 00:16:05.770 "zoned": false, 00:16:05.770 "supported_io_types": { 00:16:05.770 "read": true, 00:16:05.770 "write": true, 00:16:05.770 "unmap": true, 00:16:05.770 "flush": true, 00:16:05.770 "reset": true, 00:16:05.770 "nvme_admin": false, 00:16:05.770 "nvme_io": false, 00:16:05.770 "nvme_io_md": false, 00:16:05.770 "write_zeroes": true, 00:16:05.770 "zcopy": true, 00:16:05.770 "get_zone_info": false, 00:16:05.770 "zone_management": false, 00:16:05.770 "zone_append": false, 00:16:05.770 "compare": false, 00:16:05.770 "compare_and_write": false, 00:16:05.770 "abort": true, 00:16:05.770 "seek_hole": false, 00:16:05.770 "seek_data": false, 00:16:05.770 "copy": true, 00:16:05.770 "nvme_iov_md": false 00:16:05.770 }, 00:16:05.770 "memory_domains": [ 00:16:05.770 { 00:16:05.770 "dma_device_id": "system", 00:16:05.770 "dma_device_type": 1 00:16:05.770 }, 00:16:05.770 { 00:16:05.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.770 "dma_device_type": 2 00:16:05.770 } 00:16:05.770 ], 00:16:05.770 "driver_specific": {} 00:16:05.770 }' 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.770 21:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.770 21:52:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.337 "name": "BaseBdev2", 00:16:06.337 "aliases": [ 00:16:06.337 "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b" 00:16:06.337 ], 00:16:06.337 "product_name": "Malloc disk", 00:16:06.337 "block_size": 512, 00:16:06.337 "num_blocks": 65536, 00:16:06.337 "uuid": "7c7e33dd-42f4-11ef-9f7f-e9a656123a8b", 00:16:06.337 "assigned_rate_limits": { 00:16:06.337 "rw_ios_per_sec": 0, 00:16:06.337 "rw_mbytes_per_sec": 0, 00:16:06.337 "r_mbytes_per_sec": 0, 00:16:06.337 "w_mbytes_per_sec": 0 00:16:06.337 }, 00:16:06.337 "claimed": true, 00:16:06.337 "claim_type": "exclusive_write", 00:16:06.337 "zoned": false, 00:16:06.337 "supported_io_types": { 00:16:06.337 "read": true, 00:16:06.337 "write": true, 00:16:06.337 "unmap": true, 00:16:06.337 "flush": true, 00:16:06.337 "reset": true, 00:16:06.337 "nvme_admin": false, 00:16:06.337 "nvme_io": false, 00:16:06.337 "nvme_io_md": false, 00:16:06.337 "write_zeroes": true, 00:16:06.337 "zcopy": true, 00:16:06.337 "get_zone_info": false, 00:16:06.337 "zone_management": false, 00:16:06.337 "zone_append": false, 00:16:06.337 "compare": false, 00:16:06.337 "compare_and_write": false, 00:16:06.337 "abort": true, 00:16:06.337 "seek_hole": false, 00:16:06.337 "seek_data": false, 00:16:06.337 "copy": true, 00:16:06.337 "nvme_iov_md": false 00:16:06.337 }, 00:16:06.337 "memory_domains": [ 00:16:06.337 { 00:16:06.337 "dma_device_id": "system", 00:16:06.337 "dma_device_type": 1 00:16:06.337 }, 00:16:06.337 { 00:16:06.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.337 "dma_device_type": 2 00:16:06.337 } 00:16:06.337 ], 00:16:06.337 "driver_specific": {} 00:16:06.337 }' 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:06.337 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.596 "name": "BaseBdev3", 00:16:06.596 "aliases": [ 00:16:06.596 "7ceb759b-42f4-11ef-9f7f-e9a656123a8b" 00:16:06.596 ], 00:16:06.596 "product_name": "Malloc disk", 00:16:06.596 "block_size": 512, 00:16:06.596 "num_blocks": 65536, 00:16:06.596 "uuid": "7ceb759b-42f4-11ef-9f7f-e9a656123a8b", 00:16:06.596 "assigned_rate_limits": { 00:16:06.596 "rw_ios_per_sec": 0, 00:16:06.596 "rw_mbytes_per_sec": 0, 00:16:06.596 "r_mbytes_per_sec": 0, 00:16:06.596 "w_mbytes_per_sec": 0 00:16:06.596 }, 00:16:06.596 "claimed": true, 00:16:06.596 "claim_type": "exclusive_write", 00:16:06.596 "zoned": false, 00:16:06.596 "supported_io_types": { 00:16:06.596 "read": true, 00:16:06.596 "write": true, 00:16:06.596 "unmap": true, 00:16:06.596 "flush": true, 00:16:06.596 "reset": true, 00:16:06.596 "nvme_admin": false, 00:16:06.596 "nvme_io": false, 00:16:06.596 "nvme_io_md": false, 00:16:06.596 "write_zeroes": true, 00:16:06.596 "zcopy": true, 00:16:06.596 "get_zone_info": false, 00:16:06.596 "zone_management": false, 00:16:06.596 "zone_append": false, 00:16:06.596 "compare": false, 00:16:06.596 "compare_and_write": false, 00:16:06.596 "abort": true, 00:16:06.596 "seek_hole": false, 00:16:06.596 "seek_data": false, 00:16:06.596 "copy": true, 00:16:06.596 "nvme_iov_md": false 00:16:06.596 }, 00:16:06.596 "memory_domains": [ 00:16:06.596 { 00:16:06.596 "dma_device_id": "system", 00:16:06.596 "dma_device_type": 1 00:16:06.596 }, 00:16:06.596 { 00:16:06.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.596 "dma_device_type": 2 00:16:06.596 } 00:16:06.596 ], 00:16:06.596 "driver_specific": {} 00:16:06.596 }' 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:06.596 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.854 "name": 
"BaseBdev4", 00:16:06.854 "aliases": [ 00:16:06.854 "7d533881-42f4-11ef-9f7f-e9a656123a8b" 00:16:06.854 ], 00:16:06.854 "product_name": "Malloc disk", 00:16:06.854 "block_size": 512, 00:16:06.854 "num_blocks": 65536, 00:16:06.854 "uuid": "7d533881-42f4-11ef-9f7f-e9a656123a8b", 00:16:06.854 "assigned_rate_limits": { 00:16:06.854 "rw_ios_per_sec": 0, 00:16:06.854 "rw_mbytes_per_sec": 0, 00:16:06.854 "r_mbytes_per_sec": 0, 00:16:06.854 "w_mbytes_per_sec": 0 00:16:06.854 }, 00:16:06.854 "claimed": true, 00:16:06.854 "claim_type": "exclusive_write", 00:16:06.854 "zoned": false, 00:16:06.854 "supported_io_types": { 00:16:06.854 "read": true, 00:16:06.854 "write": true, 00:16:06.854 "unmap": true, 00:16:06.854 "flush": true, 00:16:06.854 "reset": true, 00:16:06.854 "nvme_admin": false, 00:16:06.854 "nvme_io": false, 00:16:06.854 "nvme_io_md": false, 00:16:06.854 "write_zeroes": true, 00:16:06.854 "zcopy": true, 00:16:06.854 "get_zone_info": false, 00:16:06.854 "zone_management": false, 00:16:06.854 "zone_append": false, 00:16:06.854 "compare": false, 00:16:06.854 "compare_and_write": false, 00:16:06.854 "abort": true, 00:16:06.854 "seek_hole": false, 00:16:06.854 "seek_data": false, 00:16:06.854 "copy": true, 00:16:06.854 "nvme_iov_md": false 00:16:06.854 }, 00:16:06.854 "memory_domains": [ 00:16:06.854 { 00:16:06.854 "dma_device_id": "system", 00:16:06.854 "dma_device_type": 1 00:16:06.854 }, 00:16:06.854 { 00:16:06.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.854 "dma_device_type": 2 00:16:06.854 } 00:16:06.854 ], 00:16:06.854 "driver_specific": {} 00:16:06.854 }' 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.854 21:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.112 [2024-07-15 21:52:22.082311] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.112 [2024-07-15 21:52:22.082335] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.113 [2024-07-15 21:52:22.082373] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.113 [2024-07-15 21:52:22.082437] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:07.113 [2024-07-15 21:52:22.082441] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed371e34f00 name Existed_Raid, state offline 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63758 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@942 -- # '[' -z 63758 ']' 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # kill -0 63758 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # uname 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # tail -1 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # ps -c -o command 63758 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:16:07.113 killing process with pid 63758 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # echo 'killing process with pid 63758' 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # kill 63758 00:16:07.113 [2024-07-15 21:52:22.107969] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.113 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # wait 63758 00:16:07.113 [2024-07-15 21:52:22.133278] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.371 21:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:07.371 00:16:07.371 real 0m24.383s 00:16:07.371 user 0m44.086s 00:16:07.371 sys 0m3.819s 00:16:07.371 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:07.371 21:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.371 ************************************ 00:16:07.371 END TEST raid_state_function_test_sb 00:16:07.371 ************************************ 00:16:07.371 21:52:22 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:16:07.371 21:52:22 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:07.371 21:52:22 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:16:07.371 21:52:22 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:07.371 21:52:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.371 ************************************ 00:16:07.371 START TEST raid_superblock_test 00:16:07.371 ************************************ 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 4 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:07.371 21:52:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:07.371 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64564 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64564 /var/tmp/spdk-raid.sock 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@823 -- # '[' -z 64564 ']' 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:07.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.372 21:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:07.372 [2024-07-15 21:52:22.372635] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:07.372 [2024-07-15 21:52:22.372855] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:07.941 EAL: TSC is not safe to use in SMP mode 00:16:07.941 EAL: TSC is not invariant 00:16:07.941 [2024-07-15 21:52:22.910426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.941 [2024-07-15 21:52:23.003854] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
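At this point bdev_svc is up and listening on /var/tmp/spdk-raid.sock, and everything that follows is driven through SPDK's JSON-RPC client. A minimal by-hand sketch of the setup the next lines trace (same paths, socket and arguments as in the log; the rpc shorthand variable is illustrative, not part of the harness, and the harness additionally waits for the socket via its waitforlisten helper):

    # start the minimal bdev service with raid debug logging, then create
    # one 32 MB malloc disk with 512-byte blocks (65536 blocks, matching the
    # num_blocks seen in the dumps above) wrapped in a passthru bdev
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The trace below repeats this malloc/passthru pair for pt2 through pt4 before assembling the raid1 volume from all four passthru bdevs.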
00:16:07.941 [2024-07-15 21:52:23.006706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.941 [2024-07-15 21:52:23.007743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.941 [2024-07-15 21:52:23.007762] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # return 0 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.210 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:08.468 malloc1 00:16:08.468 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:08.726 [2024-07-15 21:52:23.780709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:08.726 [2024-07-15 21:52:23.780789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.726 [2024-07-15 21:52:23.780819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834780 00:16:08.726 [2024-07-15 21:52:23.780827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.726 [2024-07-15 21:52:23.781964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.726 [2024-07-15 21:52:23.782014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:08.726 pt1 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.726 21:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.726 21:52:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:08.983 malloc2 00:16:08.983 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.242 [2024-07-15 21:52:24.244739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.242 [2024-07-15 21:52:24.244804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.242 [2024-07-15 21:52:24.244833] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834c80 00:16:09.242 [2024-07-15 21:52:24.244841] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.242 [2024-07-15 21:52:24.245555] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.242 [2024-07-15 21:52:24.245587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.242 pt2 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.242 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:09.500 malloc3 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.500 [2024-07-15 21:52:24.664741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:09.500 [2024-07-15 21:52:24.664821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.500 [2024-07-15 21:52:24.664850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835180 00:16:09.500 [2024-07-15 21:52:24.664857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.500 [2024-07-15 21:52:24.665584] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.500 [2024-07-15 21:52:24.665615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.500 pt3 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.500 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.501 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.501 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:09.759 malloc4 00:16:09.759 21:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:10.018 [2024-07-15 21:52:25.092748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:10.018 [2024-07-15 21:52:25.092832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.018 [2024-07-15 21:52:25.092861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835680 00:16:10.018 [2024-07-15 21:52:25.092869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.018 [2024-07-15 21:52:25.093595] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.018 [2024-07-15 21:52:25.093628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:10.018 pt4 00:16:10.018 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:10.018 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:10.018 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:10.277 [2024-07-15 21:52:25.348772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:10.277 [2024-07-15 21:52:25.349451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.277 [2024-07-15 21:52:25.349481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:10.277 [2024-07-15 21:52:25.349493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:10.277 [2024-07-15 21:52:25.349550] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34466d835900 00:16:10.277 [2024-07-15 21:52:25.349557] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.277 [2024-07-15 21:52:25.349588] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34466d897e20 00:16:10.277 [2024-07-15 21:52:25.349684] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34466d835900 00:16:10.277 [2024-07-15 21:52:25.349690] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34466d835900 00:16:10.277 [2024-07-15 21:52:25.349734] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.277 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.536 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.536 "name": "raid_bdev1", 00:16:10.536 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:10.536 "strip_size_kb": 0, 00:16:10.536 "state": "online", 00:16:10.536 "raid_level": "raid1", 00:16:10.536 "superblock": true, 00:16:10.536 "num_base_bdevs": 4, 00:16:10.536 "num_base_bdevs_discovered": 4, 00:16:10.536 "num_base_bdevs_operational": 4, 00:16:10.536 "base_bdevs_list": [ 00:16:10.536 { 00:16:10.536 "name": "pt1", 00:16:10.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.536 "is_configured": true, 00:16:10.536 "data_offset": 2048, 00:16:10.536 "data_size": 63488 00:16:10.536 }, 00:16:10.536 { 00:16:10.536 "name": "pt2", 00:16:10.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.536 "is_configured": true, 00:16:10.536 "data_offset": 2048, 00:16:10.536 "data_size": 63488 00:16:10.536 }, 00:16:10.536 { 00:16:10.536 "name": "pt3", 00:16:10.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.536 "is_configured": true, 00:16:10.536 "data_offset": 2048, 00:16:10.536 "data_size": 63488 00:16:10.536 }, 00:16:10.536 { 00:16:10.536 "name": "pt4", 00:16:10.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.536 "is_configured": true, 00:16:10.536 "data_offset": 2048, 00:16:10.536 "data_size": 63488 00:16:10.536 } 00:16:10.536 ] 00:16:10.536 }' 00:16:10.536 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.536 21:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local name 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:10.794 21:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:11.054 [2024-07-15 21:52:26.124948] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:11.054 "name": "raid_bdev1", 00:16:11.054 "aliases": [ 00:16:11.054 "85a629ff-42f4-11ef-9f7f-e9a656123a8b" 00:16:11.054 ], 00:16:11.054 "product_name": "Raid Volume", 00:16:11.054 "block_size": 512, 00:16:11.054 "num_blocks": 63488, 00:16:11.054 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:11.054 "assigned_rate_limits": { 00:16:11.054 "rw_ios_per_sec": 0, 00:16:11.054 "rw_mbytes_per_sec": 0, 00:16:11.054 "r_mbytes_per_sec": 0, 00:16:11.054 "w_mbytes_per_sec": 0 00:16:11.054 }, 00:16:11.054 "claimed": false, 00:16:11.054 "zoned": false, 00:16:11.054 "supported_io_types": { 00:16:11.054 "read": true, 00:16:11.054 "write": true, 00:16:11.054 "unmap": false, 00:16:11.054 "flush": false, 00:16:11.054 "reset": true, 00:16:11.054 "nvme_admin": false, 00:16:11.054 "nvme_io": false, 00:16:11.054 "nvme_io_md": false, 00:16:11.054 "write_zeroes": true, 00:16:11.054 "zcopy": false, 00:16:11.054 "get_zone_info": false, 00:16:11.054 "zone_management": false, 00:16:11.054 "zone_append": false, 00:16:11.054 "compare": false, 00:16:11.054 "compare_and_write": false, 00:16:11.054 "abort": false, 00:16:11.054 "seek_hole": false, 00:16:11.054 "seek_data": false, 00:16:11.054 "copy": false, 00:16:11.054 "nvme_iov_md": false 00:16:11.054 }, 00:16:11.054 "memory_domains": [ 00:16:11.054 { 00:16:11.054 "dma_device_id": "system", 00:16:11.054 "dma_device_type": 1 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.054 "dma_device_type": 2 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "system", 00:16:11.054 "dma_device_type": 1 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.054 "dma_device_type": 2 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "system", 00:16:11.054 "dma_device_type": 1 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.054 "dma_device_type": 2 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "system", 00:16:11.054 "dma_device_type": 1 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.054 "dma_device_type": 2 00:16:11.054 } 00:16:11.054 ], 00:16:11.054 "driver_specific": { 00:16:11.054 "raid": { 00:16:11.054 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:11.054 "strip_size_kb": 0, 00:16:11.054 "state": "online", 00:16:11.054 "raid_level": "raid1", 00:16:11.054 "superblock": true, 00:16:11.054 "num_base_bdevs": 4, 00:16:11.054 "num_base_bdevs_discovered": 4, 00:16:11.054 "num_base_bdevs_operational": 4, 00:16:11.054 "base_bdevs_list": [ 00:16:11.054 { 00:16:11.054 "name": "pt1", 00:16:11.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.054 "is_configured": true, 00:16:11.054 "data_offset": 2048, 00:16:11.054 "data_size": 63488 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "name": "pt2", 00:16:11.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.054 "is_configured": true, 00:16:11.054 "data_offset": 2048, 00:16:11.054 "data_size": 63488 
00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "name": "pt3", 00:16:11.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.054 "is_configured": true, 00:16:11.054 "data_offset": 2048, 00:16:11.054 "data_size": 63488 00:16:11.054 }, 00:16:11.054 { 00:16:11.054 "name": "pt4", 00:16:11.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:11.054 "is_configured": true, 00:16:11.054 "data_offset": 2048, 00:16:11.054 "data_size": 63488 00:16:11.054 } 00:16:11.054 ] 00:16:11.054 } 00:16:11.054 } 00:16:11.054 }' 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:11.054 pt2 00:16:11.054 pt3 00:16:11.054 pt4' 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:11.054 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.313 "name": "pt1", 00:16:11.313 "aliases": [ 00:16:11.313 "00000000-0000-0000-0000-000000000001" 00:16:11.313 ], 00:16:11.313 "product_name": "passthru", 00:16:11.313 "block_size": 512, 00:16:11.313 "num_blocks": 65536, 00:16:11.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.313 "assigned_rate_limits": { 00:16:11.313 "rw_ios_per_sec": 0, 00:16:11.313 "rw_mbytes_per_sec": 0, 00:16:11.313 "r_mbytes_per_sec": 0, 00:16:11.313 "w_mbytes_per_sec": 0 00:16:11.313 }, 00:16:11.313 "claimed": true, 00:16:11.313 "claim_type": "exclusive_write", 00:16:11.313 "zoned": false, 00:16:11.313 "supported_io_types": { 00:16:11.313 "read": true, 00:16:11.313 "write": true, 00:16:11.313 "unmap": true, 00:16:11.313 "flush": true, 00:16:11.313 "reset": true, 00:16:11.313 "nvme_admin": false, 00:16:11.313 "nvme_io": false, 00:16:11.313 "nvme_io_md": false, 00:16:11.313 "write_zeroes": true, 00:16:11.313 "zcopy": true, 00:16:11.313 "get_zone_info": false, 00:16:11.313 "zone_management": false, 00:16:11.313 "zone_append": false, 00:16:11.313 "compare": false, 00:16:11.313 "compare_and_write": false, 00:16:11.313 "abort": true, 00:16:11.313 "seek_hole": false, 00:16:11.313 "seek_data": false, 00:16:11.313 "copy": true, 00:16:11.313 "nvme_iov_md": false 00:16:11.313 }, 00:16:11.313 "memory_domains": [ 00:16:11.313 { 00:16:11.313 "dma_device_id": "system", 00:16:11.313 "dma_device_type": 1 00:16:11.313 }, 00:16:11.313 { 00:16:11.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.313 "dma_device_type": 2 00:16:11.313 } 00:16:11.313 ], 00:16:11.313 "driver_specific": { 00:16:11.313 "passthru": { 00:16:11.313 "name": "pt1", 00:16:11.313 "base_bdev_name": "malloc1" 00:16:11.313 } 00:16:11.313 } 00:16:11.313 }' 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
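The pt1 property dump above is being run through the harness's standard four checks (bdev_raid.sh@205 through @208): block_size must be 512 and the metadata fields must be absent. A rough sketch of that pattern as traced, assuming herestring plumbing (the exact quoting inside the helper may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 | jq '.[]')
    [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]    # 512-byte data blocks
    [[ $(jq .md_size <<< "$base_bdev_info") == null ]]      # no separate metadata area
    [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
    [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]     # no DIF protection

The same four checks repeat verbatim for pt2, pt3 and pt4 in the loop that follows.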
00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:11.313 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.572 "name": "pt2", 00:16:11.572 "aliases": [ 00:16:11.572 "00000000-0000-0000-0000-000000000002" 00:16:11.572 ], 00:16:11.572 "product_name": "passthru", 00:16:11.572 "block_size": 512, 00:16:11.572 "num_blocks": 65536, 00:16:11.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.572 "assigned_rate_limits": { 00:16:11.572 "rw_ios_per_sec": 0, 00:16:11.572 "rw_mbytes_per_sec": 0, 00:16:11.572 "r_mbytes_per_sec": 0, 00:16:11.572 "w_mbytes_per_sec": 0 00:16:11.572 }, 00:16:11.572 "claimed": true, 00:16:11.572 "claim_type": "exclusive_write", 00:16:11.572 "zoned": false, 00:16:11.572 "supported_io_types": { 00:16:11.572 "read": true, 00:16:11.572 "write": true, 00:16:11.572 "unmap": true, 00:16:11.572 "flush": true, 00:16:11.572 "reset": true, 00:16:11.572 "nvme_admin": false, 00:16:11.572 "nvme_io": false, 00:16:11.572 "nvme_io_md": false, 00:16:11.572 "write_zeroes": true, 00:16:11.572 "zcopy": true, 00:16:11.572 "get_zone_info": false, 00:16:11.572 "zone_management": false, 00:16:11.572 "zone_append": false, 00:16:11.572 "compare": false, 00:16:11.572 "compare_and_write": false, 00:16:11.572 "abort": true, 00:16:11.572 "seek_hole": false, 00:16:11.572 "seek_data": false, 00:16:11.572 "copy": true, 00:16:11.572 "nvme_iov_md": false 00:16:11.572 }, 00:16:11.572 "memory_domains": [ 00:16:11.572 { 00:16:11.572 "dma_device_id": "system", 00:16:11.572 "dma_device_type": 1 00:16:11.572 }, 00:16:11.572 { 00:16:11.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.572 "dma_device_type": 2 00:16:11.572 } 00:16:11.572 ], 00:16:11.572 "driver_specific": { 00:16:11.572 "passthru": { 00:16:11.572 "name": "pt2", 00:16:11.572 "base_bdev_name": "malloc2" 00:16:11.572 } 00:16:11.572 } 00:16:11.572 }' 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:11.572 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.831 "name": "pt3", 00:16:11.831 "aliases": [ 00:16:11.831 "00000000-0000-0000-0000-000000000003" 00:16:11.831 ], 00:16:11.831 "product_name": "passthru", 00:16:11.831 "block_size": 512, 00:16:11.831 "num_blocks": 65536, 00:16:11.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.831 "assigned_rate_limits": { 00:16:11.831 "rw_ios_per_sec": 0, 00:16:11.831 "rw_mbytes_per_sec": 0, 00:16:11.831 "r_mbytes_per_sec": 0, 00:16:11.831 "w_mbytes_per_sec": 0 00:16:11.831 }, 00:16:11.831 "claimed": true, 00:16:11.831 "claim_type": "exclusive_write", 00:16:11.831 "zoned": false, 00:16:11.831 "supported_io_types": { 00:16:11.831 "read": true, 00:16:11.831 "write": true, 00:16:11.831 "unmap": true, 00:16:11.831 "flush": true, 00:16:11.831 "reset": true, 00:16:11.831 "nvme_admin": false, 00:16:11.831 "nvme_io": false, 00:16:11.831 "nvme_io_md": false, 00:16:11.831 "write_zeroes": true, 00:16:11.831 "zcopy": true, 00:16:11.831 "get_zone_info": false, 00:16:11.831 "zone_management": false, 00:16:11.831 "zone_append": false, 00:16:11.831 "compare": false, 00:16:11.831 "compare_and_write": false, 00:16:11.831 "abort": true, 00:16:11.831 "seek_hole": false, 00:16:11.831 "seek_data": false, 00:16:11.831 "copy": true, 00:16:11.831 "nvme_iov_md": false 00:16:11.831 }, 00:16:11.831 "memory_domains": [ 00:16:11.831 { 00:16:11.831 "dma_device_id": "system", 00:16:11.831 "dma_device_type": 1 00:16:11.831 }, 00:16:11.831 { 00:16:11.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.831 "dma_device_type": 2 00:16:11.831 } 00:16:11.831 ], 00:16:11.831 "driver_specific": { 00:16:11.831 "passthru": { 00:16:11.831 "name": "pt3", 00:16:11.831 "base_bdev_name": "malloc3" 00:16:11.831 } 00:16:11.831 } 00:16:11.831 }' 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
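Interleaved with these per-bdev checks, the harness also validates the array itself through verify_raid_bdev_state (bdev_raid.sh@116-128), seen earlier fetching bdev_raid_get_bdevs all and filtering with jq. A condensed sketch of what that helper asserts for raid_bdev1; the field names come from the JSON dumps above, but the full comparison logic lives in bdev_raid.sh and is not shown in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 4 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]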
00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:11.831 21:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.090 "name": "pt4", 00:16:12.090 "aliases": [ 00:16:12.090 "00000000-0000-0000-0000-000000000004" 00:16:12.090 ], 00:16:12.090 "product_name": "passthru", 00:16:12.090 "block_size": 512, 00:16:12.090 "num_blocks": 65536, 00:16:12.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:12.090 "assigned_rate_limits": { 00:16:12.090 "rw_ios_per_sec": 0, 00:16:12.090 "rw_mbytes_per_sec": 0, 00:16:12.090 "r_mbytes_per_sec": 0, 00:16:12.090 "w_mbytes_per_sec": 0 00:16:12.090 }, 00:16:12.090 "claimed": true, 00:16:12.090 "claim_type": "exclusive_write", 00:16:12.090 "zoned": false, 00:16:12.090 "supported_io_types": { 00:16:12.090 "read": true, 00:16:12.090 "write": true, 00:16:12.090 "unmap": true, 00:16:12.090 "flush": true, 00:16:12.090 "reset": true, 00:16:12.090 "nvme_admin": false, 00:16:12.090 "nvme_io": false, 00:16:12.090 "nvme_io_md": false, 00:16:12.090 "write_zeroes": true, 00:16:12.090 "zcopy": true, 00:16:12.090 "get_zone_info": false, 00:16:12.090 "zone_management": false, 00:16:12.090 "zone_append": false, 00:16:12.090 "compare": false, 00:16:12.090 "compare_and_write": false, 00:16:12.090 "abort": true, 00:16:12.090 "seek_hole": false, 00:16:12.090 "seek_data": false, 00:16:12.090 "copy": true, 00:16:12.090 "nvme_iov_md": false 00:16:12.090 }, 00:16:12.090 "memory_domains": [ 00:16:12.090 { 00:16:12.090 "dma_device_id": "system", 00:16:12.090 "dma_device_type": 1 00:16:12.090 }, 00:16:12.090 { 00:16:12.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.090 "dma_device_type": 2 00:16:12.090 } 00:16:12.090 ], 00:16:12.090 "driver_specific": { 00:16:12.090 "passthru": { 00:16:12.090 "name": "pt4", 00:16:12.090 "base_bdev_name": "malloc4" 00:16:12.090 } 00:16:12.090 } 00:16:12.090 }' 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:12.090 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:12.348 [2024-07-15 21:52:27.437302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.348 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=85a629ff-42f4-11ef-9f7f-e9a656123a8b 00:16:12.348 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 85a629ff-42f4-11ef-9f7f-e9a656123a8b ']' 00:16:12.348 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:12.606 [2024-07-15 21:52:27.721313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.606 [2024-07-15 21:52:27.721338] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.606 [2024-07-15 21:52:27.721362] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.606 [2024-07-15 21:52:27.721384] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.606 [2024-07-15 21:52:27.721389] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34466d835900 name raid_bdev1, state offline 00:16:12.606 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:12.607 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.864 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:12.864 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:12.864 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:12.864 21:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:13.123 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.123 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:13.381 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.381 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:13.640 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.640 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:13.922 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:13.922 21:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # local es=0 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:13.922 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:14.182 [2024-07-15 21:52:29.301560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.182 [2024-07-15 21:52:29.302430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.182 [2024-07-15 21:52:29.302462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:14.182 [2024-07-15 21:52:29.302471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:14.182 [2024-07-15 21:52:29.302486] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.182 [2024-07-15 21:52:29.302528] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.182 [2024-07-15 21:52:29.302538] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:14.182 [2024-07-15 21:52:29.302547] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:14.182 [2024-07-15 21:52:29.302555] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.182 [2024-07-15 21:52:29.302574] bdev_raid.c: 367:raid_bdev_cleanup: 
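The step at @456 is a negative test: the malloc bdevs still carry the superblock of the deleted array, so re-creating raid_bdev1 over them must be refused, and the NOT() helper from autotest_common.sh succeeds only when the wrapped command fails. The same expectation written out without NOT(), under the state shown above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    if "$rpc" -s "$sock" bdev_raid_create -r raid1 \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected success: stale superblocks should block creation" >&2
        exit 1
    fi  # rpc.py exits non-zero and prints the -17 "File exists" response shown below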
*DEBUG*: raid_bdev_cleanup, 0x34466d835680 name raid_bdev1, state configuring 00:16:14.182 request: 00:16:14.182 { 00:16:14.182 "name": "raid_bdev1", 00:16:14.182 "raid_level": "raid1", 00:16:14.182 "base_bdevs": [ 00:16:14.182 "malloc1", 00:16:14.182 "malloc2", 00:16:14.182 "malloc3", 00:16:14.182 "malloc4" 00:16:14.182 ], 00:16:14.182 "superblock": false, 00:16:14.182 "method": "bdev_raid_create", 00:16:14.182 "req_id": 1 00:16:14.182 } 00:16:14.182 Got JSON-RPC error response 00:16:14.182 response: 00:16:14.182 { 00:16:14.182 "code": -17, 00:16:14.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.182 } 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@645 -- # es=1 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:14.182 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.441 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:14.441 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:14.441 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.699 [2024-07-15 21:52:29.721570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.699 [2024-07-15 21:52:29.721619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.699 [2024-07-15 21:52:29.721631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835180 00:16:14.699 [2024-07-15 21:52:29.721639] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.699 [2024-07-15 21:52:29.722393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.699 [2024-07-15 21:52:29.722417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.699 [2024-07-15 21:52:29.722440] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:14.699 [2024-07-15 21:52:29.722451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.699 pt1 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.699 
21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.699 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.957 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.957 "name": "raid_bdev1", 00:16:14.957 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:14.957 "strip_size_kb": 0, 00:16:14.957 "state": "configuring", 00:16:14.957 "raid_level": "raid1", 00:16:14.957 "superblock": true, 00:16:14.957 "num_base_bdevs": 4, 00:16:14.957 "num_base_bdevs_discovered": 1, 00:16:14.957 "num_base_bdevs_operational": 4, 00:16:14.957 "base_bdevs_list": [ 00:16:14.957 { 00:16:14.957 "name": "pt1", 00:16:14.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.957 "is_configured": true, 00:16:14.957 "data_offset": 2048, 00:16:14.957 "data_size": 63488 00:16:14.957 }, 00:16:14.957 { 00:16:14.957 "name": null, 00:16:14.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.957 "is_configured": false, 00:16:14.957 "data_offset": 2048, 00:16:14.957 "data_size": 63488 00:16:14.957 }, 00:16:14.957 { 00:16:14.957 "name": null, 00:16:14.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.957 "is_configured": false, 00:16:14.957 "data_offset": 2048, 00:16:14.957 "data_size": 63488 00:16:14.957 }, 00:16:14.957 { 00:16:14.957 "name": null, 00:16:14.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:14.957 "is_configured": false, 00:16:14.957 "data_offset": 2048, 00:16:14.957 "data_size": 63488 00:16:14.957 } 00:16:14.957 ] 00:16:14.957 }' 00:16:14.957 21:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.957 21:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.216 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:15.216 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.476 [2024-07-15 21:52:30.461616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.476 [2024-07-15 21:52:30.461690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.476 [2024-07-15 21:52:30.461702] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834780 00:16:15.476 [2024-07-15 21:52:30.461758] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.476 [2024-07-15 21:52:30.461910] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.476 [2024-07-15 21:52:30.462036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.476 [2024-07-15 21:52:30.462066] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.476 [2024-07-15 21:52:30.462091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.476 pt2 00:16:15.476 21:52:30 
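verify_raid_bdev_state (@116-@128) boils down to filtering bdev_raid_get_bdevs for the named array and comparing fields against the expectations passed in. A standalone sketch of the check that produced the "configuring" dump above, with only pt1 re-registered so far:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq .num_base_bdevs_discovered <<< "$info") == 1 ]]
    [[ $(jq .num_base_bdevs_operational <<< "$info") == 4 ]]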
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:15.735 [2024-07-15 21:52:30.669612] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:15.735 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:15.735 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:15.735 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.735 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.736 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.995 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.995 "name": "raid_bdev1", 00:16:15.995 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:15.995 "strip_size_kb": 0, 00:16:15.995 "state": "configuring", 00:16:15.995 "raid_level": "raid1", 00:16:15.995 "superblock": true, 00:16:15.995 "num_base_bdevs": 4, 00:16:15.995 "num_base_bdevs_discovered": 1, 00:16:15.995 "num_base_bdevs_operational": 4, 00:16:15.995 "base_bdevs_list": [ 00:16:15.995 { 00:16:15.995 "name": "pt1", 00:16:15.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.995 "is_configured": true, 00:16:15.995 "data_offset": 2048, 00:16:15.995 "data_size": 63488 00:16:15.995 }, 00:16:15.995 { 00:16:15.995 "name": null, 00:16:15.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.995 "is_configured": false, 00:16:15.995 "data_offset": 2048, 00:16:15.995 "data_size": 63488 00:16:15.995 }, 00:16:15.995 { 00:16:15.995 "name": null, 00:16:15.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.995 "is_configured": false, 00:16:15.995 "data_offset": 2048, 00:16:15.995 "data_size": 63488 00:16:15.995 }, 00:16:15.995 { 00:16:15.995 "name": null, 00:16:15.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.995 "is_configured": false, 00:16:15.995 "data_offset": 2048, 00:16:15.995 "data_size": 63488 00:16:15.995 } 00:16:15.995 ] 00:16:15.995 }' 00:16:15.995 21:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.995 21:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- 
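Deleting pt2 at @472 while the array is still assembling exercises _raid_bdev_remove_base_bdev: the member detaches and the discovered count falls back to one, as the dump above records. A sketch of that probe, assuming the state shown:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_passthru_delete pt2
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'  # -> 1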
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.264 [2024-07-15 21:52:31.409643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.264 [2024-07-15 21:52:31.409705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.264 [2024-07-15 21:52:31.409718] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834780 00:16:16.264 [2024-07-15 21:52:31.409726] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.264 [2024-07-15 21:52:31.409890] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.264 [2024-07-15 21:52:31.409904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.264 [2024-07-15 21:52:31.409949] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:16.264 [2024-07-15 21:52:31.409959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.264 pt2 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:16.264 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.539 [2024-07-15 21:52:31.617654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.539 [2024-07-15 21:52:31.617704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.539 [2024-07-15 21:52:31.617716] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835b80 00:16:16.539 [2024-07-15 21:52:31.617724] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.540 [2024-07-15 21:52:31.617860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.540 [2024-07-15 21:52:31.617908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.540 [2024-07-15 21:52:31.617947] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:16.540 [2024-07-15 21:52:31.617956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.540 pt3 00:16:16.540 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:16.540 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:16.540 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.798 [2024-07-15 21:52:31.833634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.798 [2024-07-15 21:52:31.833673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.798 [2024-07-15 21:52:31.833703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835900 00:16:16.798 [2024-07-15 21:52:31.833711] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.798 [2024-07-15 21:52:31.833798] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.798 [2024-07-15 21:52:31.833810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.798 [2024-07-15 21:52:31.833830] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:16.798 [2024-07-15 21:52:31.833839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.798 [2024-07-15 21:52:31.833870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34466d834c80 00:16:16.798 [2024-07-15 21:52:31.833875] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.798 [2024-07-15 21:52:31.833912] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34466d897e20 00:16:16.798 [2024-07-15 21:52:31.834016] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34466d834c80 00:16:16.799 [2024-07-15 21:52:31.834020] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34466d834c80 00:16:16.799 [2024-07-15 21:52:31.834047] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.799 pt4 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.799 21:52:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.058 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.058 "name": "raid_bdev1", 00:16:17.058 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:17.058 "strip_size_kb": 0, 00:16:17.058 "state": "online", 00:16:17.058 "raid_level": "raid1", 00:16:17.058 "superblock": true, 00:16:17.058 "num_base_bdevs": 4, 00:16:17.058 "num_base_bdevs_discovered": 4, 00:16:17.058 "num_base_bdevs_operational": 4, 00:16:17.058 "base_bdevs_list": [ 00:16:17.058 { 00:16:17.058 "name": "pt1", 00:16:17.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.058 "is_configured": true, 00:16:17.058 "data_offset": 2048, 00:16:17.058 "data_size": 63488 00:16:17.058 
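The loop at @477-@478 re-creates the remaining members one by one; each create triggers the examine path ("raid superblock found on bdev ptN ... is claimed"), and once the last member is claimed the raid assembles and flips to online on its own, with no explicit bdev_raid_create. A sketch of the loop, assuming the same fixed UUIDs the trace pins:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    i=1
    for base in malloc2 malloc3 malloc4; do
        i=$((i + 1))
        "$rpc" -s "$sock" bdev_passthru_create -b "$base" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done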
}, 00:16:17.058 { 00:16:17.058 "name": "pt2", 00:16:17.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.058 "is_configured": true, 00:16:17.058 "data_offset": 2048, 00:16:17.058 "data_size": 63488 00:16:17.058 }, 00:16:17.058 { 00:16:17.058 "name": "pt3", 00:16:17.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.058 "is_configured": true, 00:16:17.058 "data_offset": 2048, 00:16:17.058 "data_size": 63488 00:16:17.058 }, 00:16:17.058 { 00:16:17.058 "name": "pt4", 00:16:17.058 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.058 "is_configured": true, 00:16:17.058 "data_offset": 2048, 00:16:17.058 "data_size": 63488 00:16:17.058 } 00:16:17.058 ] 00:16:17.058 }' 00:16:17.058 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.058 21:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.317 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:17.576 [2024-07-15 21:52:32.581699] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:17.576 "name": "raid_bdev1", 00:16:17.576 "aliases": [ 00:16:17.576 "85a629ff-42f4-11ef-9f7f-e9a656123a8b" 00:16:17.576 ], 00:16:17.576 "product_name": "Raid Volume", 00:16:17.576 "block_size": 512, 00:16:17.576 "num_blocks": 63488, 00:16:17.576 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:17.576 "assigned_rate_limits": { 00:16:17.576 "rw_ios_per_sec": 0, 00:16:17.576 "rw_mbytes_per_sec": 0, 00:16:17.576 "r_mbytes_per_sec": 0, 00:16:17.576 "w_mbytes_per_sec": 0 00:16:17.576 }, 00:16:17.576 "claimed": false, 00:16:17.576 "zoned": false, 00:16:17.576 "supported_io_types": { 00:16:17.576 "read": true, 00:16:17.576 "write": true, 00:16:17.576 "unmap": false, 00:16:17.576 "flush": false, 00:16:17.576 "reset": true, 00:16:17.576 "nvme_admin": false, 00:16:17.576 "nvme_io": false, 00:16:17.576 "nvme_io_md": false, 00:16:17.576 "write_zeroes": true, 00:16:17.576 "zcopy": false, 00:16:17.576 "get_zone_info": false, 00:16:17.576 "zone_management": false, 00:16:17.576 "zone_append": false, 00:16:17.576 "compare": false, 00:16:17.576 "compare_and_write": false, 00:16:17.576 "abort": false, 00:16:17.576 "seek_hole": false, 00:16:17.576 "seek_data": false, 00:16:17.576 "copy": false, 00:16:17.576 "nvme_iov_md": false 00:16:17.576 }, 00:16:17.576 "memory_domains": [ 00:16:17.576 { 00:16:17.576 "dma_device_id": "system", 00:16:17.576 "dma_device_type": 1 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.576 "dma_device_type": 2 00:16:17.576 }, 
00:16:17.576 { 00:16:17.576 "dma_device_id": "system", 00:16:17.576 "dma_device_type": 1 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.576 "dma_device_type": 2 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "system", 00:16:17.576 "dma_device_type": 1 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.576 "dma_device_type": 2 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "system", 00:16:17.576 "dma_device_type": 1 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.576 "dma_device_type": 2 00:16:17.576 } 00:16:17.576 ], 00:16:17.576 "driver_specific": { 00:16:17.576 "raid": { 00:16:17.576 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:17.576 "strip_size_kb": 0, 00:16:17.576 "state": "online", 00:16:17.576 "raid_level": "raid1", 00:16:17.576 "superblock": true, 00:16:17.576 "num_base_bdevs": 4, 00:16:17.576 "num_base_bdevs_discovered": 4, 00:16:17.576 "num_base_bdevs_operational": 4, 00:16:17.576 "base_bdevs_list": [ 00:16:17.576 { 00:16:17.576 "name": "pt1", 00:16:17.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.576 "is_configured": true, 00:16:17.576 "data_offset": 2048, 00:16:17.576 "data_size": 63488 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "name": "pt2", 00:16:17.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.576 "is_configured": true, 00:16:17.576 "data_offset": 2048, 00:16:17.576 "data_size": 63488 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "name": "pt3", 00:16:17.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.576 "is_configured": true, 00:16:17.576 "data_offset": 2048, 00:16:17.576 "data_size": 63488 00:16:17.576 }, 00:16:17.576 { 00:16:17.576 "name": "pt4", 00:16:17.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.576 "is_configured": true, 00:16:17.576 "data_offset": 2048, 00:16:17.576 "data_size": 63488 00:16:17.576 } 00:16:17.576 ] 00:16:17.576 } 00:16:17.576 } 00:16:17.576 }' 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:17.576 pt2 00:16:17.576 pt3 00:16:17.576 pt4' 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:17.576 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.835 "name": "pt1", 00:16:17.835 "aliases": [ 00:16:17.835 "00000000-0000-0000-0000-000000000001" 00:16:17.835 ], 00:16:17.835 "product_name": "passthru", 00:16:17.835 "block_size": 512, 00:16:17.835 "num_blocks": 65536, 00:16:17.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.835 "assigned_rate_limits": { 00:16:17.835 "rw_ios_per_sec": 0, 00:16:17.835 "rw_mbytes_per_sec": 0, 00:16:17.835 "r_mbytes_per_sec": 0, 00:16:17.835 "w_mbytes_per_sec": 0 00:16:17.835 }, 00:16:17.835 "claimed": true, 00:16:17.835 "claim_type": "exclusive_write", 00:16:17.835 "zoned": false, 00:16:17.835 "supported_io_types": { 00:16:17.835 "read": true, 00:16:17.835 "write": true, 00:16:17.835 
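The Raid Volume dump above differs from its passthru members mainly in the capability mask: raid1 forwards read, write, reset and write_zeroes but does not advertise unmap, flush, copy or abort, and it aggregates the memory domains of all four members. A quick way to pull just that difference out of bdev_get_bdevs, assuming the array is still online:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 \
        | jq '.[0].supported_io_types | {unmap, flush, copy, abort}'  # all false
    "$rpc" -s "$sock" bdev_get_bdevs -b pt1 \
        | jq '.[0].supported_io_types | {unmap, flush, copy, abort}'  # all true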
"unmap": true, 00:16:17.835 "flush": true, 00:16:17.835 "reset": true, 00:16:17.835 "nvme_admin": false, 00:16:17.835 "nvme_io": false, 00:16:17.835 "nvme_io_md": false, 00:16:17.835 "write_zeroes": true, 00:16:17.835 "zcopy": true, 00:16:17.835 "get_zone_info": false, 00:16:17.835 "zone_management": false, 00:16:17.835 "zone_append": false, 00:16:17.835 "compare": false, 00:16:17.835 "compare_and_write": false, 00:16:17.835 "abort": true, 00:16:17.835 "seek_hole": false, 00:16:17.835 "seek_data": false, 00:16:17.835 "copy": true, 00:16:17.835 "nvme_iov_md": false 00:16:17.835 }, 00:16:17.835 "memory_domains": [ 00:16:17.835 { 00:16:17.835 "dma_device_id": "system", 00:16:17.835 "dma_device_type": 1 00:16:17.835 }, 00:16:17.835 { 00:16:17.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.835 "dma_device_type": 2 00:16:17.835 } 00:16:17.835 ], 00:16:17.835 "driver_specific": { 00:16:17.835 "passthru": { 00:16:17.835 "name": "pt1", 00:16:17.835 "base_bdev_name": "malloc1" 00:16:17.835 } 00:16:17.835 } 00:16:17.835 }' 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.835 21:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:18.094 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.094 "name": "pt2", 00:16:18.094 "aliases": [ 00:16:18.094 "00000000-0000-0000-0000-000000000002" 00:16:18.094 ], 00:16:18.094 "product_name": "passthru", 00:16:18.094 "block_size": 512, 00:16:18.094 "num_blocks": 65536, 00:16:18.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.094 "assigned_rate_limits": { 00:16:18.094 "rw_ios_per_sec": 0, 00:16:18.094 "rw_mbytes_per_sec": 0, 00:16:18.094 "r_mbytes_per_sec": 0, 00:16:18.094 "w_mbytes_per_sec": 0 00:16:18.094 }, 00:16:18.094 "claimed": true, 00:16:18.094 "claim_type": "exclusive_write", 00:16:18.094 "zoned": false, 00:16:18.094 "supported_io_types": { 00:16:18.094 "read": true, 00:16:18.094 "write": true, 00:16:18.094 "unmap": true, 00:16:18.094 "flush": true, 00:16:18.094 "reset": true, 00:16:18.094 "nvme_admin": false, 00:16:18.094 "nvme_io": false, 00:16:18.094 
"nvme_io_md": false, 00:16:18.094 "write_zeroes": true, 00:16:18.094 "zcopy": true, 00:16:18.094 "get_zone_info": false, 00:16:18.094 "zone_management": false, 00:16:18.094 "zone_append": false, 00:16:18.094 "compare": false, 00:16:18.094 "compare_and_write": false, 00:16:18.094 "abort": true, 00:16:18.094 "seek_hole": false, 00:16:18.094 "seek_data": false, 00:16:18.094 "copy": true, 00:16:18.094 "nvme_iov_md": false 00:16:18.094 }, 00:16:18.094 "memory_domains": [ 00:16:18.094 { 00:16:18.094 "dma_device_id": "system", 00:16:18.094 "dma_device_type": 1 00:16:18.094 }, 00:16:18.094 { 00:16:18.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.094 "dma_device_type": 2 00:16:18.094 } 00:16:18.094 ], 00:16:18.094 "driver_specific": { 00:16:18.094 "passthru": { 00:16:18.094 "name": "pt2", 00:16:18.094 "base_bdev_name": "malloc2" 00:16:18.094 } 00:16:18.094 } 00:16:18.094 }' 00:16:18.094 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.094 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.094 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.095 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.353 "name": "pt3", 00:16:18.353 "aliases": [ 00:16:18.353 "00000000-0000-0000-0000-000000000003" 00:16:18.353 ], 00:16:18.353 "product_name": "passthru", 00:16:18.353 "block_size": 512, 00:16:18.353 "num_blocks": 65536, 00:16:18.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.353 "assigned_rate_limits": { 00:16:18.353 "rw_ios_per_sec": 0, 00:16:18.353 "rw_mbytes_per_sec": 0, 00:16:18.353 "r_mbytes_per_sec": 0, 00:16:18.353 "w_mbytes_per_sec": 0 00:16:18.353 }, 00:16:18.353 "claimed": true, 00:16:18.353 "claim_type": "exclusive_write", 00:16:18.353 "zoned": false, 00:16:18.353 "supported_io_types": { 00:16:18.353 "read": true, 00:16:18.353 "write": true, 00:16:18.353 "unmap": true, 00:16:18.353 "flush": true, 00:16:18.353 "reset": true, 00:16:18.353 "nvme_admin": false, 00:16:18.353 "nvme_io": false, 00:16:18.353 "nvme_io_md": false, 00:16:18.353 "write_zeroes": true, 00:16:18.353 "zcopy": true, 00:16:18.353 "get_zone_info": false, 00:16:18.353 "zone_management": 
false, 00:16:18.353 "zone_append": false, 00:16:18.353 "compare": false, 00:16:18.353 "compare_and_write": false, 00:16:18.353 "abort": true, 00:16:18.353 "seek_hole": false, 00:16:18.353 "seek_data": false, 00:16:18.353 "copy": true, 00:16:18.353 "nvme_iov_md": false 00:16:18.353 }, 00:16:18.353 "memory_domains": [ 00:16:18.353 { 00:16:18.353 "dma_device_id": "system", 00:16:18.353 "dma_device_type": 1 00:16:18.353 }, 00:16:18.353 { 00:16:18.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.353 "dma_device_type": 2 00:16:18.353 } 00:16:18.353 ], 00:16:18.353 "driver_specific": { 00:16:18.353 "passthru": { 00:16:18.353 "name": "pt3", 00:16:18.353 "base_bdev_name": "malloc3" 00:16:18.353 } 00:16:18.353 } 00:16:18.353 }' 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.353 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.354 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.354 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.612 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.612 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.612 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:18.612 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.612 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.612 "name": "pt4", 00:16:18.612 "aliases": [ 00:16:18.612 "00000000-0000-0000-0000-000000000004" 00:16:18.612 ], 00:16:18.612 "product_name": "passthru", 00:16:18.612 "block_size": 512, 00:16:18.612 "num_blocks": 65536, 00:16:18.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.613 "assigned_rate_limits": { 00:16:18.613 "rw_ios_per_sec": 0, 00:16:18.613 "rw_mbytes_per_sec": 0, 00:16:18.613 "r_mbytes_per_sec": 0, 00:16:18.613 "w_mbytes_per_sec": 0 00:16:18.613 }, 00:16:18.613 "claimed": true, 00:16:18.613 "claim_type": "exclusive_write", 00:16:18.613 "zoned": false, 00:16:18.613 "supported_io_types": { 00:16:18.613 "read": true, 00:16:18.613 "write": true, 00:16:18.613 "unmap": true, 00:16:18.613 "flush": true, 00:16:18.613 "reset": true, 00:16:18.613 "nvme_admin": false, 00:16:18.613 "nvme_io": false, 00:16:18.613 "nvme_io_md": false, 00:16:18.613 "write_zeroes": true, 00:16:18.613 "zcopy": true, 00:16:18.613 "get_zone_info": false, 00:16:18.613 "zone_management": false, 00:16:18.613 "zone_append": false, 00:16:18.613 "compare": false, 00:16:18.613 "compare_and_write": false, 00:16:18.613 "abort": true, 00:16:18.613 
"seek_hole": false, 00:16:18.613 "seek_data": false, 00:16:18.613 "copy": true, 00:16:18.613 "nvme_iov_md": false 00:16:18.613 }, 00:16:18.613 "memory_domains": [ 00:16:18.613 { 00:16:18.613 "dma_device_id": "system", 00:16:18.613 "dma_device_type": 1 00:16:18.613 }, 00:16:18.613 { 00:16:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.613 "dma_device_type": 2 00:16:18.613 } 00:16:18.613 ], 00:16:18.613 "driver_specific": { 00:16:18.613 "passthru": { 00:16:18.613 "name": "pt4", 00:16:18.613 "base_bdev_name": "malloc4" 00:16:18.613 } 00:16:18.613 } 00:16:18.613 }' 00:16:18.613 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.613 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.613 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.613 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.613 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:18.871 21:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:18.871 [2024-07-15 21:52:34.033790] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.871 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 85a629ff-42f4-11ef-9f7f-e9a656123a8b '!=' 85a629ff-42f4-11ef-9f7f-e9a656123a8b ']' 00:16:18.871 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:18.871 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:18.871 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:18.871 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:19.130 [2024-07-15 21:52:34.301741] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:19.130 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.389 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.647 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.647 "name": "raid_bdev1", 00:16:19.647 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:19.647 "strip_size_kb": 0, 00:16:19.647 "state": "online", 00:16:19.647 "raid_level": "raid1", 00:16:19.647 "superblock": true, 00:16:19.647 "num_base_bdevs": 4, 00:16:19.647 "num_base_bdevs_discovered": 3, 00:16:19.647 "num_base_bdevs_operational": 3, 00:16:19.647 "base_bdevs_list": [ 00:16:19.647 { 00:16:19.647 "name": null, 00:16:19.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.647 "is_configured": false, 00:16:19.647 "data_offset": 2048, 00:16:19.647 "data_size": 63488 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": "pt2", 00:16:19.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.647 "is_configured": true, 00:16:19.647 "data_offset": 2048, 00:16:19.647 "data_size": 63488 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": "pt3", 00:16:19.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.647 "is_configured": true, 00:16:19.647 "data_offset": 2048, 00:16:19.647 "data_size": 63488 00:16:19.647 }, 00:16:19.647 { 00:16:19.647 "name": "pt4", 00:16:19.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.648 "is_configured": true, 00:16:19.648 "data_offset": 2048, 00:16:19.648 "data_size": 63488 00:16:19.648 } 00:16:19.648 ] 00:16:19.648 }' 00:16:19.648 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.648 21:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.906 21:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.906 [2024-07-15 21:52:35.045838] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.906 [2024-07-15 21:52:35.045861] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.906 [2024-07-15 21:52:35.045899] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.906 [2024-07-15 21:52:35.045915] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.906 [2024-07-15 21:52:35.045919] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34466d834c80 name raid_bdev1, state offline 00:16:19.906 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:19.906 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.166 21:52:35 
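raid1 is a redundant level (has_redundancy at @213-@214 returns success for it), so deleting pt1 at @492 leaves the array online and degraded with three of four members, which the dump above confirms. A sketch of the degraded-state probe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_passthru_delete pt1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "raid_bdev1")
              | {state, num_base_bdevs_discovered, num_base_bdevs_operational}'
    # expected here: online / 3 / 3 -- pt1's slot in base_bdevs_list goes null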
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:20.166 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:20.166 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:20.166 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:20.166 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:20.425 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:20.425 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:20.425 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:20.684 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:20.684 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:20.684 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:20.942 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:20.943 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:20.943 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:20.943 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:20.943 21:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:21.201 [2024-07-15 21:52:36.209891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:21.201 [2024-07-15 21:52:36.209960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.201 [2024-07-15 21:52:36.209990] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835900 00:16:21.201 [2024-07-15 21:52:36.209998] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.201 [2024-07-15 21:52:36.210745] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.201 [2024-07-15 21:52:36.210783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:21.201 [2024-07-15 21:52:36.210842] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:21.201 [2024-07-15 21:52:36.210854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.201 pt2 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.201 21:52:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.201 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.460 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.460 "name": "raid_bdev1", 00:16:21.460 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:21.460 "strip_size_kb": 0, 00:16:21.460 "state": "configuring", 00:16:21.460 "raid_level": "raid1", 00:16:21.460 "superblock": true, 00:16:21.460 "num_base_bdevs": 4, 00:16:21.460 "num_base_bdevs_discovered": 1, 00:16:21.460 "num_base_bdevs_operational": 3, 00:16:21.460 "base_bdevs_list": [ 00:16:21.460 { 00:16:21.460 "name": null, 00:16:21.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.460 "is_configured": false, 00:16:21.460 "data_offset": 2048, 00:16:21.460 "data_size": 63488 00:16:21.460 }, 00:16:21.460 { 00:16:21.460 "name": "pt2", 00:16:21.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.460 "is_configured": true, 00:16:21.460 "data_offset": 2048, 00:16:21.460 "data_size": 63488 00:16:21.460 }, 00:16:21.460 { 00:16:21.460 "name": null, 00:16:21.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.460 "is_configured": false, 00:16:21.460 "data_offset": 2048, 00:16:21.460 "data_size": 63488 00:16:21.460 }, 00:16:21.460 { 00:16:21.460 "name": null, 00:16:21.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.460 "is_configured": false, 00:16:21.460 "data_offset": 2048, 00:16:21.460 "data_size": 63488 00:16:21.460 } 00:16:21.460 ] 00:16:21.460 }' 00:16:21.460 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.460 21:52:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.720 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:21.720 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:21.720 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:21.720 [2024-07-15 21:52:36.901934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:21.720 [2024-07-15 21:52:36.902004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.720 [2024-07-15 21:52:36.902033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835680 00:16:21.720 [2024-07-15 21:52:36.902040] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.720 [2024-07-15 21:52:36.902196] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.720 [2024-07-15 21:52:36.902227] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:16:21.720 [2024-07-15 21:52:36.902257] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:21.720 [2024-07-15 21:52:36.902266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.720 pt3 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.979 21:52:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.238 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.238 "name": "raid_bdev1", 00:16:22.238 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:22.238 "strip_size_kb": 0, 00:16:22.238 "state": "configuring", 00:16:22.238 "raid_level": "raid1", 00:16:22.238 "superblock": true, 00:16:22.238 "num_base_bdevs": 4, 00:16:22.238 "num_base_bdevs_discovered": 2, 00:16:22.238 "num_base_bdevs_operational": 3, 00:16:22.238 "base_bdevs_list": [ 00:16:22.238 { 00:16:22.238 "name": null, 00:16:22.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.238 "is_configured": false, 00:16:22.238 "data_offset": 2048, 00:16:22.238 "data_size": 63488 00:16:22.238 }, 00:16:22.238 { 00:16:22.238 "name": "pt2", 00:16:22.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.238 "is_configured": true, 00:16:22.238 "data_offset": 2048, 00:16:22.238 "data_size": 63488 00:16:22.238 }, 00:16:22.238 { 00:16:22.238 "name": "pt3", 00:16:22.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.238 "is_configured": true, 00:16:22.238 "data_offset": 2048, 00:16:22.238 "data_size": 63488 00:16:22.238 }, 00:16:22.238 { 00:16:22.238 "name": null, 00:16:22.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.238 "is_configured": false, 00:16:22.238 "data_offset": 2048, 00:16:22.238 "data_size": 63488 00:16:22.238 } 00:16:22.238 ] 00:16:22.238 }' 00:16:22.238 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.238 21:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:22.497 21:52:37 
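With pt2 and pt3 back, the dump above shows num_base_bdevs 4 but num_base_bdevs_operational 3; the numbers suggest the superblock was last written while the array ran degraded, so reassembly only waits for the three members that were operational then and leaves pt1's slot null. A sketch of the field check:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "raid_bdev1")
              | {num_base_bdevs, num_base_bdevs_discovered, num_base_bdevs_operational}'
    # expected while only pt2+pt3 exist: 4 / 2 / 3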
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:22.497 [2024-07-15 21:52:37.661989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:22.497 [2024-07-15 21:52:37.662069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.497 [2024-07-15 21:52:37.662080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834c80 00:16:22.497 [2024-07-15 21:52:37.662087] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.497 [2024-07-15 21:52:37.662211] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.497 [2024-07-15 21:52:37.662228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:22.497 [2024-07-15 21:52:37.662266] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:22.497 [2024-07-15 21:52:37.662274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:22.497 [2024-07-15 21:52:37.662314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34466d834780 00:16:22.497 [2024-07-15 21:52:37.662334] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.497 [2024-07-15 21:52:37.662371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34466d897e20 00:16:22.497 [2024-07-15 21:52:37.662417] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34466d834780 00:16:22.497 [2024-07-15 21:52:37.662421] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34466d834780 00:16:22.497 [2024-07-15 21:52:37.662442] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.497 pt4 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.497 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.756 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:16:22.756 "name": "raid_bdev1", 00:16:22.756 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:22.756 "strip_size_kb": 0, 00:16:22.756 "state": "online", 00:16:22.756 "raid_level": "raid1", 00:16:22.756 "superblock": true, 00:16:22.756 "num_base_bdevs": 4, 00:16:22.756 "num_base_bdevs_discovered": 3, 00:16:22.756 "num_base_bdevs_operational": 3, 00:16:22.756 "base_bdevs_list": [ 00:16:22.756 { 00:16:22.756 "name": null, 00:16:22.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.756 "is_configured": false, 00:16:22.756 "data_offset": 2048, 00:16:22.756 "data_size": 63488 00:16:22.756 }, 00:16:22.756 { 00:16:22.756 "name": "pt2", 00:16:22.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.756 "is_configured": true, 00:16:22.756 "data_offset": 2048, 00:16:22.756 "data_size": 63488 00:16:22.756 }, 00:16:22.757 { 00:16:22.757 "name": "pt3", 00:16:22.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.757 "is_configured": true, 00:16:22.757 "data_offset": 2048, 00:16:22.757 "data_size": 63488 00:16:22.757 }, 00:16:22.757 { 00:16:22.757 "name": "pt4", 00:16:22.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.757 "is_configured": true, 00:16:22.757 "data_offset": 2048, 00:16:22.757 "data_size": 63488 00:16:22.757 } 00:16:22.757 ] 00:16:22.757 }' 00:16:22.757 21:52:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.757 21:52:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.325 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:23.325 [2024-07-15 21:52:38.442004] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.325 [2024-07-15 21:52:38.442024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.325 [2024-07-15 21:52:38.442062] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.325 [2024-07-15 21:52:38.442087] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.325 [2024-07-15 21:52:38.442091] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34466d834780 name raid_bdev1, state offline 00:16:23.325 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:23.325 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.584 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:23.584 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:23.584 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:16:23.584 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:16:23.584 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:23.843 21:52:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.102 [2024-07-15 21:52:39.078032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
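The verify_raid_bdev_state checks that recur throughout this run all follow the same query-then-assert pattern: fetch the raid bdev's JSON over the RPC socket, pull out the fields of interest, and compare them against the expected values passed in. A minimal bash sketch of that pattern, built only from the rpc.py and jq invocations visible in the trace (the variable names here are illustrative, not the test script's own):

    # Sketch: check that raid_bdev1 is online with 3 discovered base bdevs.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$tmp")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")
    [[ $state == online && $discovered -eq 3 ]] || exit 1
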
00:16:24.102 [2024-07-15 21:52:39.078099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.102 [2024-07-15 21:52:39.078109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d834c80 00:16:24.103 [2024-07-15 21:52:39.078116] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.103 [2024-07-15 21:52:39.078949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.103 [2024-07-15 21:52:39.078988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.103 [2024-07-15 21:52:39.079013] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:24.103 [2024-07-15 21:52:39.079024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.103 [2024-07-15 21:52:39.079053] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:24.103 [2024-07-15 21:52:39.079058] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.103 [2024-07-15 21:52:39.079063] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34466d834780 name raid_bdev1, state configuring 00:16:24.103 [2024-07-15 21:52:39.079071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.103 [2024-07-15 21:52:39.079089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.103 pt1 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.103 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.362 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.362 "name": "raid_bdev1", 00:16:24.362 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:24.362 "strip_size_kb": 0, 00:16:24.362 "state": "configuring", 00:16:24.362 "raid_level": "raid1", 00:16:24.362 "superblock": true, 00:16:24.362 "num_base_bdevs": 4, 00:16:24.362 "num_base_bdevs_discovered": 2, 00:16:24.362 "num_base_bdevs_operational": 3, 00:16:24.362 
"base_bdevs_list": [ 00:16:24.362 { 00:16:24.362 "name": null, 00:16:24.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.362 "is_configured": false, 00:16:24.362 "data_offset": 2048, 00:16:24.362 "data_size": 63488 00:16:24.363 }, 00:16:24.363 { 00:16:24.363 "name": "pt2", 00:16:24.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.363 "is_configured": true, 00:16:24.363 "data_offset": 2048, 00:16:24.363 "data_size": 63488 00:16:24.363 }, 00:16:24.363 { 00:16:24.363 "name": "pt3", 00:16:24.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.363 "is_configured": true, 00:16:24.363 "data_offset": 2048, 00:16:24.363 "data_size": 63488 00:16:24.363 }, 00:16:24.363 { 00:16:24.363 "name": null, 00:16:24.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.363 "is_configured": false, 00:16:24.363 "data_offset": 2048, 00:16:24.363 "data_size": 63488 00:16:24.363 } 00:16:24.363 ] 00:16:24.363 }' 00:16:24.363 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.363 21:52:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.631 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:24.631 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:24.939 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:24.940 21:52:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:24.940 [2024-07-15 21:52:40.042046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:24.940 [2024-07-15 21:52:40.042107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.940 [2024-07-15 21:52:40.042124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34466d835180 00:16:24.940 [2024-07-15 21:52:40.042133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.940 [2024-07-15 21:52:40.042252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.940 [2024-07-15 21:52:40.042268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:24.940 [2024-07-15 21:52:40.042291] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:24.940 [2024-07-15 21:52:40.042300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:24.940 [2024-07-15 21:52:40.042343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34466d834780 00:16:24.940 [2024-07-15 21:52:40.042347] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.940 [2024-07-15 21:52:40.042384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34466d897e20 00:16:24.940 [2024-07-15 21:52:40.042430] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34466d834780 00:16:24.940 [2024-07-15 21:52:40.042434] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34466d834780 00:16:24.940 [2024-07-15 21:52:40.042455] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.940 pt4 
00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.940 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.198 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.198 "name": "raid_bdev1", 00:16:25.198 "uuid": "85a629ff-42f4-11ef-9f7f-e9a656123a8b", 00:16:25.198 "strip_size_kb": 0, 00:16:25.198 "state": "online", 00:16:25.198 "raid_level": "raid1", 00:16:25.198 "superblock": true, 00:16:25.198 "num_base_bdevs": 4, 00:16:25.198 "num_base_bdevs_discovered": 3, 00:16:25.198 "num_base_bdevs_operational": 3, 00:16:25.198 "base_bdevs_list": [ 00:16:25.198 { 00:16:25.198 "name": null, 00:16:25.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.198 "is_configured": false, 00:16:25.198 "data_offset": 2048, 00:16:25.198 "data_size": 63488 00:16:25.198 }, 00:16:25.198 { 00:16:25.198 "name": "pt2", 00:16:25.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.198 "is_configured": true, 00:16:25.198 "data_offset": 2048, 00:16:25.198 "data_size": 63488 00:16:25.198 }, 00:16:25.198 { 00:16:25.198 "name": "pt3", 00:16:25.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.198 "is_configured": true, 00:16:25.198 "data_offset": 2048, 00:16:25.198 "data_size": 63488 00:16:25.198 }, 00:16:25.198 { 00:16:25.198 "name": "pt4", 00:16:25.198 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.198 "is_configured": true, 00:16:25.198 "data_offset": 2048, 00:16:25.198 "data_size": 63488 00:16:25.198 } 00:16:25.198 ] 00:16:25.198 }' 00:16:25.198 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.198 21:52:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.456 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:25.456 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:25.714 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:25.714 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.714 21:52:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:25.971 [2024-07-15 21:52:41.046115] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 85a629ff-42f4-11ef-9f7f-e9a656123a8b '!=' 85a629ff-42f4-11ef-9f7f-e9a656123a8b ']' 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64564 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@942 -- # '[' -z 64564 ']' 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # kill -0 64564 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # uname 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # tail -1 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # ps -c -o command 64564 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:16:25.971 killing process with pid 64564 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 64564' 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # kill 64564 00:16:25.971 [2024-07-15 21:52:41.074765] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.971 [2024-07-15 21:52:41.074786] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.971 [2024-07-15 21:52:41.074804] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.971 [2024-07-15 21:52:41.074808] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34466d834780 name raid_bdev1, state offline 00:16:25.971 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # wait 64564 00:16:25.971 [2024-07-15 21:52:41.098908] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.229 21:52:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:26.229 00:16:26.229 real 0m18.897s 00:16:26.229 user 0m33.909s 00:16:26.229 sys 0m3.072s 00:16:26.229 ************************************ 00:16:26.229 END TEST raid_superblock_test 00:16:26.229 ************************************ 00:16:26.229 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:26.229 21:52:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.229 21:52:41 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:16:26.229 21:52:41 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:26.229 21:52:41 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:16:26.229 21:52:41 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:26.229 21:52:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.229 ************************************ 00:16:26.229 START TEST raid_read_error_test 00:16:26.229 
************************************ 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 4 read 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.z8Loi3bKt7 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65192 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65192 /var/tmp/spdk-raid.sock 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@823 -- # '[' -z 65192 ']' 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:26.229 21:52:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.229 [2024-07-15 21:52:41.325419] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:26.229 [2024-07-15 21:52:41.325695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:26.795 EAL: TSC is not safe to use in SMP mode 00:16:26.795 EAL: TSC is not invariant 00:16:26.795 [2024-07-15 21:52:41.868167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.795 [2024-07-15 21:52:41.943895] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:26.795 [2024-07-15 21:52:41.946351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.795 [2024-07-15 21:52:41.947309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.795 [2024-07-15 21:52:41.947322] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.360 21:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:27.360 21:52:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # return 0 00:16:27.360 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:27.360 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:27.360 BaseBdev1_malloc 00:16:27.619 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:27.619 true 00:16:27.619 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:27.876 [2024-07-15 21:52:42.969988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:27.876 [2024-07-15 21:52:42.970056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.876 [2024-07-15 21:52:42.970107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ce8e634780 00:16:27.876 [2024-07-15 21:52:42.970115] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.876 [2024-07-15 21:52:42.970844] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.876 [2024-07-15 21:52:42.970883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.876 BaseBdev1 
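Each base bdev in the read-error test is a three-layer stack: a malloc bdev (32 MB with 512-byte blocks, per the arguments), an error-injection bdev wrapped around it (registered as EE_<name>), and a passthru bdev on top that the raid consumes. The BaseBdev1 sequence, reconstructed from the three RPC calls in the trace:

    # Build the BaseBdev1 stack used by raid_io_error_test:
    # malloc -> error (EE_BaseBdev1_malloc) -> passthru (BaseBdev1).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_error_create BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
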
00:16:27.876 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:27.876 21:52:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:28.134 BaseBdev2_malloc 00:16:28.134 21:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:28.393 true 00:16:28.393 21:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:28.652 [2024-07-15 21:52:43.642000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:28.652 [2024-07-15 21:52:43.642063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.652 [2024-07-15 21:52:43.642109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ce8e634c80 00:16:28.652 [2024-07-15 21:52:43.642117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.652 [2024-07-15 21:52:43.642835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.652 [2024-07-15 21:52:43.642861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:28.652 BaseBdev2 00:16:28.652 21:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:28.652 21:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:28.910 BaseBdev3_malloc 00:16:28.910 21:52:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:29.168 true 00:16:29.168 21:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:29.426 [2024-07-15 21:52:44.402018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:29.426 [2024-07-15 21:52:44.402085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.426 [2024-07-15 21:52:44.402124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ce8e635180 00:16:29.426 [2024-07-15 21:52:44.402132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.426 [2024-07-15 21:52:44.402858] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.426 [2024-07-15 21:52:44.402897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:29.426 BaseBdev3 00:16:29.426 21:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:29.427 21:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:29.684 BaseBdev4_malloc 00:16:29.684 21:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:16:29.684 true 00:16:29.684 21:52:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:29.943 [2024-07-15 21:52:45.046070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:29.943 [2024-07-15 21:52:45.046147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.943 [2024-07-15 21:52:45.046185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3ce8e635680 00:16:29.943 [2024-07-15 21:52:45.046209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.943 [2024-07-15 21:52:45.046832] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.943 [2024-07-15 21:52:45.046857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:29.943 BaseBdev4 00:16:29.943 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:30.202 [2024-07-15 21:52:45.250086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.202 [2024-07-15 21:52:45.250725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.202 [2024-07-15 21:52:45.250749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.202 [2024-07-15 21:52:45.250763] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.202 [2024-07-15 21:52:45.250824] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3ce8e635900 00:16:30.202 [2024-07-15 21:52:45.250831] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.202 [2024-07-15 21:52:45.250879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ce8e6a0e20 00:16:30.202 [2024-07-15 21:52:45.250990] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3ce8e635900 00:16:30.203 [2024-07-15 21:52:45.250995] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3ce8e635900 00:16:30.203 [2024-07-15 21:52:45.251020] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
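With all four stacks registered, the array itself is created in a single RPC; -s requests an on-disk superblock, the same mechanism the earlier superblock test relied on for reassembly. As issued in this run:

    # Create the 4-member raid1 array with a superblock (-s).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' \
        -n raid_bdev1 -s
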
00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.203 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.462 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.462 "name": "raid_bdev1", 00:16:30.462 "uuid": "9182dd44-42f4-11ef-9f7f-e9a656123a8b", 00:16:30.462 "strip_size_kb": 0, 00:16:30.462 "state": "online", 00:16:30.462 "raid_level": "raid1", 00:16:30.462 "superblock": true, 00:16:30.462 "num_base_bdevs": 4, 00:16:30.462 "num_base_bdevs_discovered": 4, 00:16:30.462 "num_base_bdevs_operational": 4, 00:16:30.462 "base_bdevs_list": [ 00:16:30.462 { 00:16:30.462 "name": "BaseBdev1", 00:16:30.462 "uuid": "8ae6c613-8d86-7159-8219-c30895c9eeeb", 00:16:30.462 "is_configured": true, 00:16:30.462 "data_offset": 2048, 00:16:30.462 "data_size": 63488 00:16:30.462 }, 00:16:30.462 { 00:16:30.462 "name": "BaseBdev2", 00:16:30.462 "uuid": "0399fc29-ec43-8c56-8864-e8ea8c834674", 00:16:30.462 "is_configured": true, 00:16:30.462 "data_offset": 2048, 00:16:30.462 "data_size": 63488 00:16:30.462 }, 00:16:30.462 { 00:16:30.462 "name": "BaseBdev3", 00:16:30.462 "uuid": "e4d51f44-9d2b-7a55-a5ce-a4c298b6a396", 00:16:30.462 "is_configured": true, 00:16:30.462 "data_offset": 2048, 00:16:30.462 "data_size": 63488 00:16:30.462 }, 00:16:30.462 { 00:16:30.462 "name": "BaseBdev4", 00:16:30.462 "uuid": "d1b0cd83-cf0b-d25a-b3ac-2da1f66125cc", 00:16:30.462 "is_configured": true, 00:16:30.462 "data_offset": 2048, 00:16:30.462 "data_size": 63488 00:16:30.462 } 00:16:30.462 ] 00:16:30.462 }' 00:16:30.462 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.462 21:52:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.721 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:30.721 21:52:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:30.721 [2024-07-15 21:52:45.874302] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3ce8e6a0ec0 00:16:32.098 21:52:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.098 
21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.098 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.357 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.357 "name": "raid_bdev1", 00:16:32.357 "uuid": "9182dd44-42f4-11ef-9f7f-e9a656123a8b", 00:16:32.357 "strip_size_kb": 0, 00:16:32.357 "state": "online", 00:16:32.357 "raid_level": "raid1", 00:16:32.357 "superblock": true, 00:16:32.357 "num_base_bdevs": 4, 00:16:32.357 "num_base_bdevs_discovered": 4, 00:16:32.357 "num_base_bdevs_operational": 4, 00:16:32.357 "base_bdevs_list": [ 00:16:32.357 { 00:16:32.357 "name": "BaseBdev1", 00:16:32.357 "uuid": "8ae6c613-8d86-7159-8219-c30895c9eeeb", 00:16:32.357 "is_configured": true, 00:16:32.357 "data_offset": 2048, 00:16:32.357 "data_size": 63488 00:16:32.357 }, 00:16:32.357 { 00:16:32.357 "name": "BaseBdev2", 00:16:32.357 "uuid": "0399fc29-ec43-8c56-8864-e8ea8c834674", 00:16:32.357 "is_configured": true, 00:16:32.357 "data_offset": 2048, 00:16:32.357 "data_size": 63488 00:16:32.357 }, 00:16:32.357 { 00:16:32.357 "name": "BaseBdev3", 00:16:32.357 "uuid": "e4d51f44-9d2b-7a55-a5ce-a4c298b6a396", 00:16:32.357 "is_configured": true, 00:16:32.357 "data_offset": 2048, 00:16:32.357 "data_size": 63488 00:16:32.357 }, 00:16:32.357 { 00:16:32.357 "name": "BaseBdev4", 00:16:32.357 "uuid": "d1b0cd83-cf0b-d25a-b3ac-2da1f66125cc", 00:16:32.357 "is_configured": true, 00:16:32.357 "data_offset": 2048, 00:16:32.357 "data_size": 63488 00:16:32.357 } 00:16:32.357 ] 00:16:32.357 }' 00:16:32.357 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.357 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.616 21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:32.876 [2024-07-15 21:52:47.874997] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.876 [2024-07-15 21:52:47.875022] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.876 [2024-07-15 21:52:47.875353] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.876 [2024-07-15 21:52:47.875362] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.876 [2024-07-15 21:52:47.875379] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.876 [2024-07-15 21:52:47.875390] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3ce8e635900 name raid_bdev1, state offline 00:16:32.876 0 00:16:32.876 
21:52:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65192 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@942 -- # '[' -z 65192 ']' 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # kill -0 65192 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # uname 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 65192 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # tail -1 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:16:32.876 killing process with pid 65192 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 65192' 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # kill 65192 00:16:32.876 [2024-07-15 21:52:47.902486] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.876 21:52:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # wait 65192 00:16:32.876 [2024-07-15 21:52:47.925875] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.z8Loi3bKt7 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:33.135 00:16:33.135 real 0m6.785s 00:16:33.135 user 0m10.755s 00:16:33.135 sys 0m1.089s 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:33.135 21:52:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.135 ************************************ 00:16:33.135 END TEST raid_read_error_test 00:16:33.135 ************************************ 00:16:33.135 21:52:48 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:16:33.135 21:52:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:33.135 21:52:48 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:16:33.135 21:52:48 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:33.135 21:52:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.135 ************************************ 00:16:33.135 START TEST raid_write_error_test 00:16:33.135 ************************************ 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1117 -- # raid_io_error_test raid1 4 write 00:16:33.135 21:52:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:33.135 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dlzam084Kg 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65326 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65326 /var/tmp/spdk-raid.sock 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@823 -- # '[' -z 65326 ']' 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.136 21:52:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:33.136 [2024-07-15 21:52:48.162262] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:33.136 [2024-07-15 21:52:48.162433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:33.704 EAL: TSC is not safe to use in SMP mode 00:16:33.704 EAL: TSC is not invariant 00:16:33.704 [2024-07-15 21:52:48.678251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.704 [2024-07-15 21:52:48.756704] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:33.704 [2024-07-15 21:52:48.759027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.704 [2024-07-15 21:52:48.759904] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.704 [2024-07-15 21:52:48.759926] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.270 21:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:34.270 21:52:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # return 0 00:16:34.270 21:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:34.270 21:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:34.270 BaseBdev1_malloc 00:16:34.270 21:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:34.529 true 00:16:34.529 21:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:34.787 [2024-07-15 21:52:49.907658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:34.787 [2024-07-15 21:52:49.907730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.787 [2024-07-15 21:52:49.907769] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1439b834780 00:16:34.787 [2024-07-15 21:52:49.907778] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.787 [2024-07-15 21:52:49.908370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.787 [2024-07-15 21:52:49.908395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:34.787 BaseBdev1 00:16:34.787 21:52:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:34.787 21:52:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.045 BaseBdev2_malloc 00:16:35.045 21:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:35.304 true 00:16:35.304 21:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:35.562 [2024-07-15 21:52:50.591674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:35.562 [2024-07-15 21:52:50.591739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.562 [2024-07-15 21:52:50.591779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1439b834c80 00:16:35.562 [2024-07-15 21:52:50.591787] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.562 [2024-07-15 21:52:50.592512] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.562 [2024-07-15 21:52:50.592537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.562 BaseBdev2 00:16:35.562 21:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:35.562 21:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:35.820 BaseBdev3_malloc 00:16:35.820 21:52:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:36.078 true 00:16:36.078 21:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:36.078 [2024-07-15 21:52:51.231707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:36.078 [2024-07-15 21:52:51.231769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.078 [2024-07-15 21:52:51.231808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1439b835180 00:16:36.078 [2024-07-15 21:52:51.231816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.078 [2024-07-15 21:52:51.232455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.078 [2024-07-15 21:52:51.232479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:36.078 BaseBdev3 00:16:36.078 21:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:36.078 21:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:36.335 BaseBdev4_malloc 00:16:36.335 21:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:36.594 true 00:16:36.594 21:52:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:36.853 [2024-07-15 21:52:51.911737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:36.853 [2024-07-15 21:52:51.911793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.853 [2024-07-15 21:52:51.911834] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1439b835680 00:16:36.853 [2024-07-15 21:52:51.911842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.853 [2024-07-15 21:52:51.912370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.853 [2024-07-15 21:52:51.912403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:36.853 BaseBdev4 00:16:36.853 21:52:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:37.112 [2024-07-15 21:52:52.119768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.112 [2024-07-15 21:52:52.120364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.112 [2024-07-15 21:52:52.120403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.112 [2024-07-15 21:52:52.120425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.112 [2024-07-15 21:52:52.120497] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1439b835900 00:16:37.112 [2024-07-15 21:52:52.120502] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:37.112 [2024-07-15 21:52:52.120538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1439b8a0e20 00:16:37.112 [2024-07-15 21:52:52.120642] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1439b835900 00:16:37.112 [2024-07-15 21:52:52.120646] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1439b835900 00:16:37.112 [2024-07-15 21:52:52.120670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
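The I/O load itself comes from bdevperf, started in deferred mode (-z) so it sits idle on the RPC socket while the raid stack is configured, then kicked off via the companion script. Both ends of that pattern, with the flags used for this run:

    # Start bdevperf idle on the raid test socket: 60 s randrw, 50% reads,
    # 128k I/Os, queue depth 1, deferred start (-z).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &

    # Once configuration is done, trigger the actual I/O:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests
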
00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.112 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.371 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.371 "name": "raid_bdev1", 00:16:37.371 "uuid": "959b1798-42f4-11ef-9f7f-e9a656123a8b", 00:16:37.371 "strip_size_kb": 0, 00:16:37.371 "state": "online", 00:16:37.371 "raid_level": "raid1", 00:16:37.371 "superblock": true, 00:16:37.371 "num_base_bdevs": 4, 00:16:37.371 "num_base_bdevs_discovered": 4, 00:16:37.371 "num_base_bdevs_operational": 4, 00:16:37.371 "base_bdevs_list": [ 00:16:37.371 { 00:16:37.371 "name": "BaseBdev1", 00:16:37.371 "uuid": "dd121f47-38a0-2053-81cb-78cb2b3d245f", 00:16:37.371 "is_configured": true, 00:16:37.371 "data_offset": 2048, 00:16:37.371 "data_size": 63488 00:16:37.371 }, 00:16:37.371 { 00:16:37.371 "name": "BaseBdev2", 00:16:37.371 "uuid": "cbf2db59-02c8-525e-9e73-cd7816670de9", 00:16:37.371 "is_configured": true, 00:16:37.371 "data_offset": 2048, 00:16:37.371 "data_size": 63488 00:16:37.371 }, 00:16:37.371 { 00:16:37.371 "name": "BaseBdev3", 00:16:37.371 "uuid": "e00c5aca-1256-8957-a24a-1cfa7d953812", 00:16:37.371 "is_configured": true, 00:16:37.371 "data_offset": 2048, 00:16:37.371 "data_size": 63488 00:16:37.371 }, 00:16:37.371 { 00:16:37.371 "name": "BaseBdev4", 00:16:37.371 "uuid": "4bd428ed-c828-6356-8aff-894db248b734", 00:16:37.371 "is_configured": true, 00:16:37.371 "data_offset": 2048, 00:16:37.371 "data_size": 63488 00:16:37.371 } 00:16:37.371 ] 00:16:37.371 }' 00:16:37.371 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.371 21:52:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.630 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:37.630 21:52:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:37.630 [2024-07-15 21:52:52.795955] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1439b8a0ec0 00:16:39.006 21:52:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:39.006 [2024-07-15 21:52:54.022999] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:39.006 [2024-07-15 21:52:54.023078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.006 [2024-07-15 21:52:54.023222] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1439b8a0ec0 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.006 
21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.006 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.265 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.265 "name": "raid_bdev1", 00:16:39.265 "uuid": "959b1798-42f4-11ef-9f7f-e9a656123a8b", 00:16:39.265 "strip_size_kb": 0, 00:16:39.265 "state": "online", 00:16:39.265 "raid_level": "raid1", 00:16:39.265 "superblock": true, 00:16:39.265 "num_base_bdevs": 4, 00:16:39.265 "num_base_bdevs_discovered": 3, 00:16:39.265 "num_base_bdevs_operational": 3, 00:16:39.265 "base_bdevs_list": [ 00:16:39.265 { 00:16:39.265 "name": null, 00:16:39.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.265 "is_configured": false, 00:16:39.265 "data_offset": 2048, 00:16:39.265 "data_size": 63488 00:16:39.265 }, 00:16:39.265 { 00:16:39.265 "name": "BaseBdev2", 00:16:39.265 "uuid": "cbf2db59-02c8-525e-9e73-cd7816670de9", 00:16:39.265 "is_configured": true, 00:16:39.265 "data_offset": 2048, 00:16:39.265 "data_size": 63488 00:16:39.265 }, 00:16:39.265 { 00:16:39.265 "name": "BaseBdev3", 00:16:39.265 "uuid": "e00c5aca-1256-8957-a24a-1cfa7d953812", 00:16:39.265 "is_configured": true, 00:16:39.265 "data_offset": 2048, 00:16:39.265 "data_size": 63488 00:16:39.265 }, 00:16:39.265 { 00:16:39.265 "name": "BaseBdev4", 00:16:39.265 "uuid": "4bd428ed-c828-6356-8aff-894db248b734", 00:16:39.265 "is_configured": true, 00:16:39.265 "data_offset": 2048, 00:16:39.265 "data_size": 63488 00:16:39.265 } 00:16:39.265 ] 00:16:39.265 }' 00:16:39.265 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.265 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.524 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:39.783 [2024-07-15 21:52:54.864605] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.783 [2024-07-15 21:52:54.864634] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.783 [2024-07-15 21:52:54.864945] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.783 [2024-07-15 21:52:54.864954] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
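Once bdev_error_inject_error EE_BaseBdev1_malloc write failure fires, the failed member is removed from slot 0 and verify_raid_bdev_state expects discovered/operational to drop from 4 to 3 while the raid1 stays online. That check is one RPC plus jq filters; a simplified sketch of what bdev_raid.sh@116-128 asserts (the real helper loops and compares more fields):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # raid1 stays online after losing one of four members
  [ "$(jq -r .state <<< "$info")" = online ]
  [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 3 ]
  [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 3 ]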
00:16:39.783 [2024-07-15 21:52:54.864969] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.783 [2024-07-15 21:52:54.864973] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1439b835900 name raid_bdev1, state offline 00:16:39.783 0 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65326 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@942 -- # '[' -z 65326 ']' 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # kill -0 65326 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # uname 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # ps -c -o command 65326 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # tail -1 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # process_name=bdevperf 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']' 00:16:39.783 killing process with pid 65326 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # echo 'killing process with pid 65326' 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # kill 65326 00:16:39.783 [2024-07-15 21:52:54.898510] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.783 21:52:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # wait 65326 00:16:39.783 [2024-07-15 21:52:54.922890] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dlzam084Kg 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:40.042 00:16:40.042 real 0m6.953s 00:16:40.042 user 0m11.041s 00:16:40.042 sys 0m1.080s 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:40.042 21:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.042 ************************************ 00:16:40.042 END TEST raid_write_error_test 00:16:40.042 ************************************ 00:16:40.042 21:52:55 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:16:40.042 21:52:55 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:16:40.042 21:52:55 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:16:40.042 21:52:55 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:16:40.042 21:52:55 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:40.042 21:52:55 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:16:40.042 21:52:55 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:40.042 21:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.042 ************************************ 00:16:40.042 START TEST raid_state_function_test_sb_4k 00:16:40.042 ************************************ 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 2 true 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65462 00:16:40.042 Process raid pid: 65462 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65462' 00:16:40.042 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65462 /var/tmp/spdk-raid.sock 00:16:40.043 21:52:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@823 -- # '[' -z 65462 ']' 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:40.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:40.043 21:52:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.043 [2024-07-15 21:52:55.158113] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:40.043 [2024-07-15 21:52:55.158302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:40.660 EAL: TSC is not safe to use in SMP mode 00:16:40.660 EAL: TSC is not invariant 00:16:40.660 [2024-07-15 21:52:55.678168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.660 [2024-07-15 21:52:55.757759] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:40.660 [2024-07-15 21:52:55.759876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.660 [2024-07-15 21:52:55.760697] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.660 [2024-07-15 21:52:55.760711] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # return 0 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:41.229 [2024-07-15 21:52:56.376958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.229 [2024-07-15 21:52:56.377016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.229 [2024-07-15 21:52:56.377021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.229 [2024-07-15 21:52:56.377045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
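Note the order above: bdev_raid_create is accepted even though neither base bdev exists yet ("base bdev BaseBdev1 doesn't exist now"), and the array simply waits in the configuring state until its members appear. A minimal reproduction of that first round trip, assuming the same socket; the jq assertion is mine, not the script's literal check:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # zero base bdevs discovered yet, so state must still be "configuring"
  $rpc bdev_raid_get_bdevs all | \
      jq -e '.[] | select(.name == "Existed_Raid") | .state == "configuring"'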
00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.229 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.488 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.488 "name": "Existed_Raid", 00:16:41.488 "uuid": "9824b04a-42f4-11ef-9f7f-e9a656123a8b", 00:16:41.488 "strip_size_kb": 0, 00:16:41.488 "state": "configuring", 00:16:41.488 "raid_level": "raid1", 00:16:41.488 "superblock": true, 00:16:41.488 "num_base_bdevs": 2, 00:16:41.488 "num_base_bdevs_discovered": 0, 00:16:41.488 "num_base_bdevs_operational": 2, 00:16:41.488 "base_bdevs_list": [ 00:16:41.488 { 00:16:41.488 "name": "BaseBdev1", 00:16:41.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.488 "is_configured": false, 00:16:41.488 "data_offset": 0, 00:16:41.488 "data_size": 0 00:16:41.488 }, 00:16:41.488 { 00:16:41.488 "name": "BaseBdev2", 00:16:41.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.488 "is_configured": false, 00:16:41.488 "data_offset": 0, 00:16:41.488 "data_size": 0 00:16:41.488 } 00:16:41.488 ] 00:16:41.488 }' 00:16:41.488 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.488 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.747 21:52:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.005 [2024-07-15 21:52:57.076931] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.005 [2024-07-15 21:52:57.076951] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2befbe034500 name Existed_Raid, state configuring 00:16:42.005 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:42.264 [2024-07-15 21:52:57.284944] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.264 [2024-07-15 21:52:57.284992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.264 [2024-07-15 21:52:57.284996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.264 [2024-07-15 21:52:57.285019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.264 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:16:42.522 [2024-07-15 21:52:57.493928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.522 BaseBdev1 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@893 -- # local i 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:16:42.522 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.781 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.781 [ 00:16:42.781 { 00:16:42.781 "name": "BaseBdev1", 00:16:42.781 "aliases": [ 00:16:42.781 "98cefa3c-42f4-11ef-9f7f-e9a656123a8b" 00:16:42.781 ], 00:16:42.781 "product_name": "Malloc disk", 00:16:42.781 "block_size": 4096, 00:16:42.781 "num_blocks": 8192, 00:16:42.781 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:42.781 "assigned_rate_limits": { 00:16:42.781 "rw_ios_per_sec": 0, 00:16:42.781 "rw_mbytes_per_sec": 0, 00:16:42.781 "r_mbytes_per_sec": 0, 00:16:42.781 "w_mbytes_per_sec": 0 00:16:42.781 }, 00:16:42.781 "claimed": true, 00:16:42.781 "claim_type": "exclusive_write", 00:16:42.781 "zoned": false, 00:16:42.781 "supported_io_types": { 00:16:42.781 "read": true, 00:16:42.781 "write": true, 00:16:42.781 "unmap": true, 00:16:42.781 "flush": true, 00:16:42.781 "reset": true, 00:16:42.781 "nvme_admin": false, 00:16:42.781 "nvme_io": false, 00:16:42.781 "nvme_io_md": false, 00:16:42.781 "write_zeroes": true, 00:16:42.781 "zcopy": true, 00:16:42.781 "get_zone_info": false, 00:16:42.781 "zone_management": false, 00:16:42.781 "zone_append": false, 00:16:42.781 "compare": false, 00:16:42.781 "compare_and_write": false, 00:16:42.781 "abort": true, 00:16:42.781 "seek_hole": false, 00:16:42.781 "seek_data": false, 00:16:42.781 "copy": true, 00:16:42.781 "nvme_iov_md": false 00:16:42.781 }, 00:16:42.781 "memory_domains": [ 00:16:42.781 { 00:16:42.781 "dma_device_id": "system", 00:16:42.781 "dma_device_type": 1 00:16:42.781 }, 00:16:42.781 { 00:16:42.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.781 "dma_device_type": 2 00:16:42.781 } 00:16:42.781 ], 00:16:42.781 "driver_specific": {} 00:16:42.781 } 00:16:42.781 ] 00:16:42.781 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # return 0 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.040 
21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.040 21:52:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.299 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.299 "name": "Existed_Raid", 00:16:43.299 "uuid": "98af3cfb-42f4-11ef-9f7f-e9a656123a8b", 00:16:43.299 "strip_size_kb": 0, 00:16:43.299 "state": "configuring", 00:16:43.299 "raid_level": "raid1", 00:16:43.299 "superblock": true, 00:16:43.299 "num_base_bdevs": 2, 00:16:43.299 "num_base_bdevs_discovered": 1, 00:16:43.299 "num_base_bdevs_operational": 2, 00:16:43.299 "base_bdevs_list": [ 00:16:43.299 { 00:16:43.299 "name": "BaseBdev1", 00:16:43.299 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:43.299 "is_configured": true, 00:16:43.299 "data_offset": 256, 00:16:43.299 "data_size": 7936 00:16:43.299 }, 00:16:43.299 { 00:16:43.299 "name": "BaseBdev2", 00:16:43.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.299 "is_configured": false, 00:16:43.299 "data_offset": 0, 00:16:43.299 "data_size": 0 00:16:43.299 } 00:16:43.299 ] 00:16:43.299 }' 00:16:43.299 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.299 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:43.817 [2024-07-15 21:52:58.744989] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.817 [2024-07-15 21:52:58.745014] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2befbe034500 name Existed_Raid, state configuring 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:43.817 [2024-07-15 21:52:58.969010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.817 [2024-07-15 21:52:58.969905] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.817 [2024-07-15 21:52:58.969984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:43.817 21:52:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.817 21:52:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.075 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.075 "name": "Existed_Raid", 00:16:44.075 "uuid": "99b0349f-42f4-11ef-9f7f-e9a656123a8b", 00:16:44.075 "strip_size_kb": 0, 00:16:44.075 "state": "configuring", 00:16:44.075 "raid_level": "raid1", 00:16:44.075 "superblock": true, 00:16:44.075 "num_base_bdevs": 2, 00:16:44.075 "num_base_bdevs_discovered": 1, 00:16:44.075 "num_base_bdevs_operational": 2, 00:16:44.075 "base_bdevs_list": [ 00:16:44.075 { 00:16:44.075 "name": "BaseBdev1", 00:16:44.075 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:44.075 "is_configured": true, 00:16:44.075 "data_offset": 256, 00:16:44.075 "data_size": 7936 00:16:44.075 }, 00:16:44.075 { 00:16:44.075 "name": "BaseBdev2", 00:16:44.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.075 "is_configured": false, 00:16:44.075 "data_offset": 0, 00:16:44.075 "data_size": 0 00:16:44.075 } 00:16:44.075 ] 00:16:44.075 }' 00:16:44.075 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.075 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.333 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:16:44.591 [2024-07-15 21:52:59.741135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.591 [2024-07-15 21:52:59.741221] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2befbe034a00 00:16:44.591 [2024-07-15 21:52:59.741227] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:44.591 [2024-07-15 21:52:59.741264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2befbe097e20 
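The 4k variants size their members with bdev_malloc_create 32 4096: 32 MiB at a 4096-byte block size is 8192 blocks, and the -s superblock reserves 256 of them (1 MiB), which is exactly the data_offset 256 / data_size 7936 reported in the JSON above and the blockcnt 7936 in the configure log. As a sketch:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB / 4096 B = 8192 blocks; 8192 - 256 superblock blocks = 7936 data blocks
  $rpc bdev_malloc_create 32 4096 -b BaseBdev2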
00:16:44.591 [2024-07-15 21:52:59.741310] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2befbe034a00 00:16:44.591 [2024-07-15 21:52:59.741314] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2befbe034a00 00:16:44.591 [2024-07-15 21:52:59.741335] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.591 BaseBdev2 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@893 -- # local i 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:16:44.591 21:52:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.849 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.107 [ 00:16:45.107 { 00:16:45.107 "name": "BaseBdev2", 00:16:45.107 "aliases": [ 00:16:45.107 "9a26019c-42f4-11ef-9f7f-e9a656123a8b" 00:16:45.107 ], 00:16:45.107 "product_name": "Malloc disk", 00:16:45.107 "block_size": 4096, 00:16:45.107 "num_blocks": 8192, 00:16:45.107 "uuid": "9a26019c-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.107 "assigned_rate_limits": { 00:16:45.107 "rw_ios_per_sec": 0, 00:16:45.107 "rw_mbytes_per_sec": 0, 00:16:45.107 "r_mbytes_per_sec": 0, 00:16:45.107 "w_mbytes_per_sec": 0 00:16:45.107 }, 00:16:45.107 "claimed": true, 00:16:45.108 "claim_type": "exclusive_write", 00:16:45.108 "zoned": false, 00:16:45.108 "supported_io_types": { 00:16:45.108 "read": true, 00:16:45.108 "write": true, 00:16:45.108 "unmap": true, 00:16:45.108 "flush": true, 00:16:45.108 "reset": true, 00:16:45.108 "nvme_admin": false, 00:16:45.108 "nvme_io": false, 00:16:45.108 "nvme_io_md": false, 00:16:45.108 "write_zeroes": true, 00:16:45.108 "zcopy": true, 00:16:45.108 "get_zone_info": false, 00:16:45.108 "zone_management": false, 00:16:45.108 "zone_append": false, 00:16:45.108 "compare": false, 00:16:45.108 "compare_and_write": false, 00:16:45.108 "abort": true, 00:16:45.108 "seek_hole": false, 00:16:45.108 "seek_data": false, 00:16:45.108 "copy": true, 00:16:45.108 "nvme_iov_md": false 00:16:45.108 }, 00:16:45.108 "memory_domains": [ 00:16:45.108 { 00:16:45.108 "dma_device_id": "system", 00:16:45.108 "dma_device_type": 1 00:16:45.108 }, 00:16:45.108 { 00:16:45.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.108 "dma_device_type": 2 00:16:45.108 } 00:16:45.108 ], 00:16:45.108 "driver_specific": {} 00:16:45.108 } 00:16:45.108 ] 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # return 0 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:45.108 21:53:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.108 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.366 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.366 "name": "Existed_Raid", 00:16:45.366 "uuid": "99b0349f-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.366 "strip_size_kb": 0, 00:16:45.366 "state": "online", 00:16:45.366 "raid_level": "raid1", 00:16:45.366 "superblock": true, 00:16:45.366 "num_base_bdevs": 2, 00:16:45.366 "num_base_bdevs_discovered": 2, 00:16:45.366 "num_base_bdevs_operational": 2, 00:16:45.366 "base_bdevs_list": [ 00:16:45.366 { 00:16:45.366 "name": "BaseBdev1", 00:16:45.366 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.366 "is_configured": true, 00:16:45.366 "data_offset": 256, 00:16:45.366 "data_size": 7936 00:16:45.366 }, 00:16:45.366 { 00:16:45.366 "name": "BaseBdev2", 00:16:45.366 "uuid": "9a26019c-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.366 "is_configured": true, 00:16:45.366 "data_offset": 256, 00:16:45.366 "data_size": 7936 00:16:45.366 } 00:16:45.366 ] 00:16:45.366 }' 00:16:45.366 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.366 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:45.625 21:53:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:45.884 [2024-07-15 21:53:00.985094] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.884 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:45.884 "name": "Existed_Raid", 00:16:45.884 "aliases": [ 00:16:45.884 "99b0349f-42f4-11ef-9f7f-e9a656123a8b" 00:16:45.884 ], 00:16:45.884 "product_name": "Raid Volume", 00:16:45.884 "block_size": 4096, 00:16:45.884 "num_blocks": 7936, 00:16:45.884 "uuid": "99b0349f-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.884 "assigned_rate_limits": { 00:16:45.884 "rw_ios_per_sec": 0, 00:16:45.884 "rw_mbytes_per_sec": 0, 00:16:45.884 "r_mbytes_per_sec": 0, 00:16:45.884 "w_mbytes_per_sec": 0 00:16:45.884 }, 00:16:45.884 "claimed": false, 00:16:45.884 "zoned": false, 00:16:45.884 "supported_io_types": { 00:16:45.884 "read": true, 00:16:45.884 "write": true, 00:16:45.884 "unmap": false, 00:16:45.884 "flush": false, 00:16:45.884 "reset": true, 00:16:45.884 "nvme_admin": false, 00:16:45.884 "nvme_io": false, 00:16:45.884 "nvme_io_md": false, 00:16:45.884 "write_zeroes": true, 00:16:45.884 "zcopy": false, 00:16:45.884 "get_zone_info": false, 00:16:45.884 "zone_management": false, 00:16:45.884 "zone_append": false, 00:16:45.884 "compare": false, 00:16:45.884 "compare_and_write": false, 00:16:45.884 "abort": false, 00:16:45.884 "seek_hole": false, 00:16:45.884 "seek_data": false, 00:16:45.884 "copy": false, 00:16:45.884 "nvme_iov_md": false 00:16:45.884 }, 00:16:45.884 "memory_domains": [ 00:16:45.884 { 00:16:45.884 "dma_device_id": "system", 00:16:45.884 "dma_device_type": 1 00:16:45.884 }, 00:16:45.884 { 00:16:45.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.884 "dma_device_type": 2 00:16:45.884 }, 00:16:45.884 { 00:16:45.884 "dma_device_id": "system", 00:16:45.884 "dma_device_type": 1 00:16:45.884 }, 00:16:45.884 { 00:16:45.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.884 "dma_device_type": 2 00:16:45.884 } 00:16:45.884 ], 00:16:45.884 "driver_specific": { 00:16:45.884 "raid": { 00:16:45.884 "uuid": "99b0349f-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.884 "strip_size_kb": 0, 00:16:45.884 "state": "online", 00:16:45.884 "raid_level": "raid1", 00:16:45.884 "superblock": true, 00:16:45.884 "num_base_bdevs": 2, 00:16:45.884 "num_base_bdevs_discovered": 2, 00:16:45.884 "num_base_bdevs_operational": 2, 00:16:45.884 "base_bdevs_list": [ 00:16:45.884 { 00:16:45.884 "name": "BaseBdev1", 00:16:45.884 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.884 "is_configured": true, 00:16:45.884 "data_offset": 256, 00:16:45.884 "data_size": 7936 00:16:45.884 }, 00:16:45.884 { 00:16:45.884 "name": "BaseBdev2", 00:16:45.884 "uuid": "9a26019c-42f4-11ef-9f7f-e9a656123a8b", 00:16:45.884 "is_configured": true, 00:16:45.884 "data_offset": 256, 00:16:45.884 "data_size": 7936 00:16:45.884 } 00:16:45.884 ] 00:16:45.884 } 00:16:45.884 } 00:16:45.884 }' 00:16:45.884 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.884 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:45.884 BaseBdev2' 00:16:45.884 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.884 21:53:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:45.884 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.142 "name": "BaseBdev1", 00:16:46.142 "aliases": [ 00:16:46.142 "98cefa3c-42f4-11ef-9f7f-e9a656123a8b" 00:16:46.142 ], 00:16:46.142 "product_name": "Malloc disk", 00:16:46.142 "block_size": 4096, 00:16:46.142 "num_blocks": 8192, 00:16:46.142 "uuid": "98cefa3c-42f4-11ef-9f7f-e9a656123a8b", 00:16:46.142 "assigned_rate_limits": { 00:16:46.142 "rw_ios_per_sec": 0, 00:16:46.142 "rw_mbytes_per_sec": 0, 00:16:46.142 "r_mbytes_per_sec": 0, 00:16:46.142 "w_mbytes_per_sec": 0 00:16:46.142 }, 00:16:46.142 "claimed": true, 00:16:46.142 "claim_type": "exclusive_write", 00:16:46.142 "zoned": false, 00:16:46.142 "supported_io_types": { 00:16:46.142 "read": true, 00:16:46.142 "write": true, 00:16:46.142 "unmap": true, 00:16:46.142 "flush": true, 00:16:46.142 "reset": true, 00:16:46.142 "nvme_admin": false, 00:16:46.142 "nvme_io": false, 00:16:46.142 "nvme_io_md": false, 00:16:46.142 "write_zeroes": true, 00:16:46.142 "zcopy": true, 00:16:46.142 "get_zone_info": false, 00:16:46.142 "zone_management": false, 00:16:46.142 "zone_append": false, 00:16:46.142 "compare": false, 00:16:46.142 "compare_and_write": false, 00:16:46.142 "abort": true, 00:16:46.142 "seek_hole": false, 00:16:46.142 "seek_data": false, 00:16:46.142 "copy": true, 00:16:46.142 "nvme_iov_md": false 00:16:46.142 }, 00:16:46.142 "memory_domains": [ 00:16:46.142 { 00:16:46.142 "dma_device_id": "system", 00:16:46.142 "dma_device_type": 1 00:16:46.142 }, 00:16:46.142 { 00:16:46.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.142 "dma_device_type": 2 00:16:46.142 } 00:16:46.142 ], 00:16:46.142 "driver_specific": {} 00:16:46.142 }' 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.142 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:46.143 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.401 "name": "BaseBdev2", 00:16:46.401 "aliases": [ 00:16:46.401 "9a26019c-42f4-11ef-9f7f-e9a656123a8b" 00:16:46.401 ], 00:16:46.401 "product_name": "Malloc disk", 00:16:46.401 "block_size": 4096, 00:16:46.401 "num_blocks": 8192, 00:16:46.401 "uuid": "9a26019c-42f4-11ef-9f7f-e9a656123a8b", 00:16:46.401 "assigned_rate_limits": { 00:16:46.401 "rw_ios_per_sec": 0, 00:16:46.401 "rw_mbytes_per_sec": 0, 00:16:46.401 "r_mbytes_per_sec": 0, 00:16:46.401 "w_mbytes_per_sec": 0 00:16:46.401 }, 00:16:46.401 "claimed": true, 00:16:46.401 "claim_type": "exclusive_write", 00:16:46.401 "zoned": false, 00:16:46.401 "supported_io_types": { 00:16:46.401 "read": true, 00:16:46.401 "write": true, 00:16:46.401 "unmap": true, 00:16:46.401 "flush": true, 00:16:46.401 "reset": true, 00:16:46.401 "nvme_admin": false, 00:16:46.401 "nvme_io": false, 00:16:46.401 "nvme_io_md": false, 00:16:46.401 "write_zeroes": true, 00:16:46.401 "zcopy": true, 00:16:46.401 "get_zone_info": false, 00:16:46.401 "zone_management": false, 00:16:46.401 "zone_append": false, 00:16:46.401 "compare": false, 00:16:46.401 "compare_and_write": false, 00:16:46.401 "abort": true, 00:16:46.401 "seek_hole": false, 00:16:46.401 "seek_data": false, 00:16:46.401 "copy": true, 00:16:46.401 "nvme_iov_md": false 00:16:46.401 }, 00:16:46.401 "memory_domains": [ 00:16:46.401 { 00:16:46.401 "dma_device_id": "system", 00:16:46.401 "dma_device_type": 1 00:16:46.401 }, 00:16:46.401 { 00:16:46.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.401 "dma_device_type": 2 00:16:46.401 } 00:16:46.401 ], 00:16:46.401 "driver_specific": {} 00:16:46.401 }' 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.401 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:46.683 [2024-07-15 21:53:01.845096] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.683 
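verify_raid_bdev_properties, traced just above, checks that the raid volume and every configured member agree on block_size and expose no metadata or DIF (md_size, md_interleave and dif_type all null). Condensed into one loop for illustration; the real script issues separate jq calls per field:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid")
  for name in $names; do
      base=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      # member block size must match the volume (4096 == 4096 in this run)
      [ "$(jq .block_size <<< "$base")" = "$(jq .block_size <<< "$raid")" ]
      [ "$(jq .md_size <<< "$base")" = null ]
  done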
21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.683 21:53:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.942 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.942 "name": "Existed_Raid", 00:16:46.942 "uuid": "99b0349f-42f4-11ef-9f7f-e9a656123a8b", 00:16:46.942 "strip_size_kb": 0, 00:16:46.942 "state": "online", 00:16:46.942 "raid_level": "raid1", 00:16:46.942 "superblock": true, 00:16:46.942 "num_base_bdevs": 2, 00:16:46.942 "num_base_bdevs_discovered": 1, 00:16:46.942 "num_base_bdevs_operational": 1, 00:16:46.942 "base_bdevs_list": [ 00:16:46.942 { 00:16:46.942 "name": null, 00:16:46.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.942 "is_configured": false, 00:16:46.942 "data_offset": 256, 00:16:46.942 "data_size": 7936 00:16:46.942 }, 00:16:46.942 { 00:16:46.942 "name": "BaseBdev2", 00:16:46.942 "uuid": "9a26019c-42f4-11ef-9f7f-e9a656123a8b", 00:16:46.942 "is_configured": true, 00:16:46.942 "data_offset": 256, 00:16:46.942 "data_size": 7936 00:16:46.942 } 00:16:46.942 ] 00:16:46.942 }' 00:16:46.942 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.942 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.509 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:47.767 [2024-07-15 21:53:02.859065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:47.767 [2024-07-15 21:53:02.859121] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.767 [2024-07-15 21:53:02.864930] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.767 [2024-07-15 21:53:02.864947] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.767 [2024-07-15 21:53:02.864951] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2befbe034a00 name Existed_Raid, state offline 00:16:47.767 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:47.767 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:47.767 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.767 21:53:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65462 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@942 -- # '[' -z 65462 ']' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # kill -0 65462 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@947 -- # uname 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # ps -c -o command 65462 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # tail -1 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:16:48.025 killing process with pid 65462 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # echo 'killing process with pid 65462' 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@961 -- # kill 65462 
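Because raid1 has redundancy, deleting BaseBdev1 earlier left Existed_Raid online with a single operational member; only the second deletion (bdev_malloc_delete BaseBdev2, traced above) pushed the state from online to offline, after which the bdev_svc process is killed. The first half of that expectation as a sketch, with the jq assertion again being a condensation:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_delete BaseBdev1
  # redundancy keeps the array serviceable on one member
  $rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "Existed_Raid")
      | .state == "online" and .num_base_bdevs_operational == 1'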
00:16:48.025 [2024-07-15 21:53:03.091529] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.025 [2024-07-15 21:53:03.091560] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.025 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # wait 65462 00:16:48.284 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:16:48.284 00:16:48.284 real 0m8.114s 00:16:48.284 user 0m13.938s 00:16:48.284 sys 0m1.559s 00:16:48.284 ************************************ 00:16:48.284 END TEST raid_state_function_test_sb_4k 00:16:48.284 ************************************ 00:16:48.284 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:48.284 21:53:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.284 21:53:03 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:16:48.284 21:53:03 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:48.284 21:53:03 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:16:48.284 21:53:03 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:48.284 21:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.284 ************************************ 00:16:48.284 START TEST raid_superblock_test_4k 00:16:48.284 ************************************ 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 2 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65732 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65732 /var/tmp/spdk-raid.sock 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@823 -- # '[' -z 65732 ']' 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:48.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:48.284 21:53:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.284 [2024-07-15 21:53:03.321177] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:48.284 [2024-07-15 21:53:03.321484] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:48.905 EAL: TSC is not safe to use in SMP mode 00:16:48.905 EAL: TSC is not invariant 00:16:48.905 [2024-07-15 21:53:04.011242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.168 [2024-07-15 21:53:04.095518] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:49.168 [2024-07-15 21:53:04.097873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.168 [2024-07-15 21:53:04.098729] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.168 [2024-07-15 21:53:04.098748] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # return 0 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.168 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:16:49.425 malloc1 00:16:49.425 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:49.682 [2024-07-15 21:53:04.779249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.682 [2024-07-15 21:53:04.779314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.682 [2024-07-15 21:53:04.779343] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234780 00:16:49.682 [2024-07-15 21:53:04.779351] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.682 [2024-07-15 21:53:04.780436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.682 [2024-07-15 21:53:04.780463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.682 pt1 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.682 21:53:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:16:49.941 malloc2 00:16:49.941 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.199 [2024-07-15 21:53:05.263271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.199 [2024-07-15 21:53:05.263334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.199 [2024-07-15 21:53:05.263361] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234c80 00:16:50.199 [2024-07-15 21:53:05.263368] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.199 [2024-07-15 21:53:05.264050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.199 [2024-07-15 21:53:05.264089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.199 pt2 00:16:50.199 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:50.199 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:50.199 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:50.459 [2024-07-15 21:53:05.483278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.459 [2024-07-15 21:53:05.483859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
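The setup traced above reduces to five RPC calls against the bdev_svc app. As a minimal sketch for reproducing it by hand (assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock; the names, sizes, and UUIDs are exactly the ones in the trace):

  # two 32 MB malloc bdevs with a 4096-byte block size (the "4k" in the test name)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2
  # wrap each malloc bdev in a passthru bdev so the test controls the base bdev UUIDs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble the RAID1 volume; -s writes an on-disk superblock, which accounts for the
  # data_offset of 256 blocks dumped below (8192-block base bdevs -> 7936-block raid volume)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s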
00:16:50.459 [2024-07-15 21:53:05.483932] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5da3c234f00 00:16:50.459 [2024-07-15 21:53:05.483938] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:50.459 [2024-07-15 21:53:05.483971] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5da3c297e20 00:16:50.459 [2024-07-15 21:53:05.484069] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5da3c234f00 00:16:50.459 [2024-07-15 21:53:05.484089] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x5da3c234f00 00:16:50.459 [2024-07-15 21:53:05.484150] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.459 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.718 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.718 "name": "raid_bdev1", 00:16:50.718 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:50.718 "strip_size_kb": 0, 00:16:50.718 "state": "online", 00:16:50.718 "raid_level": "raid1", 00:16:50.718 "superblock": true, 00:16:50.718 "num_base_bdevs": 2, 00:16:50.718 "num_base_bdevs_discovered": 2, 00:16:50.718 "num_base_bdevs_operational": 2, 00:16:50.718 "base_bdevs_list": [ 00:16:50.718 { 00:16:50.718 "name": "pt1", 00:16:50.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.718 "is_configured": true, 00:16:50.718 "data_offset": 256, 00:16:50.718 "data_size": 7936 00:16:50.718 }, 00:16:50.718 { 00:16:50.718 "name": "pt2", 00:16:50.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.718 "is_configured": true, 00:16:50.718 "data_offset": 256, 00:16:50.718 "data_size": 7936 00:16:50.718 } 00:16:50.718 ] 00:16:50.718 }' 00:16:50.718 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.718 21:53:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.976 21:53:05 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:50.976 21:53:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.234 [2024-07-15 21:53:06.223332] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:51.235 "name": "raid_bdev1", 00:16:51.235 "aliases": [ 00:16:51.235 "9d923450-42f4-11ef-9f7f-e9a656123a8b" 00:16:51.235 ], 00:16:51.235 "product_name": "Raid Volume", 00:16:51.235 "block_size": 4096, 00:16:51.235 "num_blocks": 7936, 00:16:51.235 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:51.235 "assigned_rate_limits": { 00:16:51.235 "rw_ios_per_sec": 0, 00:16:51.235 "rw_mbytes_per_sec": 0, 00:16:51.235 "r_mbytes_per_sec": 0, 00:16:51.235 "w_mbytes_per_sec": 0 00:16:51.235 }, 00:16:51.235 "claimed": false, 00:16:51.235 "zoned": false, 00:16:51.235 "supported_io_types": { 00:16:51.235 "read": true, 00:16:51.235 "write": true, 00:16:51.235 "unmap": false, 00:16:51.235 "flush": false, 00:16:51.235 "reset": true, 00:16:51.235 "nvme_admin": false, 00:16:51.235 "nvme_io": false, 00:16:51.235 "nvme_io_md": false, 00:16:51.235 "write_zeroes": true, 00:16:51.235 "zcopy": false, 00:16:51.235 "get_zone_info": false, 00:16:51.235 "zone_management": false, 00:16:51.235 "zone_append": false, 00:16:51.235 "compare": false, 00:16:51.235 "compare_and_write": false, 00:16:51.235 "abort": false, 00:16:51.235 "seek_hole": false, 00:16:51.235 "seek_data": false, 00:16:51.235 "copy": false, 00:16:51.235 "nvme_iov_md": false 00:16:51.235 }, 00:16:51.235 "memory_domains": [ 00:16:51.235 { 00:16:51.235 "dma_device_id": "system", 00:16:51.235 "dma_device_type": 1 00:16:51.235 }, 00:16:51.235 { 00:16:51.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.235 "dma_device_type": 2 00:16:51.235 }, 00:16:51.235 { 00:16:51.235 "dma_device_id": "system", 00:16:51.235 "dma_device_type": 1 00:16:51.235 }, 00:16:51.235 { 00:16:51.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.235 "dma_device_type": 2 00:16:51.235 } 00:16:51.235 ], 00:16:51.235 "driver_specific": { 00:16:51.235 "raid": { 00:16:51.235 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:51.235 "strip_size_kb": 0, 00:16:51.235 "state": "online", 00:16:51.235 "raid_level": "raid1", 00:16:51.235 "superblock": true, 00:16:51.235 "num_base_bdevs": 2, 00:16:51.235 "num_base_bdevs_discovered": 2, 00:16:51.235 "num_base_bdevs_operational": 2, 00:16:51.235 "base_bdevs_list": [ 00:16:51.235 { 00:16:51.235 "name": "pt1", 00:16:51.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.235 "is_configured": true, 00:16:51.235 "data_offset": 256, 00:16:51.235 "data_size": 7936 00:16:51.235 }, 00:16:51.235 { 00:16:51.235 "name": "pt2", 00:16:51.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.235 "is_configured": true, 
00:16:51.235 "data_offset": 256, 00:16:51.235 "data_size": 7936 00:16:51.235 } 00:16:51.235 ] 00:16:51.235 } 00:16:51.235 } 00:16:51.235 }' 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:51.235 pt2' 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:51.235 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.494 "name": "pt1", 00:16:51.494 "aliases": [ 00:16:51.494 "00000000-0000-0000-0000-000000000001" 00:16:51.494 ], 00:16:51.494 "product_name": "passthru", 00:16:51.494 "block_size": 4096, 00:16:51.494 "num_blocks": 8192, 00:16:51.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.494 "assigned_rate_limits": { 00:16:51.494 "rw_ios_per_sec": 0, 00:16:51.494 "rw_mbytes_per_sec": 0, 00:16:51.494 "r_mbytes_per_sec": 0, 00:16:51.494 "w_mbytes_per_sec": 0 00:16:51.494 }, 00:16:51.494 "claimed": true, 00:16:51.494 "claim_type": "exclusive_write", 00:16:51.494 "zoned": false, 00:16:51.494 "supported_io_types": { 00:16:51.494 "read": true, 00:16:51.494 "write": true, 00:16:51.494 "unmap": true, 00:16:51.494 "flush": true, 00:16:51.494 "reset": true, 00:16:51.494 "nvme_admin": false, 00:16:51.494 "nvme_io": false, 00:16:51.494 "nvme_io_md": false, 00:16:51.494 "write_zeroes": true, 00:16:51.494 "zcopy": true, 00:16:51.494 "get_zone_info": false, 00:16:51.494 "zone_management": false, 00:16:51.494 "zone_append": false, 00:16:51.494 "compare": false, 00:16:51.494 "compare_and_write": false, 00:16:51.494 "abort": true, 00:16:51.494 "seek_hole": false, 00:16:51.494 "seek_data": false, 00:16:51.494 "copy": true, 00:16:51.494 "nvme_iov_md": false 00:16:51.494 }, 00:16:51.494 "memory_domains": [ 00:16:51.494 { 00:16:51.494 "dma_device_id": "system", 00:16:51.494 "dma_device_type": 1 00:16:51.494 }, 00:16:51.494 { 00:16:51.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.494 "dma_device_type": 2 00:16:51.494 } 00:16:51.494 ], 00:16:51.494 "driver_specific": { 00:16:51.494 "passthru": { 00:16:51.494 "name": "pt1", 00:16:51.494 "base_bdev_name": "malloc1" 00:16:51.494 } 00:16:51.494 } 00:16:51.494 }' 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:51.494 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.753 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.753 "name": "pt2", 00:16:51.753 "aliases": [ 00:16:51.754 "00000000-0000-0000-0000-000000000002" 00:16:51.754 ], 00:16:51.754 "product_name": "passthru", 00:16:51.754 "block_size": 4096, 00:16:51.754 "num_blocks": 8192, 00:16:51.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.754 "assigned_rate_limits": { 00:16:51.754 "rw_ios_per_sec": 0, 00:16:51.754 "rw_mbytes_per_sec": 0, 00:16:51.754 "r_mbytes_per_sec": 0, 00:16:51.754 "w_mbytes_per_sec": 0 00:16:51.754 }, 00:16:51.754 "claimed": true, 00:16:51.754 "claim_type": "exclusive_write", 00:16:51.754 "zoned": false, 00:16:51.754 "supported_io_types": { 00:16:51.754 "read": true, 00:16:51.754 "write": true, 00:16:51.754 "unmap": true, 00:16:51.754 "flush": true, 00:16:51.754 "reset": true, 00:16:51.754 "nvme_admin": false, 00:16:51.754 "nvme_io": false, 00:16:51.754 "nvme_io_md": false, 00:16:51.754 "write_zeroes": true, 00:16:51.754 "zcopy": true, 00:16:51.754 "get_zone_info": false, 00:16:51.754 "zone_management": false, 00:16:51.754 "zone_append": false, 00:16:51.754 "compare": false, 00:16:51.754 "compare_and_write": false, 00:16:51.754 "abort": true, 00:16:51.754 "seek_hole": false, 00:16:51.754 "seek_data": false, 00:16:51.754 "copy": true, 00:16:51.754 "nvme_iov_md": false 00:16:51.754 }, 00:16:51.754 "memory_domains": [ 00:16:51.754 { 00:16:51.754 "dma_device_id": "system", 00:16:51.754 "dma_device_type": 1 00:16:51.754 }, 00:16:51.754 { 00:16:51.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.754 "dma_device_type": 2 00:16:51.754 } 00:16:51.754 ], 00:16:51.754 "driver_specific": { 00:16:51.754 "passthru": { 00:16:51.754 "name": "pt2", 00:16:51.754 "base_bdev_name": "malloc2" 00:16:51.754 } 00:16:51.754 } 00:16:51.754 }' 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:51.754 21:53:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.013 [2024-07-15 21:53:07.043365] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.013 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9d923450-42f4-11ef-9f7f-e9a656123a8b 00:16:52.013 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 9d923450-42f4-11ef-9f7f-e9a656123a8b ']' 00:16:52.013 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:52.273 [2024-07-15 21:53:07.311329] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.273 [2024-07-15 21:53:07.311346] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.273 [2024-07-15 21:53:07.311382] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.273 [2024-07-15 21:53:07.311395] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.273 [2024-07-15 21:53:07.311399] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c234f00 name raid_bdev1, state offline 00:16:52.273 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.273 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:52.532 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:52.532 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:52.532 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.532 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:52.790 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.790 21:53:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:53.049 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:53.049 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:53.308 21:53:08 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # local es=0 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:53.308 [2024-07-15 21:53:08.455353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:53.308 [2024-07-15 21:53:08.456028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:53.308 [2024-07-15 21:53:08.456051] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:53.308 [2024-07-15 21:53:08.456088] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:53.308 [2024-07-15 21:53:08.456105] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.308 [2024-07-15 21:53:08.456109] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c234c80 name raid_bdev1, state configuring 00:16:53.308 request: 00:16:53.308 { 00:16:53.308 "name": "raid_bdev1", 00:16:53.308 "raid_level": "raid1", 00:16:53.308 "base_bdevs": [ 00:16:53.308 "malloc1", 00:16:53.308 "malloc2" 00:16:53.308 ], 00:16:53.308 "superblock": false, 00:16:53.308 "method": "bdev_raid_create", 00:16:53.308 "req_id": 1 00:16:53.308 } 00:16:53.308 Got JSON-RPC error response 00:16:53.308 response: 00:16:53.308 { 00:16:53.308 "code": -17, 00:16:53.308 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:53.308 } 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@645 -- # es=1 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:16:53.308 21:53:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:16:53.309 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.309 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:53.567 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:53.567 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:53.567 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.851 [2024-07-15 21:53:08.907360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.851 [2024-07-15 21:53:08.907426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.851 [2024-07-15 21:53:08.907479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234780 00:16:53.851 [2024-07-15 21:53:08.907486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.851 [2024-07-15 21:53:08.908276] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.851 [2024-07-15 21:53:08.908318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.851 [2024-07-15 21:53:08.908340] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:53.851 [2024-07-15 21:53:08.908352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.851 pt1 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.851 21:53:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.110 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.110 "name": "raid_bdev1", 00:16:54.110 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:54.110 "strip_size_kb": 0, 00:16:54.110 "state": "configuring", 00:16:54.110 "raid_level": "raid1", 00:16:54.110 "superblock": true, 00:16:54.110 "num_base_bdevs": 2, 00:16:54.110 "num_base_bdevs_discovered": 1, 00:16:54.110 "num_base_bdevs_operational": 2, 00:16:54.110 
"base_bdevs_list": [ 00:16:54.110 { 00:16:54.110 "name": "pt1", 00:16:54.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.110 "is_configured": true, 00:16:54.110 "data_offset": 256, 00:16:54.110 "data_size": 7936 00:16:54.110 }, 00:16:54.110 { 00:16:54.110 "name": null, 00:16:54.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.110 "is_configured": false, 00:16:54.110 "data_offset": 256, 00:16:54.110 "data_size": 7936 00:16:54.110 } 00:16:54.110 ] 00:16:54.110 }' 00:16:54.110 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.110 21:53:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.369 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:54.369 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:54.369 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.369 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.628 [2024-07-15 21:53:09.683382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.628 [2024-07-15 21:53:09.683443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.628 [2024-07-15 21:53:09.683470] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234f00 00:16:54.628 [2024-07-15 21:53:09.683477] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.628 [2024-07-15 21:53:09.683594] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.628 [2024-07-15 21:53:09.683616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.628 [2024-07-15 21:53:09.683637] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:54.628 [2024-07-15 21:53:09.683645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.628 [2024-07-15 21:53:09.683707] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5da3c235180 00:16:54.628 [2024-07-15 21:53:09.683711] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.628 [2024-07-15 21:53:09.683746] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5da3c297e20 00:16:54.628 [2024-07-15 21:53:09.683832] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5da3c235180 00:16:54.628 [2024-07-15 21:53:09.683837] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x5da3c235180 00:16:54.628 [2024-07-15 21:53:09.683860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.628 pt2 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.628 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.886 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.886 "name": "raid_bdev1", 00:16:54.886 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:54.886 "strip_size_kb": 0, 00:16:54.886 "state": "online", 00:16:54.886 "raid_level": "raid1", 00:16:54.886 "superblock": true, 00:16:54.886 "num_base_bdevs": 2, 00:16:54.886 "num_base_bdevs_discovered": 2, 00:16:54.886 "num_base_bdevs_operational": 2, 00:16:54.886 "base_bdevs_list": [ 00:16:54.886 { 00:16:54.886 "name": "pt1", 00:16:54.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.886 "is_configured": true, 00:16:54.886 "data_offset": 256, 00:16:54.886 "data_size": 7936 00:16:54.886 }, 00:16:54.886 { 00:16:54.887 "name": "pt2", 00:16:54.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.887 "is_configured": true, 00:16:54.887 "data_offset": 256, 00:16:54.887 "data_size": 7936 00:16:54.887 } 00:16:54.887 ] 00:16:54.887 }' 00:16:54.887 21:53:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.887 21:53:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.145 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:55.416 [2024-07-15 21:53:10.443437] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.416 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:55.416 "name": "raid_bdev1", 00:16:55.416 "aliases": [ 00:16:55.416 "9d923450-42f4-11ef-9f7f-e9a656123a8b" 00:16:55.416 
], 00:16:55.416 "product_name": "Raid Volume", 00:16:55.416 "block_size": 4096, 00:16:55.416 "num_blocks": 7936, 00:16:55.416 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:55.416 "assigned_rate_limits": { 00:16:55.416 "rw_ios_per_sec": 0, 00:16:55.416 "rw_mbytes_per_sec": 0, 00:16:55.416 "r_mbytes_per_sec": 0, 00:16:55.416 "w_mbytes_per_sec": 0 00:16:55.416 }, 00:16:55.416 "claimed": false, 00:16:55.416 "zoned": false, 00:16:55.416 "supported_io_types": { 00:16:55.416 "read": true, 00:16:55.416 "write": true, 00:16:55.416 "unmap": false, 00:16:55.416 "flush": false, 00:16:55.416 "reset": true, 00:16:55.416 "nvme_admin": false, 00:16:55.416 "nvme_io": false, 00:16:55.416 "nvme_io_md": false, 00:16:55.416 "write_zeroes": true, 00:16:55.416 "zcopy": false, 00:16:55.416 "get_zone_info": false, 00:16:55.416 "zone_management": false, 00:16:55.416 "zone_append": false, 00:16:55.416 "compare": false, 00:16:55.416 "compare_and_write": false, 00:16:55.416 "abort": false, 00:16:55.416 "seek_hole": false, 00:16:55.416 "seek_data": false, 00:16:55.416 "copy": false, 00:16:55.416 "nvme_iov_md": false 00:16:55.416 }, 00:16:55.416 "memory_domains": [ 00:16:55.416 { 00:16:55.416 "dma_device_id": "system", 00:16:55.416 "dma_device_type": 1 00:16:55.416 }, 00:16:55.416 { 00:16:55.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.417 "dma_device_type": 2 00:16:55.417 }, 00:16:55.417 { 00:16:55.417 "dma_device_id": "system", 00:16:55.417 "dma_device_type": 1 00:16:55.417 }, 00:16:55.417 { 00:16:55.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.417 "dma_device_type": 2 00:16:55.417 } 00:16:55.417 ], 00:16:55.417 "driver_specific": { 00:16:55.417 "raid": { 00:16:55.417 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:55.417 "strip_size_kb": 0, 00:16:55.417 "state": "online", 00:16:55.417 "raid_level": "raid1", 00:16:55.417 "superblock": true, 00:16:55.417 "num_base_bdevs": 2, 00:16:55.417 "num_base_bdevs_discovered": 2, 00:16:55.417 "num_base_bdevs_operational": 2, 00:16:55.417 "base_bdevs_list": [ 00:16:55.417 { 00:16:55.417 "name": "pt1", 00:16:55.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.417 "is_configured": true, 00:16:55.417 "data_offset": 256, 00:16:55.417 "data_size": 7936 00:16:55.417 }, 00:16:55.417 { 00:16:55.417 "name": "pt2", 00:16:55.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.417 "is_configured": true, 00:16:55.417 "data_offset": 256, 00:16:55.417 "data_size": 7936 00:16:55.417 } 00:16:55.417 ] 00:16:55.417 } 00:16:55.417 } 00:16:55.417 }' 00:16:55.417 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.417 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:55.417 pt2' 00:16:55.417 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.417 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:55.417 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.707 "name": "pt1", 00:16:55.707 "aliases": [ 00:16:55.707 "00000000-0000-0000-0000-000000000001" 00:16:55.707 ], 00:16:55.707 "product_name": "passthru", 00:16:55.707 "block_size": 4096, 00:16:55.707 "num_blocks": 
8192, 00:16:55.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.707 "assigned_rate_limits": { 00:16:55.707 "rw_ios_per_sec": 0, 00:16:55.707 "rw_mbytes_per_sec": 0, 00:16:55.707 "r_mbytes_per_sec": 0, 00:16:55.707 "w_mbytes_per_sec": 0 00:16:55.707 }, 00:16:55.707 "claimed": true, 00:16:55.707 "claim_type": "exclusive_write", 00:16:55.707 "zoned": false, 00:16:55.707 "supported_io_types": { 00:16:55.707 "read": true, 00:16:55.707 "write": true, 00:16:55.707 "unmap": true, 00:16:55.707 "flush": true, 00:16:55.707 "reset": true, 00:16:55.707 "nvme_admin": false, 00:16:55.707 "nvme_io": false, 00:16:55.707 "nvme_io_md": false, 00:16:55.707 "write_zeroes": true, 00:16:55.707 "zcopy": true, 00:16:55.707 "get_zone_info": false, 00:16:55.707 "zone_management": false, 00:16:55.707 "zone_append": false, 00:16:55.707 "compare": false, 00:16:55.707 "compare_and_write": false, 00:16:55.707 "abort": true, 00:16:55.707 "seek_hole": false, 00:16:55.707 "seek_data": false, 00:16:55.707 "copy": true, 00:16:55.707 "nvme_iov_md": false 00:16:55.707 }, 00:16:55.707 "memory_domains": [ 00:16:55.707 { 00:16:55.707 "dma_device_id": "system", 00:16:55.707 "dma_device_type": 1 00:16:55.707 }, 00:16:55.707 { 00:16:55.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.707 "dma_device_type": 2 00:16:55.707 } 00:16:55.707 ], 00:16:55.707 "driver_specific": { 00:16:55.707 "passthru": { 00:16:55.707 "name": "pt1", 00:16:55.707 "base_bdev_name": "malloc1" 00:16:55.707 } 00:16:55.707 } 00:16:55.707 }' 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.707 21:53:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.971 "name": "pt2", 00:16:55.971 "aliases": [ 00:16:55.971 "00000000-0000-0000-0000-000000000002" 00:16:55.971 ], 00:16:55.971 "product_name": "passthru", 00:16:55.971 "block_size": 4096, 00:16:55.971 "num_blocks": 8192, 00:16:55.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.971 "assigned_rate_limits": 
{ 00:16:55.971 "rw_ios_per_sec": 0, 00:16:55.971 "rw_mbytes_per_sec": 0, 00:16:55.971 "r_mbytes_per_sec": 0, 00:16:55.971 "w_mbytes_per_sec": 0 00:16:55.971 }, 00:16:55.971 "claimed": true, 00:16:55.971 "claim_type": "exclusive_write", 00:16:55.971 "zoned": false, 00:16:55.971 "supported_io_types": { 00:16:55.971 "read": true, 00:16:55.971 "write": true, 00:16:55.971 "unmap": true, 00:16:55.971 "flush": true, 00:16:55.971 "reset": true, 00:16:55.971 "nvme_admin": false, 00:16:55.971 "nvme_io": false, 00:16:55.971 "nvme_io_md": false, 00:16:55.971 "write_zeroes": true, 00:16:55.971 "zcopy": true, 00:16:55.971 "get_zone_info": false, 00:16:55.971 "zone_management": false, 00:16:55.971 "zone_append": false, 00:16:55.971 "compare": false, 00:16:55.971 "compare_and_write": false, 00:16:55.971 "abort": true, 00:16:55.971 "seek_hole": false, 00:16:55.971 "seek_data": false, 00:16:55.971 "copy": true, 00:16:55.971 "nvme_iov_md": false 00:16:55.971 }, 00:16:55.971 "memory_domains": [ 00:16:55.971 { 00:16:55.971 "dma_device_id": "system", 00:16:55.971 "dma_device_type": 1 00:16:55.971 }, 00:16:55.971 { 00:16:55.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.971 "dma_device_type": 2 00:16:55.971 } 00:16:55.971 ], 00:16:55.971 "driver_specific": { 00:16:55.971 "passthru": { 00:16:55.971 "name": "pt2", 00:16:55.971 "base_bdev_name": "malloc2" 00:16:55.971 } 00:16:55.971 } 00:16:55.971 }' 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.971 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:56.230 [2024-07-15 21:53:11.263593] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.230 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 9d923450-42f4-11ef-9f7f-e9a656123a8b '!=' 9d923450-42f4-11ef-9f7f-e9a656123a8b ']' 00:16:56.230 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:56.230 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:56.230 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:16:56.230 21:53:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:56.489 [2024-07-15 21:53:11.479479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.489 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.748 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.748 "name": "raid_bdev1", 00:16:56.748 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:56.748 "strip_size_kb": 0, 00:16:56.748 "state": "online", 00:16:56.748 "raid_level": "raid1", 00:16:56.748 "superblock": true, 00:16:56.748 "num_base_bdevs": 2, 00:16:56.748 "num_base_bdevs_discovered": 1, 00:16:56.748 "num_base_bdevs_operational": 1, 00:16:56.748 "base_bdevs_list": [ 00:16:56.748 { 00:16:56.748 "name": null, 00:16:56.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.748 "is_configured": false, 00:16:56.748 "data_offset": 256, 00:16:56.748 "data_size": 7936 00:16:56.748 }, 00:16:56.748 { 00:16:56.748 "name": "pt2", 00:16:56.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.748 "is_configured": true, 00:16:56.748 "data_offset": 256, 00:16:56.748 "data_size": 7936 00:16:56.748 } 00:16:56.748 ] 00:16:56.748 }' 00:16:56.748 21:53:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.748 21:53:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.007 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:57.266 [2024-07-15 21:53:12.215473] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.266 [2024-07-15 21:53:12.215491] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.266 [2024-07-15 21:53:12.215536] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.266 [2024-07-15 21:53:12.215547] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:57.266 [2024-07-15 21:53:12.215550] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c235180 name raid_bdev1, state offline 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:57.266 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:16:57.525 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.784 [2024-07-15 21:53:12.895560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.784 [2024-07-15 21:53:12.895624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.784 [2024-07-15 21:53:12.895652] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234f00 00:16:57.784 [2024-07-15 21:53:12.895659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.784 [2024-07-15 21:53:12.896410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.784 [2024-07-15 21:53:12.896434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.784 [2024-07-15 21:53:12.896471] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:57.784 [2024-07-15 21:53:12.896482] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.784 [2024-07-15 21:53:12.896511] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5da3c235180 00:16:57.784 [2024-07-15 21:53:12.896515] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:57.784 [2024-07-15 21:53:12.896534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5da3c297e20 00:16:57.784 [2024-07-15 21:53:12.896581] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5da3c235180 00:16:57.784 [2024-07-15 21:53:12.896585] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x5da3c235180 00:16:57.784 [2024-07-15 21:53:12.896607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.784 pt2 00:16:57.784 
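At this point pt1 is gone, raid_bdev1 was deleted, and pt2 has just been re-created over malloc2; the superblock found on pt2 during examine was enough for raid_bdev1 to auto-assemble in degraded form. The same query the test runs below can confirm that state by hand (a sketch reusing the exact RPC and jq filter from the trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
  # expected in the output: "state": "online", "num_base_bdevs_discovered": 1,
  # and a null name in base_bdevs_list slot 0 where pt1 used to sit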
21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.784 21:53:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.043 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.043 "name": "raid_bdev1", 00:16:58.043 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:58.043 "strip_size_kb": 0, 00:16:58.043 "state": "online", 00:16:58.043 "raid_level": "raid1", 00:16:58.043 "superblock": true, 00:16:58.043 "num_base_bdevs": 2, 00:16:58.043 "num_base_bdevs_discovered": 1, 00:16:58.043 "num_base_bdevs_operational": 1, 00:16:58.043 "base_bdevs_list": [ 00:16:58.043 { 00:16:58.043 "name": null, 00:16:58.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.043 "is_configured": false, 00:16:58.043 "data_offset": 256, 00:16:58.043 "data_size": 7936 00:16:58.043 }, 00:16:58.043 { 00:16:58.043 "name": "pt2", 00:16:58.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.043 "is_configured": true, 00:16:58.043 "data_offset": 256, 00:16:58.043 "data_size": 7936 00:16:58.043 } 00:16:58.043 ] 00:16:58.043 }' 00:16:58.043 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.043 21:53:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.301 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.558 [2024-07-15 21:53:13.679620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.558 [2024-07-15 21:53:13.679638] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.558 [2024-07-15 21:53:13.679674] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.558 [2024-07-15 21:53:13.679684] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.558 [2024-07-15 21:53:13.679688] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c235180 name raid_bdev1, state offline 00:16:58.558 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.558 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:58.816 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:58.816 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:58.816 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:58.816 21:53:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.075 [2024-07-15 21:53:14.155670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.075 [2024-07-15 21:53:14.155733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.075 [2024-07-15 21:53:14.155774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5da3c234c80 00:16:59.075 [2024-07-15 21:53:14.155781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.075 [2024-07-15 21:53:14.156622] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.075 [2024-07-15 21:53:14.156644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.075 [2024-07-15 21:53:14.156669] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.075 [2024-07-15 21:53:14.156680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.075 [2024-07-15 21:53:14.156710] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:59.075 [2024-07-15 21:53:14.156729] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.075 [2024-07-15 21:53:14.156734] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c234780 name raid_bdev1, state configuring 00:16:59.075 [2024-07-15 21:53:14.156742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.075 [2024-07-15 21:53:14.156757] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5da3c234780 00:16:59.075 [2024-07-15 21:53:14.156761] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.075 [2024-07-15 21:53:14.156779] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5da3c297e20 00:16:59.075 [2024-07-15 21:53:14.156841] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5da3c234780 00:16:59.075 [2024-07-15 21:53:14.156846] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x5da3c234780 00:16:59.075 [2024-07-15 21:53:14.156866] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.075 pt1 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.075 21:53:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.075 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.334 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.334 "name": "raid_bdev1", 00:16:59.334 "uuid": "9d923450-42f4-11ef-9f7f-e9a656123a8b", 00:16:59.334 "strip_size_kb": 0, 00:16:59.334 "state": "online", 00:16:59.334 "raid_level": "raid1", 00:16:59.334 "superblock": true, 00:16:59.334 "num_base_bdevs": 2, 00:16:59.334 "num_base_bdevs_discovered": 1, 00:16:59.334 "num_base_bdevs_operational": 1, 00:16:59.334 "base_bdevs_list": [ 00:16:59.334 { 00:16:59.334 "name": null, 00:16:59.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.334 "is_configured": false, 00:16:59.334 "data_offset": 256, 00:16:59.334 "data_size": 7936 00:16:59.334 }, 00:16:59.334 { 00:16:59.334 "name": "pt2", 00:16:59.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.334 "is_configured": true, 00:16:59.334 "data_offset": 256, 00:16:59.334 "data_size": 7936 00:16:59.334 } 00:16:59.334 ] 00:16:59.334 }' 00:16:59.334 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.334 21:53:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.592 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:59.592 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:59.851 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:59.851 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.851 21:53:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:00.109 [2024-07-15 21:53:15.119743] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 9d923450-42f4-11ef-9f7f-e9a656123a8b '!=' 9d923450-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65732 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@942 -- # '[' -z 65732 ']' 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # kill -0 65732 
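
The verify_raid_bdev_state sequence traced above reduces to one RPC round-trip plus a jq filter over the result. A minimal standalone sketch of that check follows; the rpc.py path, socket, bdev name, and jq expression are copied from the trace, while the explicit assertions are illustrative stand-ins for the harness's field comparisons:

    #!/usr/bin/env bash
    # Fetch every raid bdev from the app on the test socket and keep the
    # entry for raid_bdev1, as bdev_raid.sh@126 does in the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # Check the same fields the harness compares: state, level, counts.
    [[ $(jq -r .state <<<"$info") == online ]]
    [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 1 ]]
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 1 ]]
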
00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@947 -- # uname 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # ps -c -o command 65732 00:17:00.109 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # tail -1 00:17:00.110 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:17:00.110 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:17:00.110 killing process with pid 65732 00:17:00.110 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # echo 'killing process with pid 65732' 00:17:00.110 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@961 -- # kill 65732 00:17:00.110 [2024-07-15 21:53:15.147552] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.110 [2024-07-15 21:53:15.147572] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.110 [2024-07-15 21:53:15.147583] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.110 [2024-07-15 21:53:15.147587] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5da3c234780 name raid_bdev1, state offline 00:17:00.110 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # wait 65732 00:17:00.110 [2024-07-15 21:53:15.160629] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.368 21:53:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:17:00.368 00:17:00.368 real 0m12.029s 00:17:00.368 user 0m21.090s 00:17:00.368 sys 0m2.170s 00:17:00.368 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:00.368 21:53:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.368 ************************************ 00:17:00.368 END TEST raid_superblock_test_4k 00:17:00.368 ************************************ 00:17:00.368 21:53:15 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:17:00.368 21:53:15 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:17:00.368 21:53:15 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:17:00.368 21:53:15 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:00.368 21:53:15 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:17:00.368 21:53:15 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:00.368 21:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.368 ************************************ 00:17:00.368 START TEST raid_state_function_test_sb_md_separate 00:17:00.368 ************************************ 00:17:00.368 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 2 true 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local 
superblock=true 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66115 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66115' 00:17:00.369 Process raid pid: 66115 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66115 /var/tmp/spdk-raid.sock 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@823 -- # '[' -z 66115 ']' 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:00.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
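
Each of these state-function tests begins the same way: launch a fresh bdev_svc, record raid_pid, and block in waitforlisten until the RPC socket answers. A rough equivalent of that startup, with the binary and socket paths taken from the bdev_raid.sh@243 command line above; the polling loop is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh:

    # Start the minimal SPDK application the raid tests drive over RPC.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Illustrative wait: retry a cheap RPC until the socket is listening.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
          -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
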
00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:00.369 21:53:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 [2024-07-15 21:53:15.402101] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:00.369 [2024-07-15 21:53:15.402374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:00.937 EAL: TSC is not safe to use in SMP mode 00:17:00.937 EAL: TSC is not invariant 00:17:00.937 [2024-07-15 21:53:15.954200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.937 [2024-07-15 21:53:16.031066] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:00.937 [2024-07-15 21:53:16.033677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.937 [2024-07-15 21:53:16.034718] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.937 [2024-07-15 21:53:16.034732] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # return 0 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:01.506 [2024-07-15 21:53:16.635711] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.506 [2024-07-15 21:53:16.635779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.506 [2024-07-15 21:53:16.635784] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.506 [2024-07-15 21:53:16.635808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.506 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.766 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.766 "name": "Existed_Raid", 00:17:01.766 "uuid": "a437edf6-42f4-11ef-9f7f-e9a656123a8b", 00:17:01.766 "strip_size_kb": 0, 00:17:01.766 "state": "configuring", 00:17:01.766 "raid_level": "raid1", 00:17:01.766 "superblock": true, 00:17:01.766 "num_base_bdevs": 2, 00:17:01.766 "num_base_bdevs_discovered": 0, 00:17:01.766 "num_base_bdevs_operational": 2, 00:17:01.766 "base_bdevs_list": [ 00:17:01.766 { 00:17:01.766 "name": "BaseBdev1", 00:17:01.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.766 "is_configured": false, 00:17:01.766 "data_offset": 0, 00:17:01.766 "data_size": 0 00:17:01.766 }, 00:17:01.766 { 00:17:01.766 "name": "BaseBdev2", 00:17:01.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.766 "is_configured": false, 00:17:01.766 "data_offset": 0, 00:17:01.766 "data_size": 0 00:17:01.766 } 00:17:01.766 ] 00:17:01.766 }' 00:17:01.766 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.766 21:53:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.025 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:02.285 [2024-07-15 21:53:17.435705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.285 [2024-07-15 21:53:17.435737] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c2733a34500 name Existed_Raid, state configuring 00:17:02.285 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:02.544 [2024-07-15 21:53:17.651722] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.544 [2024-07-15 21:53:17.651787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.544 [2024-07-15 21:53:17.651812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.544 [2024-07-15 21:53:17.651819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.544 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:02.805 [2024-07-15 21:53:17.868810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.805 BaseBdev1 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@893 -- # local i 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:17:02.805 21:53:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.087 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.348 [ 00:17:03.348 { 00:17:03.348 "name": "BaseBdev1", 00:17:03.348 "aliases": [ 00:17:03.348 "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b" 00:17:03.348 ], 00:17:03.348 "product_name": "Malloc disk", 00:17:03.348 "block_size": 4096, 00:17:03.348 "num_blocks": 8192, 00:17:03.348 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:03.348 "md_size": 32, 00:17:03.348 "md_interleave": false, 00:17:03.348 "dif_type": 0, 00:17:03.348 "assigned_rate_limits": { 00:17:03.348 "rw_ios_per_sec": 0, 00:17:03.348 "rw_mbytes_per_sec": 0, 00:17:03.348 "r_mbytes_per_sec": 0, 00:17:03.348 "w_mbytes_per_sec": 0 00:17:03.348 }, 00:17:03.348 "claimed": true, 00:17:03.348 "claim_type": "exclusive_write", 00:17:03.348 "zoned": false, 00:17:03.348 "supported_io_types": { 00:17:03.348 "read": true, 00:17:03.348 "write": true, 00:17:03.348 "unmap": true, 00:17:03.348 "flush": true, 00:17:03.348 "reset": true, 00:17:03.348 "nvme_admin": false, 00:17:03.348 "nvme_io": false, 00:17:03.348 "nvme_io_md": false, 00:17:03.348 "write_zeroes": true, 00:17:03.348 "zcopy": true, 00:17:03.348 "get_zone_info": false, 00:17:03.348 "zone_management": false, 00:17:03.348 "zone_append": false, 00:17:03.348 "compare": false, 00:17:03.348 "compare_and_write": false, 00:17:03.348 "abort": true, 00:17:03.348 "seek_hole": false, 00:17:03.348 "seek_data": false, 00:17:03.348 "copy": true, 00:17:03.348 "nvme_iov_md": false 00:17:03.348 }, 00:17:03.348 "memory_domains": [ 00:17:03.348 { 00:17:03.348 "dma_device_id": "system", 00:17:03.348 "dma_device_type": 1 00:17:03.348 }, 00:17:03.348 { 00:17:03.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.348 "dma_device_type": 2 00:17:03.348 } 00:17:03.348 ], 00:17:03.348 "driver_specific": {} 00:17:03.348 } 00:17:03.348 ] 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # return 0 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.348 21:53:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.348 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.349 "name": "Existed_Raid", 00:17:03.349 "uuid": "a4d2f63a-42f4-11ef-9f7f-e9a656123a8b", 00:17:03.349 "strip_size_kb": 0, 00:17:03.349 "state": "configuring", 00:17:03.349 "raid_level": "raid1", 00:17:03.349 "superblock": true, 00:17:03.349 "num_base_bdevs": 2, 00:17:03.349 "num_base_bdevs_discovered": 1, 00:17:03.349 "num_base_bdevs_operational": 2, 00:17:03.349 "base_bdevs_list": [ 00:17:03.349 { 00:17:03.349 "name": "BaseBdev1", 00:17:03.349 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:03.349 "is_configured": true, 00:17:03.349 "data_offset": 256, 00:17:03.349 "data_size": 7936 00:17:03.349 }, 00:17:03.349 { 00:17:03.349 "name": "BaseBdev2", 00:17:03.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.349 "is_configured": false, 00:17:03.349 "data_offset": 0, 00:17:03.349 "data_size": 0 00:17:03.349 } 00:17:03.349 ] 00:17:03.349 }' 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.349 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.917 21:53:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.917 [2024-07-15 21:53:19.047776] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.917 [2024-07-15 21:53:19.047815] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c2733a34500 name Existed_Raid, state configuring 00:17:03.917 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:04.176 [2024-07-15 21:53:19.323818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.176 [2024-07-15 21:53:19.324733] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.177 [2024-07-15 21:53:19.324784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:04.177 21:53:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.177 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.436 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.436 "name": "Existed_Raid", 00:17:04.436 "uuid": "a5d21a54-42f4-11ef-9f7f-e9a656123a8b", 00:17:04.436 "strip_size_kb": 0, 00:17:04.436 "state": "configuring", 00:17:04.436 "raid_level": "raid1", 00:17:04.436 "superblock": true, 00:17:04.436 "num_base_bdevs": 2, 00:17:04.436 "num_base_bdevs_discovered": 1, 00:17:04.436 "num_base_bdevs_operational": 2, 00:17:04.436 "base_bdevs_list": [ 00:17:04.436 { 00:17:04.436 "name": "BaseBdev1", 00:17:04.436 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:04.436 "is_configured": true, 00:17:04.436 "data_offset": 256, 00:17:04.436 "data_size": 7936 00:17:04.436 }, 00:17:04.436 { 00:17:04.436 "name": "BaseBdev2", 00:17:04.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.436 "is_configured": false, 00:17:04.436 "data_offset": 0, 00:17:04.436 "data_size": 0 00:17:04.436 } 00:17:04.436 ] 00:17:04.436 }' 00:17:04.436 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.436 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.004 21:53:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:05.004 [2024-07-15 21:53:20.119937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.004 [2024-07-15 21:53:20.120000] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c2733a34a00 00:17:05.004 [2024-07-15 21:53:20.120005] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.004 [2024-07-15 21:53:20.120024] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x3c2733a97e20 00:17:05.004 [2024-07-15 21:53:20.120068] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c2733a34a00 00:17:05.004 [2024-07-15 21:53:20.120072] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3c2733a34a00 00:17:05.004 [2024-07-15 21:53:20.120085] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.004 BaseBdev2 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@893 -- # local i 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:17:05.005 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.263 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.520 [ 00:17:05.521 { 00:17:05.521 "name": "BaseBdev2", 00:17:05.521 "aliases": [ 00:17:05.521 "a64b9208-42f4-11ef-9f7f-e9a656123a8b" 00:17:05.521 ], 00:17:05.521 "product_name": "Malloc disk", 00:17:05.521 "block_size": 4096, 00:17:05.521 "num_blocks": 8192, 00:17:05.521 "uuid": "a64b9208-42f4-11ef-9f7f-e9a656123a8b", 00:17:05.521 "md_size": 32, 00:17:05.521 "md_interleave": false, 00:17:05.521 "dif_type": 0, 00:17:05.521 "assigned_rate_limits": { 00:17:05.521 "rw_ios_per_sec": 0, 00:17:05.521 "rw_mbytes_per_sec": 0, 00:17:05.521 "r_mbytes_per_sec": 0, 00:17:05.521 "w_mbytes_per_sec": 0 00:17:05.521 }, 00:17:05.521 "claimed": true, 00:17:05.521 "claim_type": "exclusive_write", 00:17:05.521 "zoned": false, 00:17:05.521 "supported_io_types": { 00:17:05.521 "read": true, 00:17:05.521 "write": true, 00:17:05.521 "unmap": true, 00:17:05.521 "flush": true, 00:17:05.521 "reset": true, 00:17:05.521 "nvme_admin": false, 00:17:05.521 "nvme_io": false, 00:17:05.521 "nvme_io_md": false, 00:17:05.521 "write_zeroes": true, 00:17:05.521 "zcopy": true, 00:17:05.521 "get_zone_info": false, 00:17:05.521 "zone_management": false, 00:17:05.521 "zone_append": false, 00:17:05.521 "compare": false, 00:17:05.521 "compare_and_write": false, 00:17:05.521 "abort": true, 00:17:05.521 "seek_hole": false, 00:17:05.521 "seek_data": false, 00:17:05.521 "copy": true, 00:17:05.521 "nvme_iov_md": false 00:17:05.521 }, 00:17:05.521 "memory_domains": [ 00:17:05.521 { 00:17:05.521 "dma_device_id": "system", 00:17:05.521 "dma_device_type": 1 00:17:05.521 }, 00:17:05.521 { 00:17:05.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.521 "dma_device_type": 2 00:17:05.521 } 00:17:05.521 ], 00:17:05.521 "driver_specific": {} 00:17:05.521 } 00:17:05.521 ] 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # return 0 00:17:05.521 21:53:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.521 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.779 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.779 "name": "Existed_Raid", 00:17:05.779 "uuid": "a5d21a54-42f4-11ef-9f7f-e9a656123a8b", 00:17:05.779 "strip_size_kb": 0, 00:17:05.779 "state": "online", 00:17:05.779 "raid_level": "raid1", 00:17:05.779 "superblock": true, 00:17:05.779 "num_base_bdevs": 2, 00:17:05.779 "num_base_bdevs_discovered": 2, 00:17:05.779 "num_base_bdevs_operational": 2, 00:17:05.779 "base_bdevs_list": [ 00:17:05.779 { 00:17:05.779 "name": "BaseBdev1", 00:17:05.779 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:05.779 "is_configured": true, 00:17:05.779 "data_offset": 256, 00:17:05.779 "data_size": 7936 00:17:05.779 }, 00:17:05.779 { 00:17:05.779 "name": "BaseBdev2", 00:17:05.779 "uuid": "a64b9208-42f4-11ef-9f7f-e9a656123a8b", 00:17:05.779 "is_configured": true, 00:17:05.779 "data_offset": 256, 00:17:05.779 "data_size": 7936 00:17:05.779 } 00:17:05.779 ] 00:17:05.779 }' 00:17:05.779 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.779 21:53:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:06.038 21:53:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:06.038 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:06.297 [2024-07-15 21:53:21.371927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:06.297 "name": "Existed_Raid", 00:17:06.297 "aliases": [ 00:17:06.297 "a5d21a54-42f4-11ef-9f7f-e9a656123a8b" 00:17:06.297 ], 00:17:06.297 "product_name": "Raid Volume", 00:17:06.297 "block_size": 4096, 00:17:06.297 "num_blocks": 7936, 00:17:06.297 "uuid": "a5d21a54-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.297 "md_size": 32, 00:17:06.297 "md_interleave": false, 00:17:06.297 "dif_type": 0, 00:17:06.297 "assigned_rate_limits": { 00:17:06.297 "rw_ios_per_sec": 0, 00:17:06.297 "rw_mbytes_per_sec": 0, 00:17:06.297 "r_mbytes_per_sec": 0, 00:17:06.297 "w_mbytes_per_sec": 0 00:17:06.297 }, 00:17:06.297 "claimed": false, 00:17:06.297 "zoned": false, 00:17:06.297 "supported_io_types": { 00:17:06.297 "read": true, 00:17:06.297 "write": true, 00:17:06.297 "unmap": false, 00:17:06.297 "flush": false, 00:17:06.297 "reset": true, 00:17:06.297 "nvme_admin": false, 00:17:06.297 "nvme_io": false, 00:17:06.297 "nvme_io_md": false, 00:17:06.297 "write_zeroes": true, 00:17:06.297 "zcopy": false, 00:17:06.297 "get_zone_info": false, 00:17:06.297 "zone_management": false, 00:17:06.297 "zone_append": false, 00:17:06.297 "compare": false, 00:17:06.297 "compare_and_write": false, 00:17:06.297 "abort": false, 00:17:06.297 "seek_hole": false, 00:17:06.297 "seek_data": false, 00:17:06.297 "copy": false, 00:17:06.297 "nvme_iov_md": false 00:17:06.297 }, 00:17:06.297 "memory_domains": [ 00:17:06.297 { 00:17:06.297 "dma_device_id": "system", 00:17:06.297 "dma_device_type": 1 00:17:06.297 }, 00:17:06.297 { 00:17:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.297 "dma_device_type": 2 00:17:06.297 }, 00:17:06.297 { 00:17:06.297 "dma_device_id": "system", 00:17:06.297 "dma_device_type": 1 00:17:06.297 }, 00:17:06.297 { 00:17:06.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.297 "dma_device_type": 2 00:17:06.297 } 00:17:06.297 ], 00:17:06.297 "driver_specific": { 00:17:06.297 "raid": { 00:17:06.297 "uuid": "a5d21a54-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.297 "strip_size_kb": 0, 00:17:06.297 "state": "online", 00:17:06.297 "raid_level": "raid1", 00:17:06.297 "superblock": true, 00:17:06.297 "num_base_bdevs": 2, 00:17:06.297 "num_base_bdevs_discovered": 2, 00:17:06.297 "num_base_bdevs_operational": 2, 00:17:06.297 "base_bdevs_list": [ 00:17:06.297 { 00:17:06.297 "name": "BaseBdev1", 00:17:06.297 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.297 "is_configured": true, 00:17:06.297 "data_offset": 256, 00:17:06.297 "data_size": 7936 00:17:06.297 }, 00:17:06.297 { 00:17:06.297 "name": "BaseBdev2", 00:17:06.297 "uuid": "a64b9208-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.297 "is_configured": true, 00:17:06.297 "data_offset": 
256, 00:17:06.297 "data_size": 7936 00:17:06.297 } 00:17:06.297 ] 00:17:06.297 } 00:17:06.297 } 00:17:06.297 }' 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:06.297 BaseBdev2' 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.297 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.557 "name": "BaseBdev1", 00:17:06.557 "aliases": [ 00:17:06.557 "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b" 00:17:06.557 ], 00:17:06.557 "product_name": "Malloc disk", 00:17:06.557 "block_size": 4096, 00:17:06.557 "num_blocks": 8192, 00:17:06.557 "uuid": "a4f3ed27-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.557 "md_size": 32, 00:17:06.557 "md_interleave": false, 00:17:06.557 "dif_type": 0, 00:17:06.557 "assigned_rate_limits": { 00:17:06.557 "rw_ios_per_sec": 0, 00:17:06.557 "rw_mbytes_per_sec": 0, 00:17:06.557 "r_mbytes_per_sec": 0, 00:17:06.557 "w_mbytes_per_sec": 0 00:17:06.557 }, 00:17:06.557 "claimed": true, 00:17:06.557 "claim_type": "exclusive_write", 00:17:06.557 "zoned": false, 00:17:06.557 "supported_io_types": { 00:17:06.557 "read": true, 00:17:06.557 "write": true, 00:17:06.557 "unmap": true, 00:17:06.557 "flush": true, 00:17:06.557 "reset": true, 00:17:06.557 "nvme_admin": false, 00:17:06.557 "nvme_io": false, 00:17:06.557 "nvme_io_md": false, 00:17:06.557 "write_zeroes": true, 00:17:06.557 "zcopy": true, 00:17:06.557 "get_zone_info": false, 00:17:06.557 "zone_management": false, 00:17:06.557 "zone_append": false, 00:17:06.557 "compare": false, 00:17:06.557 "compare_and_write": false, 00:17:06.557 "abort": true, 00:17:06.557 "seek_hole": false, 00:17:06.557 "seek_data": false, 00:17:06.557 "copy": true, 00:17:06.557 "nvme_iov_md": false 00:17:06.557 }, 00:17:06.557 "memory_domains": [ 00:17:06.557 { 00:17:06.557 "dma_device_id": "system", 00:17:06.557 "dma_device_type": 1 00:17:06.557 }, 00:17:06.557 { 00:17:06.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.557 "dma_device_type": 2 00:17:06.557 } 00:17:06.557 ], 00:17:06.557 "driver_specific": {} 00:17:06.557 }' 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
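
In the verify_raid_bdev_properties checks traced above, each jq expression appears twice because the two invocations are the two sides of one comparison: the raid volume's value on the left, the base bdev's on the right, for block_size (@205), md_size (@206), md_interleave (@207), and dif_type (@208). A condensed sketch of the same comparison; bdev names and fields come from the dumps above, the loop itself is illustrative:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
    for name in BaseBdev1 BaseBdev2; do
        base=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        # Geometry of every configured base bdev must match the volume.
        for field in .block_size .md_size .md_interleave .dif_type; do
            [[ $(jq "$field" <<<"$raid") == "$(jq "$field" <<<"$base")" ]]
        done
    done
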
00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:06.557 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.816 "name": "BaseBdev2", 00:17:06.816 "aliases": [ 00:17:06.816 "a64b9208-42f4-11ef-9f7f-e9a656123a8b" 00:17:06.816 ], 00:17:06.816 "product_name": "Malloc disk", 00:17:06.816 "block_size": 4096, 00:17:06.816 "num_blocks": 8192, 00:17:06.816 "uuid": "a64b9208-42f4-11ef-9f7f-e9a656123a8b", 00:17:06.816 "md_size": 32, 00:17:06.816 "md_interleave": false, 00:17:06.816 "dif_type": 0, 00:17:06.816 "assigned_rate_limits": { 00:17:06.816 "rw_ios_per_sec": 0, 00:17:06.816 "rw_mbytes_per_sec": 0, 00:17:06.816 "r_mbytes_per_sec": 0, 00:17:06.816 "w_mbytes_per_sec": 0 00:17:06.816 }, 00:17:06.816 "claimed": true, 00:17:06.816 "claim_type": "exclusive_write", 00:17:06.816 "zoned": false, 00:17:06.816 "supported_io_types": { 00:17:06.816 "read": true, 00:17:06.816 "write": true, 00:17:06.816 "unmap": true, 00:17:06.816 "flush": true, 00:17:06.816 "reset": true, 00:17:06.816 "nvme_admin": false, 00:17:06.816 "nvme_io": false, 00:17:06.816 "nvme_io_md": false, 00:17:06.816 "write_zeroes": true, 00:17:06.816 "zcopy": true, 00:17:06.816 "get_zone_info": false, 00:17:06.816 "zone_management": false, 00:17:06.816 "zone_append": false, 00:17:06.816 "compare": false, 00:17:06.816 "compare_and_write": false, 00:17:06.816 "abort": true, 00:17:06.816 "seek_hole": false, 00:17:06.816 "seek_data": false, 00:17:06.816 "copy": true, 00:17:06.816 "nvme_iov_md": false 00:17:06.816 }, 00:17:06.816 "memory_domains": [ 00:17:06.816 { 00:17:06.816 "dma_device_id": "system", 00:17:06.816 "dma_device_type": 1 00:17:06.816 }, 00:17:06.816 { 00:17:06.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.816 "dma_device_type": 2 00:17:06.816 } 00:17:06.816 ], 00:17:06.816 "driver_specific": {} 00:17:06.816 }' 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.816 21:53:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.075 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:07.075 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.075 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.075 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:07.075 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:07.335 [2024-07-15 21:53:22.307976] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.335 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.594 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.594 "name": "Existed_Raid", 00:17:07.594 "uuid": "a5d21a54-42f4-11ef-9f7f-e9a656123a8b", 00:17:07.594 "strip_size_kb": 0, 00:17:07.594 "state": "online", 00:17:07.594 
"raid_level": "raid1", 00:17:07.594 "superblock": true, 00:17:07.594 "num_base_bdevs": 2, 00:17:07.594 "num_base_bdevs_discovered": 1, 00:17:07.594 "num_base_bdevs_operational": 1, 00:17:07.594 "base_bdevs_list": [ 00:17:07.594 { 00:17:07.594 "name": null, 00:17:07.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.594 "is_configured": false, 00:17:07.594 "data_offset": 256, 00:17:07.594 "data_size": 7936 00:17:07.594 }, 00:17:07.594 { 00:17:07.594 "name": "BaseBdev2", 00:17:07.594 "uuid": "a64b9208-42f4-11ef-9f7f-e9a656123a8b", 00:17:07.594 "is_configured": true, 00:17:07.594 "data_offset": 256, 00:17:07.594 "data_size": 7936 00:17:07.594 } 00:17:07.594 ] 00:17:07.594 }' 00:17:07.594 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.594 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.853 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:07.853 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:07.853 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.853 21:53:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:08.112 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:08.112 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:08.112 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:08.369 [2024-07-15 21:53:23.314295] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:08.369 [2024-07-15 21:53:23.314352] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.369 [2024-07-15 21:53:23.320859] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.369 [2024-07-15 21:53:23.320877] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.369 [2024-07-15 21:53:23.320904] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c2733a34a00 name Existed_Raid, state offline 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:08.369 
21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66115 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@942 -- # '[' -z 66115 ']' 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # kill -0 66115 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@947 -- # uname 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # ps -c -o command 66115 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # tail -1 00:17:08.369 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:17:08.369 killing process with pid 66115 00:17:08.370 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:17:08.370 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # echo 'killing process with pid 66115' 00:17:08.370 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@961 -- # kill 66115 00:17:08.370 [2024-07-15 21:53:23.546920] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.370 [2024-07-15 21:53:23.546972] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.370 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # wait 66115 00:17:08.628 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:17:08.628 ************************************ 00:17:08.628 END TEST raid_state_function_test_sb_md_separate 00:17:08.628 ************************************ 00:17:08.628 00:17:08.628 real 0m8.325s 00:17:08.628 user 0m14.442s 00:17:08.628 sys 0m1.468s 00:17:08.628 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:08.628 21:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.628 21:53:23 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:17:08.628 21:53:23 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:08.628 21:53:23 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:17:08.628 21:53:23 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:08.628 21:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.628 ************************************ 00:17:08.628 START TEST raid_superblock_test_md_separate 00:17:08.628 ************************************ 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 2 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66385 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66385 /var/tmp/spdk-raid.sock 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@823 -- # '[' -z 66385 ']' 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:08.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:08.628 21:53:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.628 [2024-07-15 21:53:23.773715] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:08.628 [2024-07-15 21:53:23.774011] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:09.195 EAL: TSC is not safe to use in SMP mode 00:17:09.195 EAL: TSC is not invariant 00:17:09.195 [2024-07-15 21:53:24.307464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.195 [2024-07-15 21:53:24.380778] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
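
The base devices in these md_separate tests all share the geometry visible in the Malloc disk dumps above: 4096-byte blocks, 8192 blocks (32 MB total), and 32 bytes of separate, non-interleaved metadata per block. A sketch of creating one such device and wrapping it with a fixed-UUID passthru, as the pt1/pt2 setup below does; both commands mirror the traced RPC calls, the comments are illustrative:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MB malloc disk with 4096-byte blocks => 8192 blocks, plus 32 bytes
    # of separate metadata per block (-m 32; md_interleave stays false).
    $RPC bdev_malloc_create 32 4096 -m 32 -b malloc1
    # Fixed-UUID passthru on top, so the raid superblock records a stable
    # base bdev identity across restarts.
    $RPC bdev_passthru_create -b malloc1 -p pt1 \
         -u 00000000-0000-0000-0000-000000000001
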
00:17:09.454 [2024-07-15 21:53:24.383297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:09.454 [2024-07-15 21:53:24.384339] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:09.454 [2024-07-15 21:53:24.384363] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # return 0
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:09.714 21:53:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1
00:17:09.973 malloc1
00:17:09.973 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:10.232 [2024-07-15 21:53:25.213772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:10.232 [2024-07-15 21:53:25.213833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:10.232 [2024-07-15 21:53:25.213862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34780
00:17:10.232 [2024-07-15 21:53:25.213869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:10.232 [2024-07-15 21:53:25.214992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:10.232 [2024-07-15 21:53:25.215034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:10.232 pt1
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:10.232 21:53:25
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:10.232 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:10.513 malloc2 00:17:10.513 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.513 [2024-07-15 21:53:25.637787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.513 [2024-07-15 21:53:25.637846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.513 [2024-07-15 21:53:25.637873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34c80 00:17:10.513 [2024-07-15 21:53:25.637880] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.513 [2024-07-15 21:53:25.638478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.513 [2024-07-15 21:53:25.638520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.513 pt2 00:17:10.513 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:10.513 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:10.513 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:10.773 [2024-07-15 21:53:25.853798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.773 [2024-07-15 21:53:25.854430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.773 [2024-07-15 21:53:25.854519] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ea3f5c34f00 00:17:10.773 [2024-07-15 21:53:25.854524] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:10.773 [2024-07-15 21:53:25.854555] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2ea3f5c97e20 00:17:10.773 [2024-07-15 21:53:25.854593] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ea3f5c34f00 00:17:10.773 [2024-07-15 21:53:25.854596] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2ea3f5c34f00 00:17:10.773 [2024-07-15 21:53:25.854656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.773 
21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.773 21:53:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.032 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.032 "name": "raid_bdev1", 00:17:11.032 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:11.032 "strip_size_kb": 0, 00:17:11.032 "state": "online", 00:17:11.032 "raid_level": "raid1", 00:17:11.032 "superblock": true, 00:17:11.032 "num_base_bdevs": 2, 00:17:11.032 "num_base_bdevs_discovered": 2, 00:17:11.032 "num_base_bdevs_operational": 2, 00:17:11.032 "base_bdevs_list": [ 00:17:11.032 { 00:17:11.032 "name": "pt1", 00:17:11.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.032 "is_configured": true, 00:17:11.032 "data_offset": 256, 00:17:11.032 "data_size": 7936 00:17:11.032 }, 00:17:11.032 { 00:17:11.032 "name": "pt2", 00:17:11.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.032 "is_configured": true, 00:17:11.032 "data_offset": 256, 00:17:11.032 "data_size": 7936 00:17:11.032 } 00:17:11.032 ] 00:17:11.032 }' 00:17:11.032 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.032 21:53:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.291 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:11.550 [2024-07-15 21:53:26.593893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:11.550 "name": "raid_bdev1", 00:17:11.550 "aliases": [ 00:17:11.550 "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b" 00:17:11.550 ], 00:17:11.550 "product_name": "Raid Volume", 00:17:11.550 "block_size": 
4096, 00:17:11.550 "num_blocks": 7936, 00:17:11.550 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:11.550 "md_size": 32, 00:17:11.550 "md_interleave": false, 00:17:11.550 "dif_type": 0, 00:17:11.550 "assigned_rate_limits": { 00:17:11.550 "rw_ios_per_sec": 0, 00:17:11.550 "rw_mbytes_per_sec": 0, 00:17:11.550 "r_mbytes_per_sec": 0, 00:17:11.550 "w_mbytes_per_sec": 0 00:17:11.550 }, 00:17:11.550 "claimed": false, 00:17:11.550 "zoned": false, 00:17:11.550 "supported_io_types": { 00:17:11.550 "read": true, 00:17:11.550 "write": true, 00:17:11.550 "unmap": false, 00:17:11.550 "flush": false, 00:17:11.550 "reset": true, 00:17:11.550 "nvme_admin": false, 00:17:11.550 "nvme_io": false, 00:17:11.550 "nvme_io_md": false, 00:17:11.550 "write_zeroes": true, 00:17:11.550 "zcopy": false, 00:17:11.550 "get_zone_info": false, 00:17:11.550 "zone_management": false, 00:17:11.550 "zone_append": false, 00:17:11.550 "compare": false, 00:17:11.550 "compare_and_write": false, 00:17:11.550 "abort": false, 00:17:11.550 "seek_hole": false, 00:17:11.550 "seek_data": false, 00:17:11.550 "copy": false, 00:17:11.550 "nvme_iov_md": false 00:17:11.550 }, 00:17:11.550 "memory_domains": [ 00:17:11.550 { 00:17:11.550 "dma_device_id": "system", 00:17:11.550 "dma_device_type": 1 00:17:11.550 }, 00:17:11.550 { 00:17:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.550 "dma_device_type": 2 00:17:11.550 }, 00:17:11.550 { 00:17:11.550 "dma_device_id": "system", 00:17:11.550 "dma_device_type": 1 00:17:11.550 }, 00:17:11.550 { 00:17:11.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.550 "dma_device_type": 2 00:17:11.550 } 00:17:11.550 ], 00:17:11.550 "driver_specific": { 00:17:11.550 "raid": { 00:17:11.550 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:11.550 "strip_size_kb": 0, 00:17:11.550 "state": "online", 00:17:11.550 "raid_level": "raid1", 00:17:11.550 "superblock": true, 00:17:11.550 "num_base_bdevs": 2, 00:17:11.550 "num_base_bdevs_discovered": 2, 00:17:11.550 "num_base_bdevs_operational": 2, 00:17:11.550 "base_bdevs_list": [ 00:17:11.550 { 00:17:11.550 "name": "pt1", 00:17:11.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.550 "is_configured": true, 00:17:11.550 "data_offset": 256, 00:17:11.550 "data_size": 7936 00:17:11.550 }, 00:17:11.550 { 00:17:11.550 "name": "pt2", 00:17:11.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.550 "is_configured": true, 00:17:11.550 "data_offset": 256, 00:17:11.550 "data_size": 7936 00:17:11.550 } 00:17:11.550 ] 00:17:11.550 } 00:17:11.550 } 00:17:11.550 }' 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:11.550 pt2' 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:11.550 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:11.809 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:11.809 "name": "pt1", 00:17:11.809 "aliases": [ 00:17:11.809 "00000000-0000-0000-0000-000000000001" 00:17:11.809 ], 00:17:11.809 "product_name": 
"passthru", 00:17:11.809 "block_size": 4096, 00:17:11.809 "num_blocks": 8192, 00:17:11.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:11.809 "md_size": 32, 00:17:11.809 "md_interleave": false, 00:17:11.809 "dif_type": 0, 00:17:11.809 "assigned_rate_limits": { 00:17:11.809 "rw_ios_per_sec": 0, 00:17:11.809 "rw_mbytes_per_sec": 0, 00:17:11.809 "r_mbytes_per_sec": 0, 00:17:11.809 "w_mbytes_per_sec": 0 00:17:11.809 }, 00:17:11.809 "claimed": true, 00:17:11.809 "claim_type": "exclusive_write", 00:17:11.809 "zoned": false, 00:17:11.809 "supported_io_types": { 00:17:11.809 "read": true, 00:17:11.809 "write": true, 00:17:11.809 "unmap": true, 00:17:11.809 "flush": true, 00:17:11.809 "reset": true, 00:17:11.809 "nvme_admin": false, 00:17:11.809 "nvme_io": false, 00:17:11.809 "nvme_io_md": false, 00:17:11.809 "write_zeroes": true, 00:17:11.809 "zcopy": true, 00:17:11.809 "get_zone_info": false, 00:17:11.809 "zone_management": false, 00:17:11.809 "zone_append": false, 00:17:11.809 "compare": false, 00:17:11.809 "compare_and_write": false, 00:17:11.809 "abort": true, 00:17:11.809 "seek_hole": false, 00:17:11.809 "seek_data": false, 00:17:11.809 "copy": true, 00:17:11.809 "nvme_iov_md": false 00:17:11.809 }, 00:17:11.809 "memory_domains": [ 00:17:11.809 { 00:17:11.809 "dma_device_id": "system", 00:17:11.809 "dma_device_type": 1 00:17:11.809 }, 00:17:11.809 { 00:17:11.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.809 "dma_device_type": 2 00:17:11.809 } 00:17:11.809 ], 00:17:11.809 "driver_specific": { 00:17:11.809 "passthru": { 00:17:11.809 "name": "pt1", 00:17:11.809 "base_bdev_name": "malloc1" 00:17:11.809 } 00:17:11.809 } 00:17:11.809 }' 00:17:11.809 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:11.809 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:11.809 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:11.810 21:53:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.069 "name": 
"pt2", 00:17:12.069 "aliases": [ 00:17:12.069 "00000000-0000-0000-0000-000000000002" 00:17:12.069 ], 00:17:12.069 "product_name": "passthru", 00:17:12.069 "block_size": 4096, 00:17:12.069 "num_blocks": 8192, 00:17:12.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.069 "md_size": 32, 00:17:12.069 "md_interleave": false, 00:17:12.069 "dif_type": 0, 00:17:12.069 "assigned_rate_limits": { 00:17:12.069 "rw_ios_per_sec": 0, 00:17:12.069 "rw_mbytes_per_sec": 0, 00:17:12.069 "r_mbytes_per_sec": 0, 00:17:12.069 "w_mbytes_per_sec": 0 00:17:12.069 }, 00:17:12.069 "claimed": true, 00:17:12.069 "claim_type": "exclusive_write", 00:17:12.069 "zoned": false, 00:17:12.069 "supported_io_types": { 00:17:12.069 "read": true, 00:17:12.069 "write": true, 00:17:12.069 "unmap": true, 00:17:12.069 "flush": true, 00:17:12.069 "reset": true, 00:17:12.069 "nvme_admin": false, 00:17:12.069 "nvme_io": false, 00:17:12.069 "nvme_io_md": false, 00:17:12.069 "write_zeroes": true, 00:17:12.069 "zcopy": true, 00:17:12.069 "get_zone_info": false, 00:17:12.069 "zone_management": false, 00:17:12.069 "zone_append": false, 00:17:12.069 "compare": false, 00:17:12.069 "compare_and_write": false, 00:17:12.069 "abort": true, 00:17:12.069 "seek_hole": false, 00:17:12.069 "seek_data": false, 00:17:12.069 "copy": true, 00:17:12.069 "nvme_iov_md": false 00:17:12.069 }, 00:17:12.069 "memory_domains": [ 00:17:12.069 { 00:17:12.069 "dma_device_id": "system", 00:17:12.069 "dma_device_type": 1 00:17:12.069 }, 00:17:12.069 { 00:17:12.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.069 "dma_device_type": 2 00:17:12.069 } 00:17:12.069 ], 00:17:12.069 "driver_specific": { 00:17:12.069 "passthru": { 00:17:12.069 "name": "pt2", 00:17:12.069 "base_bdev_name": "malloc2" 00:17:12.069 } 00:17:12.069 } 00:17:12.069 }' 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.069 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:12.328 [2024-07-15 21:53:27.485924] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:12.328 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a9b67fbf-42f4-11ef-9f7f-e9a656123a8b 00:17:12.328 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z a9b67fbf-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:12.328 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:12.586 [2024-07-15 21:53:27.709904] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.586 [2024-07-15 21:53:27.709922] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.586 [2024-07-15 21:53:27.709959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.586 [2024-07-15 21:53:27.709972] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.586 [2024-07-15 21:53:27.709976] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c34f00 name raid_bdev1, state offline 00:17:12.586 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.586 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:12.845 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:12.845 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:12.845 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:12.845 21:53:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:13.103 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:13.103 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.361 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:13.361 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:13.619 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:13.619 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:13.619 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # local es=0 00:17:13.619 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.620 
21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:13.620 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:13.878 [2024-07-15 21:53:28.913956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:13.878 [2024-07-15 21:53:28.914660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:13.878 [2024-07-15 21:53:28.914702] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:13.878 [2024-07-15 21:53:28.914739] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:13.878 [2024-07-15 21:53:28.914750] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.878 [2024-07-15 21:53:28.914754] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c34c80 name raid_bdev1, state configuring 00:17:13.878 request: 00:17:13.878 { 00:17:13.878 "name": "raid_bdev1", 00:17:13.878 "raid_level": "raid1", 00:17:13.878 "base_bdevs": [ 00:17:13.878 "malloc1", 00:17:13.878 "malloc2" 00:17:13.878 ], 00:17:13.878 "superblock": false, 00:17:13.878 "method": "bdev_raid_create", 00:17:13.878 "req_id": 1 00:17:13.878 } 00:17:13.878 Got JSON-RPC error response 00:17:13.878 response: 00:17:13.878 { 00:17:13.878 "code": -17, 00:17:13.878 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:13.878 } 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@645 -- # es=1 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.878 21:53:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:14.135 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:14.135 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:17:14.135 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:14.135 [2024-07-15 21:53:29.321973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.135 [2024-07-15 21:53:29.322050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.135 [2024-07-15 21:53:29.322077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34780 00:17:14.135 [2024-07-15 21:53:29.322084] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.394 [2024-07-15 21:53:29.322797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.394 [2024-07-15 21:53:29.322821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.394 [2024-07-15 21:53:29.322854] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.394 [2024-07-15 21:53:29.322866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.394 pt1 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.394 "name": "raid_bdev1", 00:17:14.394 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:14.394 "strip_size_kb": 0, 00:17:14.394 "state": "configuring", 00:17:14.394 "raid_level": "raid1", 00:17:14.394 "superblock": true, 00:17:14.394 "num_base_bdevs": 2, 00:17:14.394 "num_base_bdevs_discovered": 1, 00:17:14.394 "num_base_bdevs_operational": 2, 00:17:14.394 "base_bdevs_list": [ 00:17:14.394 { 00:17:14.394 "name": "pt1", 00:17:14.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.394 "is_configured": true, 00:17:14.394 "data_offset": 256, 00:17:14.394 "data_size": 7936 00:17:14.394 }, 00:17:14.394 { 
00:17:14.394 "name": null, 00:17:14.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.394 "is_configured": false, 00:17:14.394 "data_offset": 256, 00:17:14.394 "data_size": 7936 00:17:14.394 } 00:17:14.394 ] 00:17:14.394 }' 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.394 21:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.961 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:14.961 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:14.961 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:14.961 21:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.961 [2024-07-15 21:53:30.102019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.961 [2024-07-15 21:53:30.102085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.961 [2024-07-15 21:53:30.102112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34f00 00:17:14.961 [2024-07-15 21:53:30.102121] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.961 [2024-07-15 21:53:30.102182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.961 [2024-07-15 21:53:30.102191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.961 [2024-07-15 21:53:30.102212] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:14.961 [2024-07-15 21:53:30.102219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.961 [2024-07-15 21:53:30.102233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ea3f5c35180 00:17:14.961 [2024-07-15 21:53:30.102237] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:14.961 [2024-07-15 21:53:30.102254] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2ea3f5c97e20 00:17:14.961 [2024-07-15 21:53:30.102283] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ea3f5c35180 00:17:14.961 [2024-07-15 21:53:30.102286] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2ea3f5c35180 00:17:14.961 [2024-07-15 21:53:30.102341] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.961 pt2 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.961 
21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.961 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.219 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.219 "name": "raid_bdev1", 00:17:15.219 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:15.219 "strip_size_kb": 0, 00:17:15.219 "state": "online", 00:17:15.219 "raid_level": "raid1", 00:17:15.219 "superblock": true, 00:17:15.219 "num_base_bdevs": 2, 00:17:15.219 "num_base_bdevs_discovered": 2, 00:17:15.219 "num_base_bdevs_operational": 2, 00:17:15.219 "base_bdevs_list": [ 00:17:15.219 { 00:17:15.219 "name": "pt1", 00:17:15.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.219 "is_configured": true, 00:17:15.219 "data_offset": 256, 00:17:15.219 "data_size": 7936 00:17:15.219 }, 00:17:15.220 { 00:17:15.220 "name": "pt2", 00:17:15.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.220 "is_configured": true, 00:17:15.220 "data_offset": 256, 00:17:15.220 "data_size": 7936 00:17:15.220 } 00:17:15.220 ] 00:17:15.220 }' 00:17:15.220 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.220 21:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.477 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:15.735 [2024-07-15 21:53:30.850075] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.735 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:15.735 "name": "raid_bdev1", 00:17:15.735 "aliases": [ 00:17:15.735 
"a9b67fbf-42f4-11ef-9f7f-e9a656123a8b" 00:17:15.735 ], 00:17:15.735 "product_name": "Raid Volume", 00:17:15.735 "block_size": 4096, 00:17:15.735 "num_blocks": 7936, 00:17:15.735 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:15.735 "md_size": 32, 00:17:15.735 "md_interleave": false, 00:17:15.735 "dif_type": 0, 00:17:15.735 "assigned_rate_limits": { 00:17:15.735 "rw_ios_per_sec": 0, 00:17:15.735 "rw_mbytes_per_sec": 0, 00:17:15.735 "r_mbytes_per_sec": 0, 00:17:15.735 "w_mbytes_per_sec": 0 00:17:15.735 }, 00:17:15.735 "claimed": false, 00:17:15.735 "zoned": false, 00:17:15.735 "supported_io_types": { 00:17:15.735 "read": true, 00:17:15.735 "write": true, 00:17:15.735 "unmap": false, 00:17:15.735 "flush": false, 00:17:15.735 "reset": true, 00:17:15.735 "nvme_admin": false, 00:17:15.735 "nvme_io": false, 00:17:15.735 "nvme_io_md": false, 00:17:15.735 "write_zeroes": true, 00:17:15.735 "zcopy": false, 00:17:15.735 "get_zone_info": false, 00:17:15.735 "zone_management": false, 00:17:15.735 "zone_append": false, 00:17:15.735 "compare": false, 00:17:15.735 "compare_and_write": false, 00:17:15.735 "abort": false, 00:17:15.735 "seek_hole": false, 00:17:15.735 "seek_data": false, 00:17:15.735 "copy": false, 00:17:15.735 "nvme_iov_md": false 00:17:15.735 }, 00:17:15.735 "memory_domains": [ 00:17:15.735 { 00:17:15.735 "dma_device_id": "system", 00:17:15.735 "dma_device_type": 1 00:17:15.735 }, 00:17:15.735 { 00:17:15.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.735 "dma_device_type": 2 00:17:15.735 }, 00:17:15.735 { 00:17:15.736 "dma_device_id": "system", 00:17:15.736 "dma_device_type": 1 00:17:15.736 }, 00:17:15.736 { 00:17:15.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.736 "dma_device_type": 2 00:17:15.736 } 00:17:15.736 ], 00:17:15.736 "driver_specific": { 00:17:15.736 "raid": { 00:17:15.736 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:15.736 "strip_size_kb": 0, 00:17:15.736 "state": "online", 00:17:15.736 "raid_level": "raid1", 00:17:15.736 "superblock": true, 00:17:15.736 "num_base_bdevs": 2, 00:17:15.736 "num_base_bdevs_discovered": 2, 00:17:15.736 "num_base_bdevs_operational": 2, 00:17:15.736 "base_bdevs_list": [ 00:17:15.736 { 00:17:15.736 "name": "pt1", 00:17:15.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.736 "is_configured": true, 00:17:15.736 "data_offset": 256, 00:17:15.736 "data_size": 7936 00:17:15.736 }, 00:17:15.736 { 00:17:15.736 "name": "pt2", 00:17:15.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.736 "is_configured": true, 00:17:15.736 "data_offset": 256, 00:17:15.736 "data_size": 7936 00:17:15.736 } 00:17:15.736 ] 00:17:15.736 } 00:17:15.736 } 00:17:15.736 }' 00:17:15.736 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.736 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:15.736 pt2' 00:17:15.736 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.736 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.736 21:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:15.993 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.993 "name": "pt1", 
00:17:15.993 "aliases": [ 00:17:15.993 "00000000-0000-0000-0000-000000000001" 00:17:15.993 ], 00:17:15.993 "product_name": "passthru", 00:17:15.993 "block_size": 4096, 00:17:15.993 "num_blocks": 8192, 00:17:15.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.993 "md_size": 32, 00:17:15.993 "md_interleave": false, 00:17:15.993 "dif_type": 0, 00:17:15.993 "assigned_rate_limits": { 00:17:15.993 "rw_ios_per_sec": 0, 00:17:15.993 "rw_mbytes_per_sec": 0, 00:17:15.993 "r_mbytes_per_sec": 0, 00:17:15.993 "w_mbytes_per_sec": 0 00:17:15.993 }, 00:17:15.993 "claimed": true, 00:17:15.993 "claim_type": "exclusive_write", 00:17:15.993 "zoned": false, 00:17:15.993 "supported_io_types": { 00:17:15.993 "read": true, 00:17:15.993 "write": true, 00:17:15.993 "unmap": true, 00:17:15.993 "flush": true, 00:17:15.993 "reset": true, 00:17:15.993 "nvme_admin": false, 00:17:15.993 "nvme_io": false, 00:17:15.993 "nvme_io_md": false, 00:17:15.993 "write_zeroes": true, 00:17:15.993 "zcopy": true, 00:17:15.993 "get_zone_info": false, 00:17:15.993 "zone_management": false, 00:17:15.993 "zone_append": false, 00:17:15.993 "compare": false, 00:17:15.993 "compare_and_write": false, 00:17:15.993 "abort": true, 00:17:15.993 "seek_hole": false, 00:17:15.993 "seek_data": false, 00:17:15.993 "copy": true, 00:17:15.993 "nvme_iov_md": false 00:17:15.993 }, 00:17:15.993 "memory_domains": [ 00:17:15.993 { 00:17:15.993 "dma_device_id": "system", 00:17:15.993 "dma_device_type": 1 00:17:15.993 }, 00:17:15.993 { 00:17:15.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.993 "dma_device_type": 2 00:17:15.993 } 00:17:15.993 ], 00:17:15.993 "driver_specific": { 00:17:15.993 "passthru": { 00:17:15.993 "name": "pt1", 00:17:15.993 "base_bdev_name": "malloc1" 00:17:15.993 } 00:17:15.993 } 00:17:15.993 }' 00:17:15.993 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.993 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.993 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:15.993 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.994 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.994 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:15.994 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:16.251 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.509 
21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.509 "name": "pt2", 00:17:16.509 "aliases": [ 00:17:16.509 "00000000-0000-0000-0000-000000000002" 00:17:16.509 ], 00:17:16.509 "product_name": "passthru", 00:17:16.509 "block_size": 4096, 00:17:16.509 "num_blocks": 8192, 00:17:16.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.509 "md_size": 32, 00:17:16.509 "md_interleave": false, 00:17:16.509 "dif_type": 0, 00:17:16.509 "assigned_rate_limits": { 00:17:16.509 "rw_ios_per_sec": 0, 00:17:16.509 "rw_mbytes_per_sec": 0, 00:17:16.509 "r_mbytes_per_sec": 0, 00:17:16.509 "w_mbytes_per_sec": 0 00:17:16.509 }, 00:17:16.509 "claimed": true, 00:17:16.509 "claim_type": "exclusive_write", 00:17:16.509 "zoned": false, 00:17:16.509 "supported_io_types": { 00:17:16.509 "read": true, 00:17:16.509 "write": true, 00:17:16.509 "unmap": true, 00:17:16.509 "flush": true, 00:17:16.509 "reset": true, 00:17:16.509 "nvme_admin": false, 00:17:16.509 "nvme_io": false, 00:17:16.509 "nvme_io_md": false, 00:17:16.509 "write_zeroes": true, 00:17:16.509 "zcopy": true, 00:17:16.509 "get_zone_info": false, 00:17:16.509 "zone_management": false, 00:17:16.509 "zone_append": false, 00:17:16.509 "compare": false, 00:17:16.509 "compare_and_write": false, 00:17:16.509 "abort": true, 00:17:16.509 "seek_hole": false, 00:17:16.509 "seek_data": false, 00:17:16.509 "copy": true, 00:17:16.509 "nvme_iov_md": false 00:17:16.509 }, 00:17:16.509 "memory_domains": [ 00:17:16.509 { 00:17:16.509 "dma_device_id": "system", 00:17:16.509 "dma_device_type": 1 00:17:16.509 }, 00:17:16.509 { 00:17:16.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.509 "dma_device_type": 2 00:17:16.509 } 00:17:16.509 ], 00:17:16.509 "driver_specific": { 00:17:16.509 "passthru": { 00:17:16.509 "name": "pt2", 00:17:16.509 "base_bdev_name": "malloc2" 00:17:16.509 } 00:17:16.509 } 00:17:16.509 }' 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:16.509 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:17:16.767 [2024-07-15 21:53:31.782121] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.767 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' a9b67fbf-42f4-11ef-9f7f-e9a656123a8b '!=' a9b67fbf-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:16.767 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:16.767 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:16.767 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:16.767 21:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:17.025 [2024-07-15 21:53:32.050100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.025 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.284 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.284 "name": "raid_bdev1", 00:17:17.284 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:17.284 "strip_size_kb": 0, 00:17:17.284 "state": "online", 00:17:17.284 "raid_level": "raid1", 00:17:17.284 "superblock": true, 00:17:17.284 "num_base_bdevs": 2, 00:17:17.284 "num_base_bdevs_discovered": 1, 00:17:17.284 "num_base_bdevs_operational": 1, 00:17:17.284 "base_bdevs_list": [ 00:17:17.284 { 00:17:17.284 "name": null, 00:17:17.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.284 "is_configured": false, 00:17:17.284 "data_offset": 256, 00:17:17.284 "data_size": 7936 00:17:17.284 }, 00:17:17.284 { 00:17:17.284 "name": "pt2", 00:17:17.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.284 "is_configured": true, 00:17:17.284 "data_offset": 256, 00:17:17.284 "data_size": 7936 00:17:17.284 } 00:17:17.284 ] 00:17:17.284 }' 00:17:17.284 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:17.284 21:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.542 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:17.801 [2024-07-15 21:53:32.882175] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.801 [2024-07-15 21:53:32.882196] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.801 [2024-07-15 21:53:32.882235] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.801 [2024-07-15 21:53:32.882247] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.801 [2024-07-15 21:53:32.882251] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c35180 name raid_bdev1, state offline 00:17:17.801 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.801 21:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:18.059 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:18.059 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:18.059 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:18.059 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:18.059 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:17:18.318 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.576 [2024-07-15 21:53:33.514191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.576 [2024-07-15 21:53:33.514259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.576 [2024-07-15 21:53:33.514286] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34f00 00:17:18.576 [2024-07-15 21:53:33.514293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.576 [2024-07-15 21:53:33.514992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.576 [2024-07-15 21:53:33.515015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.576 [2024-07-15 21:53:33.515039] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:17:18.576 [2024-07-15 21:53:33.515051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.576 [2024-07-15 21:53:33.515066] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ea3f5c35180 00:17:18.576 [2024-07-15 21:53:33.515070] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.576 [2024-07-15 21:53:33.515089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2ea3f5c97e20 00:17:18.576 [2024-07-15 21:53:33.515121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ea3f5c35180 00:17:18.576 [2024-07-15 21:53:33.515125] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2ea3f5c35180 00:17:18.576 [2024-07-15 21:53:33.515148] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.576 pt2 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.577 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.835 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.835 "name": "raid_bdev1", 00:17:18.835 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:18.835 "strip_size_kb": 0, 00:17:18.835 "state": "online", 00:17:18.835 "raid_level": "raid1", 00:17:18.835 "superblock": true, 00:17:18.835 "num_base_bdevs": 2, 00:17:18.835 "num_base_bdevs_discovered": 1, 00:17:18.835 "num_base_bdevs_operational": 1, 00:17:18.835 "base_bdevs_list": [ 00:17:18.835 { 00:17:18.835 "name": null, 00:17:18.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.835 "is_configured": false, 00:17:18.835 "data_offset": 256, 00:17:18.835 "data_size": 7936 00:17:18.835 }, 00:17:18.835 { 00:17:18.835 "name": "pt2", 00:17:18.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.835 "is_configured": true, 00:17:18.835 "data_offset": 256, 00:17:18.835 "data_size": 7936 00:17:18.835 } 00:17:18.835 ] 00:17:18.835 }' 00:17:18.835 21:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:18.835 21:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.093 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:19.351 [2024-07-15 21:53:34.290220] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.351 [2024-07-15 21:53:34.290239] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.351 [2024-07-15 21:53:34.290263] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.351 [2024-07-15 21:53:34.290274] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.351 [2024-07-15 21:53:34.290278] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c35180 name raid_bdev1, state offline 00:17:19.351 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.351 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:19.610 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:19.610 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:19.610 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:19.610 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.610 [2024-07-15 21:53:34.786257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.610 [2024-07-15 21:53:34.786317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.610 [2024-07-15 21:53:34.786513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2ea3f5c34c80 00:17:19.610 [2024-07-15 21:53:34.786522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.610 [2024-07-15 21:53:34.787223] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.610 [2024-07-15 21:53:34.787251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.610 [2024-07-15 21:53:34.787274] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.610 [2024-07-15 21:53:34.787285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.610 [2024-07-15 21:53:34.787304] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.610 [2024-07-15 21:53:34.787308] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.610 [2024-07-15 21:53:34.787317] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c34780 name raid_bdev1, state configuring 00:17:19.610 [2024-07-15 21:53:34.787325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.610 [2024-07-15 21:53:34.787339] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ea3f5c34780 00:17:19.610 [2024-07-15 
21:53:34.787342] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.610 [2024-07-15 21:53:34.787368] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2ea3f5c97e20 00:17:19.610 [2024-07-15 21:53:34.787390] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ea3f5c34780 00:17:19.610 [2024-07-15 21:53:34.787394] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2ea3f5c34780 00:17:19.610 [2024-07-15 21:53:34.787407] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.610 pt1 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.868 21:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.868 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.868 "name": "raid_bdev1", 00:17:19.868 "uuid": "a9b67fbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:19.868 "strip_size_kb": 0, 00:17:19.868 "state": "online", 00:17:19.868 "raid_level": "raid1", 00:17:19.868 "superblock": true, 00:17:19.868 "num_base_bdevs": 2, 00:17:19.868 "num_base_bdevs_discovered": 1, 00:17:19.868 "num_base_bdevs_operational": 1, 00:17:19.868 "base_bdevs_list": [ 00:17:19.868 { 00:17:19.868 "name": null, 00:17:19.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.868 "is_configured": false, 00:17:19.868 "data_offset": 256, 00:17:19.868 "data_size": 7936 00:17:19.868 }, 00:17:19.868 { 00:17:19.868 "name": "pt2", 00:17:19.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.868 "is_configured": true, 00:17:19.868 "data_offset": 256, 00:17:19.868 "data_size": 7936 00:17:19.868 } 00:17:19.868 ] 00:17:19.868 }' 00:17:19.868 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.868 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.435 21:53:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:20.435 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:20.693 [2024-07-15 21:53:35.838340] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' a9b67fbf-42f4-11ef-9f7f-e9a656123a8b '!=' a9b67fbf-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66385 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@942 -- # '[' -z 66385 ']' 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # kill -0 66385 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@947 -- # uname 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # ps -c -o command 66385 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # tail -1 00:17:20.693 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:17:20.694 killing process with pid 66385 00:17:20.694 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:17:20.694 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # echo 'killing process with pid 66385' 00:17:20.694 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@961 -- # kill 66385 00:17:20.694 [2024-07-15 21:53:35.865564] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.694 [2024-07-15 21:53:35.865584] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.694 [2024-07-15 21:53:35.865596] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.694 [2024-07-15 21:53:35.865600] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ea3f5c34780 name raid_bdev1, state offline 00:17:20.694 21:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # wait 66385 00:17:20.694 [2024-07-15 21:53:35.877975] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.952 21:53:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:17:20.952 00:17:20.952 real 0m12.277s 00:17:20.952 user 0m21.865s 00:17:20.952 sys 0m1.947s 00:17:20.952 ************************************ 00:17:20.952 END TEST raid_superblock_test_md_separate 00:17:20.952 ************************************ 00:17:20.952 21:53:36 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:20.952 21:53:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.952 21:53:36 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:17:20.952 21:53:36 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:17:20.952 21:53:36 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:17:20.952 21:53:36 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:20.952 21:53:36 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:17:20.952 21:53:36 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:20.952 21:53:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.952 ************************************ 00:17:20.952 START TEST raid_state_function_test_sb_md_interleaved 00:17:20.952 ************************************ 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1117 -- # raid_state_function_test raid1 2 true 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:20.952 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 
-- # '[' raid1 '!=' raid1 ']' 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66772 00:17:20.953 Process raid pid: 66772 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66772' 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66772 /var/tmp/spdk-raid.sock 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@823 -- # '[' -z 66772 ']' 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:20.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:20.953 21:53:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:20.953 [2024-07-15 21:53:36.103021] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:20.953 [2024-07-15 21:53:36.103260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:21.520 EAL: TSC is not safe to use in SMP mode 00:17:21.520 EAL: TSC is not invariant 00:17:21.520 [2024-07-15 21:53:36.688204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.779 [2024-07-15 21:53:36.764251] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
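(Annotation, not part of the captured console output: a condensed sketch of the RPC sequence this state-function test drives, reconstructed from the bdev_svc launch traced just above and the rpc.py invocations that appear verbatim in the trace below. All paths and arguments are copied from the trace; the standalone framing and the $RPC shorthand are assumptions.)

    # Host the bdev layer in the stub app with raid debug logging enabled,
    # then drive it over the UNIX-domain RPC socket used throughout this log
    # (the harness waits for the socket via waitforlisten before issuing RPCs).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid before its members exist leaves it in the
    # "configuring" state ("base bdev BaseBdev1 doesn't exist now" below).
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # 32 MiB malloc disk, 4096-byte blocks, 32-byte interleaved metadata;
    # it surfaces below with block_size 4128 (4096 + 32) and 8192 blocks.
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1

    # The state checks parse this JSON, as verify_raid_bdev_state does below.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'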
00:17:21.779 [2024-07-15 21:53:36.766598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.779 [2024-07-15 21:53:36.767460] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.779 [2024-07-15 21:53:36.767474] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.039 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:22.039 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # return 0 00:17:22.039 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:22.298 [2024-07-15 21:53:37.292146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.298 [2024-07-15 21:53:37.292201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.298 [2024-07-15 21:53:37.292206] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.298 [2024-07-15 21:53:37.292231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.298 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.556 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.556 "name": "Existed_Raid", 00:17:22.556 "uuid": "b087da44-42f4-11ef-9f7f-e9a656123a8b", 00:17:22.556 "strip_size_kb": 0, 00:17:22.556 "state": "configuring", 00:17:22.556 "raid_level": "raid1", 00:17:22.556 "superblock": true, 00:17:22.556 "num_base_bdevs": 2, 00:17:22.556 "num_base_bdevs_discovered": 0, 00:17:22.556 "num_base_bdevs_operational": 2, 00:17:22.556 
"base_bdevs_list": [ 00:17:22.556 { 00:17:22.556 "name": "BaseBdev1", 00:17:22.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.556 "is_configured": false, 00:17:22.556 "data_offset": 0, 00:17:22.556 "data_size": 0 00:17:22.556 }, 00:17:22.556 { 00:17:22.556 "name": "BaseBdev2", 00:17:22.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.556 "is_configured": false, 00:17:22.556 "data_offset": 0, 00:17:22.556 "data_size": 0 00:17:22.556 } 00:17:22.556 ] 00:17:22.556 }' 00:17:22.556 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.556 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:22.815 21:53:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:23.073 [2024-07-15 21:53:38.024200] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.073 [2024-07-15 21:53:38.024221] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105128a34500 name Existed_Raid, state configuring 00:17:23.073 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:23.331 [2024-07-15 21:53:38.284216] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.331 [2024-07-15 21:53:38.284276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.331 [2024-07-15 21:53:38.284280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.331 [2024-07-15 21:53:38.284303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:23.331 [2024-07-15 21:53:38.489055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.331 BaseBdev1 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev1 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@893 -- # local i 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:17:23.331 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.589 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:17:23.848 [ 00:17:23.848 { 00:17:23.848 "name": "BaseBdev1", 00:17:23.848 "aliases": [ 00:17:23.848 "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b" 00:17:23.848 ], 00:17:23.848 "product_name": "Malloc disk", 00:17:23.848 "block_size": 4128, 00:17:23.848 "num_blocks": 8192, 00:17:23.848 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:23.848 "md_size": 32, 00:17:23.848 "md_interleave": true, 00:17:23.848 "dif_type": 0, 00:17:23.848 "assigned_rate_limits": { 00:17:23.848 "rw_ios_per_sec": 0, 00:17:23.848 "rw_mbytes_per_sec": 0, 00:17:23.848 "r_mbytes_per_sec": 0, 00:17:23.848 "w_mbytes_per_sec": 0 00:17:23.848 }, 00:17:23.848 "claimed": true, 00:17:23.848 "claim_type": "exclusive_write", 00:17:23.848 "zoned": false, 00:17:23.848 "supported_io_types": { 00:17:23.848 "read": true, 00:17:23.848 "write": true, 00:17:23.848 "unmap": true, 00:17:23.848 "flush": true, 00:17:23.848 "reset": true, 00:17:23.848 "nvme_admin": false, 00:17:23.848 "nvme_io": false, 00:17:23.848 "nvme_io_md": false, 00:17:23.848 "write_zeroes": true, 00:17:23.848 "zcopy": true, 00:17:23.848 "get_zone_info": false, 00:17:23.848 "zone_management": false, 00:17:23.848 "zone_append": false, 00:17:23.848 "compare": false, 00:17:23.848 "compare_and_write": false, 00:17:23.848 "abort": true, 00:17:23.848 "seek_hole": false, 00:17:23.848 "seek_data": false, 00:17:23.848 "copy": true, 00:17:23.848 "nvme_iov_md": false 00:17:23.848 }, 00:17:23.848 "memory_domains": [ 00:17:23.848 { 00:17:23.848 "dma_device_id": "system", 00:17:23.848 "dma_device_type": 1 00:17:23.848 }, 00:17:23.848 { 00:17:23.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.848 "dma_device_type": 2 00:17:23.848 } 00:17:23.848 ], 00:17:23.848 "driver_specific": {} 00:17:23.848 } 00:17:23.848 ] 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # return 0 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.848 21:53:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:24.107 21:53:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.107 "name": "Existed_Raid", 00:17:24.107 "uuid": "b11f3ae2-42f4-11ef-9f7f-e9a656123a8b", 00:17:24.107 "strip_size_kb": 0, 00:17:24.107 "state": "configuring", 00:17:24.107 "raid_level": "raid1", 00:17:24.107 "superblock": true, 00:17:24.107 "num_base_bdevs": 2, 00:17:24.107 "num_base_bdevs_discovered": 1, 00:17:24.107 "num_base_bdevs_operational": 2, 00:17:24.107 "base_bdevs_list": [ 00:17:24.107 { 00:17:24.107 "name": "BaseBdev1", 00:17:24.107 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:24.107 "is_configured": true, 00:17:24.107 "data_offset": 256, 00:17:24.107 "data_size": 7936 00:17:24.107 }, 00:17:24.107 { 00:17:24.107 "name": "BaseBdev2", 00:17:24.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.107 "is_configured": false, 00:17:24.107 "data_offset": 0, 00:17:24.107 "data_size": 0 00:17:24.107 } 00:17:24.107 ] 00:17:24.107 }' 00:17:24.107 21:53:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.107 21:53:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:24.365 21:53:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.625 [2024-07-15 21:53:39.780277] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.625 [2024-07-15 21:53:39.780305] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105128a34500 name Existed_Raid, state configuring 00:17:24.625 21:53:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:25.193 [2024-07-15 21:53:40.076297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.193 [2024-07-15 21:53:40.077279] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.193 [2024-07-15 21:53:40.077356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:17:25.193 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.194 "name": "Existed_Raid", 00:17:25.194 "uuid": "b230ae07-42f4-11ef-9f7f-e9a656123a8b", 00:17:25.194 "strip_size_kb": 0, 00:17:25.194 "state": "configuring", 00:17:25.194 "raid_level": "raid1", 00:17:25.194 "superblock": true, 00:17:25.194 "num_base_bdevs": 2, 00:17:25.194 "num_base_bdevs_discovered": 1, 00:17:25.194 "num_base_bdevs_operational": 2, 00:17:25.194 "base_bdevs_list": [ 00:17:25.194 { 00:17:25.194 "name": "BaseBdev1", 00:17:25.194 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:25.194 "is_configured": true, 00:17:25.194 "data_offset": 256, 00:17:25.194 "data_size": 7936 00:17:25.194 }, 00:17:25.194 { 00:17:25.194 "name": "BaseBdev2", 00:17:25.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.194 "is_configured": false, 00:17:25.194 "data_offset": 0, 00:17:25.194 "data_size": 0 00:17:25.194 } 00:17:25.194 ] 00:17:25.194 }' 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.194 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:25.465 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:25.735 [2024-07-15 21:53:40.812369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.735 [2024-07-15 21:53:40.812452] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x105128a34a00 00:17:25.735 [2024-07-15 21:53:40.812457] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:25.735 [2024-07-15 21:53:40.812475] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x105128a97e20 00:17:25.735 [2024-07-15 21:53:40.812489] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x105128a34a00 00:17:25.735 [2024-07-15 21:53:40.812492] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x105128a34a00 00:17:25.735 [2024-07-15 21:53:40.812503] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.735 BaseBdev2 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@891 -- # local bdev_name=BaseBdev2 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:17:25.735 
21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@893 -- # local i 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:17:25.735 21:53:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.994 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.252 [ 00:17:26.252 { 00:17:26.252 "name": "BaseBdev2", 00:17:26.252 "aliases": [ 00:17:26.252 "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b" 00:17:26.252 ], 00:17:26.252 "product_name": "Malloc disk", 00:17:26.252 "block_size": 4128, 00:17:26.252 "num_blocks": 8192, 00:17:26.252 "uuid": "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b", 00:17:26.253 "md_size": 32, 00:17:26.253 "md_interleave": true, 00:17:26.253 "dif_type": 0, 00:17:26.253 "assigned_rate_limits": { 00:17:26.253 "rw_ios_per_sec": 0, 00:17:26.253 "rw_mbytes_per_sec": 0, 00:17:26.253 "r_mbytes_per_sec": 0, 00:17:26.253 "w_mbytes_per_sec": 0 00:17:26.253 }, 00:17:26.253 "claimed": true, 00:17:26.253 "claim_type": "exclusive_write", 00:17:26.253 "zoned": false, 00:17:26.253 "supported_io_types": { 00:17:26.253 "read": true, 00:17:26.253 "write": true, 00:17:26.253 "unmap": true, 00:17:26.253 "flush": true, 00:17:26.253 "reset": true, 00:17:26.253 "nvme_admin": false, 00:17:26.253 "nvme_io": false, 00:17:26.253 "nvme_io_md": false, 00:17:26.253 "write_zeroes": true, 00:17:26.253 "zcopy": true, 00:17:26.253 "get_zone_info": false, 00:17:26.253 "zone_management": false, 00:17:26.253 "zone_append": false, 00:17:26.253 "compare": false, 00:17:26.253 "compare_and_write": false, 00:17:26.253 "abort": true, 00:17:26.253 "seek_hole": false, 00:17:26.253 "seek_data": false, 00:17:26.253 "copy": true, 00:17:26.253 "nvme_iov_md": false 00:17:26.253 }, 00:17:26.253 "memory_domains": [ 00:17:26.253 { 00:17:26.253 "dma_device_id": "system", 00:17:26.253 "dma_device_type": 1 00:17:26.253 }, 00:17:26.253 { 00:17:26.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.253 "dma_device_type": 2 00:17:26.253 } 00:17:26.253 ], 00:17:26.253 "driver_specific": {} 00:17:26.253 } 00:17:26.253 ] 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # return 0 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.253 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.512 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.512 "name": "Existed_Raid", 00:17:26.512 "uuid": "b230ae07-42f4-11ef-9f7f-e9a656123a8b", 00:17:26.512 "strip_size_kb": 0, 00:17:26.512 "state": "online", 00:17:26.512 "raid_level": "raid1", 00:17:26.512 "superblock": true, 00:17:26.512 "num_base_bdevs": 2, 00:17:26.512 "num_base_bdevs_discovered": 2, 00:17:26.512 "num_base_bdevs_operational": 2, 00:17:26.512 "base_bdevs_list": [ 00:17:26.512 { 00:17:26.512 "name": "BaseBdev1", 00:17:26.512 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:26.512 "is_configured": true, 00:17:26.512 "data_offset": 256, 00:17:26.512 "data_size": 7936 00:17:26.512 }, 00:17:26.512 { 00:17:26.512 "name": "BaseBdev2", 00:17:26.512 "uuid": "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b", 00:17:26.512 "is_configured": true, 00:17:26.512 "data_offset": 256, 00:17:26.512 "data_size": 7936 00:17:26.512 } 00:17:26.512 ] 00:17:26.512 }' 00:17:26.512 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.512 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.771 21:53:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:27.030 [2024-07-15 21:53:42.048391] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.030 21:53:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:27.030 "name": "Existed_Raid", 00:17:27.030 "aliases": [ 00:17:27.030 "b230ae07-42f4-11ef-9f7f-e9a656123a8b" 00:17:27.030 ], 00:17:27.030 "product_name": "Raid Volume", 00:17:27.030 "block_size": 4128, 00:17:27.030 "num_blocks": 7936, 00:17:27.030 "uuid": "b230ae07-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.030 "md_size": 32, 00:17:27.030 "md_interleave": true, 00:17:27.030 "dif_type": 0, 00:17:27.030 "assigned_rate_limits": { 00:17:27.030 "rw_ios_per_sec": 0, 00:17:27.030 "rw_mbytes_per_sec": 0, 00:17:27.030 "r_mbytes_per_sec": 0, 00:17:27.031 "w_mbytes_per_sec": 0 00:17:27.031 }, 00:17:27.031 "claimed": false, 00:17:27.031 "zoned": false, 00:17:27.031 "supported_io_types": { 00:17:27.031 "read": true, 00:17:27.031 "write": true, 00:17:27.031 "unmap": false, 00:17:27.031 "flush": false, 00:17:27.031 "reset": true, 00:17:27.031 "nvme_admin": false, 00:17:27.031 "nvme_io": false, 00:17:27.031 "nvme_io_md": false, 00:17:27.031 "write_zeroes": true, 00:17:27.031 "zcopy": false, 00:17:27.031 "get_zone_info": false, 00:17:27.031 "zone_management": false, 00:17:27.031 "zone_append": false, 00:17:27.031 "compare": false, 00:17:27.031 "compare_and_write": false, 00:17:27.031 "abort": false, 00:17:27.031 "seek_hole": false, 00:17:27.031 "seek_data": false, 00:17:27.031 "copy": false, 00:17:27.031 "nvme_iov_md": false 00:17:27.031 }, 00:17:27.031 "memory_domains": [ 00:17:27.031 { 00:17:27.031 "dma_device_id": "system", 00:17:27.031 "dma_device_type": 1 00:17:27.031 }, 00:17:27.031 { 00:17:27.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.031 "dma_device_type": 2 00:17:27.031 }, 00:17:27.031 { 00:17:27.031 "dma_device_id": "system", 00:17:27.031 "dma_device_type": 1 00:17:27.031 }, 00:17:27.031 { 00:17:27.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.031 "dma_device_type": 2 00:17:27.031 } 00:17:27.031 ], 00:17:27.031 "driver_specific": { 00:17:27.031 "raid": { 00:17:27.031 "uuid": "b230ae07-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.031 "strip_size_kb": 0, 00:17:27.031 "state": "online", 00:17:27.031 "raid_level": "raid1", 00:17:27.031 "superblock": true, 00:17:27.031 "num_base_bdevs": 2, 00:17:27.031 "num_base_bdevs_discovered": 2, 00:17:27.031 "num_base_bdevs_operational": 2, 00:17:27.031 "base_bdevs_list": [ 00:17:27.031 { 00:17:27.031 "name": "BaseBdev1", 00:17:27.031 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.031 "is_configured": true, 00:17:27.031 "data_offset": 256, 00:17:27.031 "data_size": 7936 00:17:27.031 }, 00:17:27.031 { 00:17:27.031 "name": "BaseBdev2", 00:17:27.031 "uuid": "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.031 "is_configured": true, 00:17:27.031 "data_offset": 256, 00:17:27.031 "data_size": 7936 00:17:27.031 } 00:17:27.031 ] 00:17:27.031 } 00:17:27.031 } 00:17:27.031 }' 00:17:27.031 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.031 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:27.031 BaseBdev2' 00:17:27.031 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.031 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
00:17:27.031 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.291 "name": "BaseBdev1", 00:17:27.291 "aliases": [ 00:17:27.291 "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b" 00:17:27.291 ], 00:17:27.291 "product_name": "Malloc disk", 00:17:27.291 "block_size": 4128, 00:17:27.291 "num_blocks": 8192, 00:17:27.291 "uuid": "b13e5bbf-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.291 "md_size": 32, 00:17:27.291 "md_interleave": true, 00:17:27.291 "dif_type": 0, 00:17:27.291 "assigned_rate_limits": { 00:17:27.291 "rw_ios_per_sec": 0, 00:17:27.291 "rw_mbytes_per_sec": 0, 00:17:27.291 "r_mbytes_per_sec": 0, 00:17:27.291 "w_mbytes_per_sec": 0 00:17:27.291 }, 00:17:27.291 "claimed": true, 00:17:27.291 "claim_type": "exclusive_write", 00:17:27.291 "zoned": false, 00:17:27.291 "supported_io_types": { 00:17:27.291 "read": true, 00:17:27.291 "write": true, 00:17:27.291 "unmap": true, 00:17:27.291 "flush": true, 00:17:27.291 "reset": true, 00:17:27.291 "nvme_admin": false, 00:17:27.291 "nvme_io": false, 00:17:27.291 "nvme_io_md": false, 00:17:27.291 "write_zeroes": true, 00:17:27.291 "zcopy": true, 00:17:27.291 "get_zone_info": false, 00:17:27.291 "zone_management": false, 00:17:27.291 "zone_append": false, 00:17:27.291 "compare": false, 00:17:27.291 "compare_and_write": false, 00:17:27.291 "abort": true, 00:17:27.291 "seek_hole": false, 00:17:27.291 "seek_data": false, 00:17:27.291 "copy": true, 00:17:27.291 "nvme_iov_md": false 00:17:27.291 }, 00:17:27.291 "memory_domains": [ 00:17:27.291 { 00:17:27.291 "dma_device_id": "system", 00:17:27.291 "dma_device_type": 1 00:17:27.291 }, 00:17:27.291 { 00:17:27.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.291 "dma_device_type": 2 00:17:27.291 } 00:17:27.291 ], 00:17:27.291 "driver_specific": {} 00:17:27.291 }' 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.291 21:53:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.291 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.550 "name": "BaseBdev2", 00:17:27.550 "aliases": [ 00:17:27.550 "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b" 00:17:27.550 ], 00:17:27.550 "product_name": "Malloc disk", 00:17:27.550 "block_size": 4128, 00:17:27.550 "num_blocks": 8192, 00:17:27.550 "uuid": "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b", 00:17:27.550 "md_size": 32, 00:17:27.550 "md_interleave": true, 00:17:27.550 "dif_type": 0, 00:17:27.550 "assigned_rate_limits": { 00:17:27.550 "rw_ios_per_sec": 0, 00:17:27.550 "rw_mbytes_per_sec": 0, 00:17:27.550 "r_mbytes_per_sec": 0, 00:17:27.550 "w_mbytes_per_sec": 0 00:17:27.550 }, 00:17:27.550 "claimed": true, 00:17:27.550 "claim_type": "exclusive_write", 00:17:27.550 "zoned": false, 00:17:27.550 "supported_io_types": { 00:17:27.550 "read": true, 00:17:27.550 "write": true, 00:17:27.550 "unmap": true, 00:17:27.550 "flush": true, 00:17:27.550 "reset": true, 00:17:27.550 "nvme_admin": false, 00:17:27.550 "nvme_io": false, 00:17:27.550 "nvme_io_md": false, 00:17:27.550 "write_zeroes": true, 00:17:27.550 "zcopy": true, 00:17:27.550 "get_zone_info": false, 00:17:27.550 "zone_management": false, 00:17:27.550 "zone_append": false, 00:17:27.550 "compare": false, 00:17:27.550 "compare_and_write": false, 00:17:27.550 "abort": true, 00:17:27.550 "seek_hole": false, 00:17:27.550 "seek_data": false, 00:17:27.550 "copy": true, 00:17:27.550 "nvme_iov_md": false 00:17:27.550 }, 00:17:27.550 "memory_domains": [ 00:17:27.550 { 00:17:27.550 "dma_device_id": "system", 00:17:27.550 "dma_device_type": 1 00:17:27.550 }, 00:17:27.550 { 00:17:27.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.550 "dma_device_type": 2 00:17:27.550 } 00:17:27.550 ], 00:17:27.550 "driver_specific": {} 00:17:27.550 }' 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.550 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.551 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:27.551 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.551 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.551 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:17:27.551 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:27.810 [2024-07-15 21:53:42.968423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.810 21:53:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.069 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.069 "name": "Existed_Raid", 00:17:28.069 "uuid": "b230ae07-42f4-11ef-9f7f-e9a656123a8b", 00:17:28.069 "strip_size_kb": 0, 00:17:28.069 "state": "online", 00:17:28.069 "raid_level": "raid1", 00:17:28.069 "superblock": true, 00:17:28.069 "num_base_bdevs": 2, 00:17:28.069 "num_base_bdevs_discovered": 1, 00:17:28.069 "num_base_bdevs_operational": 1, 00:17:28.069 "base_bdevs_list": [ 00:17:28.069 { 00:17:28.069 "name": null, 00:17:28.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.069 "is_configured": false, 00:17:28.069 "data_offset": 256, 00:17:28.069 "data_size": 7936 00:17:28.069 }, 00:17:28.069 { 00:17:28.069 "name": "BaseBdev2", 00:17:28.069 "uuid": "b2a0fcb4-42f4-11ef-9f7f-e9a656123a8b", 00:17:28.069 "is_configured": true, 00:17:28.069 "data_offset": 256, 00:17:28.069 "data_size": 
7936 00:17:28.069 } 00:17:28.069 ] 00:17:28.069 }' 00:17:28.069 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.069 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.636 21:53:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:28.894 [2024-07-15 21:53:44.006818] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.894 [2024-07-15 21:53:44.006891] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.894 [2024-07-15 21:53:44.013266] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.894 [2024-07-15 21:53:44.013283] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.894 [2024-07-15 21:53:44.013328] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105128a34a00 name Existed_Raid, state offline 00:17:28.894 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:28.894 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:28.894 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.894 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66772 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@942 -- # '[' -z 66772 ']' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # kill -0 66772 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@947 -- # uname 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@947 
-- # '[' FreeBSD = Linux ']' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # ps -c -o command 66772 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # tail -1 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:17:29.152 killing process with pid 66772 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # echo 'killing process with pid 66772' 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@961 -- # kill 66772 00:17:29.152 [2024-07-15 21:53:44.255439] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.152 [2024-07-15 21:53:44.255468] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.152 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # wait 66772 00:17:29.411 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:17:29.411 00:17:29.411 real 0m8.340s 00:17:29.411 user 0m14.224s 00:17:29.411 sys 0m1.682s 00:17:29.411 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:29.411 ************************************ 00:17:29.411 END TEST raid_state_function_test_sb_md_interleaved 00:17:29.411 ************************************ 00:17:29.411 21:53:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.411 21:53:44 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:17:29.411 21:53:44 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:29.411 21:53:44 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:17:29.411 21:53:44 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:29.411 21:53:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.411 ************************************ 00:17:29.411 START TEST raid_superblock_test_md_interleaved 00:17:29.411 ************************************ 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1117 -- # raid_superblock_test raid1 2 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
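The teardown traced just above (autotest_common.sh tags @942-@966) reduces to a small helper: confirm the pid is alive, resolve its command name with the platform's ps(1), refuse to kill a sudo wrapper, then kill and reap it. A minimal reconstruction from the xtrace — only the FreeBSD branch actually runs here, so the Linux ps form is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @942: no pid, nothing to do
        kill -0 "$pid" || return 1                # @946: process already gone
        if [ "$(uname)" = Linux ]; then           # @947: pick a ps(1) dialect
            process_name=$(ps -o comm= -p "$pid")              # assumed Linux form, not traced here
        else
            process_name=$(ps -c -o command "$pid" | tail -1)  # @950: FreeBSD form seen above
        fi
        [ "$process_name" = sudo ] && return 1    # @952: a sudo wrapper needs different handling; bail out in this sketch
        echo "killing process with pid $pid"      # @960
        kill "$pid"                               # @961
        wait "$pid"                               # @966: reap it so the RPC socket is freed for the next test
    }

Here bdev_svc (pid 66772) passes the sudo check and is killed and waited on, which is why the next test can immediately bind the same /var/tmp/spdk-raid.sock.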
00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67042 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67042 /var/tmp/spdk-raid.sock 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@823 -- # '[' -z 67042 ']' 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:29.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:29.411 21:53:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.411 [2024-07-15 21:53:44.491367] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:29.411 [2024-07-15 21:53:44.491628] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:29.978 EAL: TSC is not safe to use in SMP mode 00:17:29.978 EAL: TSC is not invariant 00:17:29.978 [2024-07-15 21:53:45.049685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.978 [2024-07-15 21:53:45.124750] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
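Between the "Waiting for process to start up..." banner above and the `return 0` that follows, waitforlisten is just a bounded poll: the trace shows its locals (rpc_addr, max_retries=100 at @827-@828) and, once the reactor is up, the `(( i == 0 ))` check and `return 0` (@852/@856); the probe itself is hidden behind xtrace_disable (@832). A sketch of that loop, assuming rpc_get_methods as the liveness probe:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk-raid.sock}   # @827
        local max_retries=100                          # @828
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                 # app died before it ever listened
            # assumed probe; any cheap RPC succeeding means the socket is answering
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0                               # @856: up and listening
            fi
            sleep 0.1
        done
        return 1                                       # never listened within max_retries polls
    }

In this run the bdev_svc app (raid_pid=67042) comes up after the EAL notices above, so the poll returns 0 and the test proceeds to build its base bdevs.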
00:17:29.978 [2024-07-15 21:53:45.127233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.978 [2024-07-15 21:53:45.128120] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.978 [2024-07-15 21:53:45.128134] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # return 0 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:30.544 malloc1 00:17:30.544 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.803 [2024-07-15 21:53:45.963249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.803 [2024-07-15 21:53:45.963336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.803 [2024-07-15 21:53:45.963363] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34780 00:17:30.803 [2024-07-15 21:53:45.963371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.803 [2024-07-15 21:53:45.964282] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.803 [2024-07-15 21:53:45.964307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.803 pt1 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.803 21:53:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:31.062 malloc2 00:17:31.062 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.321 [2024-07-15 21:53:46.427257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.321 [2024-07-15 21:53:46.427321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.321 [2024-07-15 21:53:46.427357] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34c80 00:17:31.321 [2024-07-15 21:53:46.427365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.321 [2024-07-15 21:53:46.428018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.321 [2024-07-15 21:53:46.428047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.321 pt2 00:17:31.321 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:31.321 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:31.321 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:31.580 [2024-07-15 21:53:46.639282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.580 [2024-07-15 21:53:46.639873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.580 [2024-07-15 21:53:46.639941] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b7ca0c34f00 00:17:31.580 [2024-07-15 21:53:46.639956] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:31.580 [2024-07-15 21:53:46.640013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b7ca0c97e20 00:17:31.580 [2024-07-15 21:53:46.640030] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b7ca0c34f00 00:17:31.580 [2024-07-15 21:53:46.640034] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2b7ca0c34f00 00:17:31.580 [2024-07-15 21:53:46.640065] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.580 21:53:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.580 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.839 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.839 "name": "raid_bdev1", 00:17:31.839 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:31.839 "strip_size_kb": 0, 00:17:31.839 "state": "online", 00:17:31.839 "raid_level": "raid1", 00:17:31.839 "superblock": true, 00:17:31.839 "num_base_bdevs": 2, 00:17:31.839 "num_base_bdevs_discovered": 2, 00:17:31.839 "num_base_bdevs_operational": 2, 00:17:31.839 "base_bdevs_list": [ 00:17:31.839 { 00:17:31.839 "name": "pt1", 00:17:31.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.839 "is_configured": true, 00:17:31.839 "data_offset": 256, 00:17:31.839 "data_size": 7936 00:17:31.839 }, 00:17:31.839 { 00:17:31.839 "name": "pt2", 00:17:31.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.839 "is_configured": true, 00:17:31.839 "data_offset": 256, 00:17:31.839 "data_size": 7936 00:17:31.839 } 00:17:31.839 ] 00:17:31.839 }' 00:17:31.839 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.839 21:53:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.097 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:32.356 [2024-07-15 21:53:47.463317] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:32.356 "name": "raid_bdev1", 
00:17:32.356 "aliases": [ 00:17:32.356 "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b" 00:17:32.356 ], 00:17:32.356 "product_name": "Raid Volume", 00:17:32.356 "block_size": 4128, 00:17:32.356 "num_blocks": 7936, 00:17:32.356 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:32.356 "md_size": 32, 00:17:32.356 "md_interleave": true, 00:17:32.356 "dif_type": 0, 00:17:32.356 "assigned_rate_limits": { 00:17:32.356 "rw_ios_per_sec": 0, 00:17:32.356 "rw_mbytes_per_sec": 0, 00:17:32.356 "r_mbytes_per_sec": 0, 00:17:32.356 "w_mbytes_per_sec": 0 00:17:32.356 }, 00:17:32.356 "claimed": false, 00:17:32.356 "zoned": false, 00:17:32.356 "supported_io_types": { 00:17:32.356 "read": true, 00:17:32.356 "write": true, 00:17:32.356 "unmap": false, 00:17:32.356 "flush": false, 00:17:32.356 "reset": true, 00:17:32.356 "nvme_admin": false, 00:17:32.356 "nvme_io": false, 00:17:32.356 "nvme_io_md": false, 00:17:32.356 "write_zeroes": true, 00:17:32.356 "zcopy": false, 00:17:32.356 "get_zone_info": false, 00:17:32.356 "zone_management": false, 00:17:32.356 "zone_append": false, 00:17:32.356 "compare": false, 00:17:32.356 "compare_and_write": false, 00:17:32.356 "abort": false, 00:17:32.356 "seek_hole": false, 00:17:32.356 "seek_data": false, 00:17:32.356 "copy": false, 00:17:32.356 "nvme_iov_md": false 00:17:32.356 }, 00:17:32.356 "memory_domains": [ 00:17:32.356 { 00:17:32.356 "dma_device_id": "system", 00:17:32.356 "dma_device_type": 1 00:17:32.356 }, 00:17:32.356 { 00:17:32.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.356 "dma_device_type": 2 00:17:32.356 }, 00:17:32.356 { 00:17:32.356 "dma_device_id": "system", 00:17:32.356 "dma_device_type": 1 00:17:32.356 }, 00:17:32.356 { 00:17:32.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.356 "dma_device_type": 2 00:17:32.356 } 00:17:32.356 ], 00:17:32.356 "driver_specific": { 00:17:32.356 "raid": { 00:17:32.356 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:32.356 "strip_size_kb": 0, 00:17:32.356 "state": "online", 00:17:32.356 "raid_level": "raid1", 00:17:32.356 "superblock": true, 00:17:32.356 "num_base_bdevs": 2, 00:17:32.356 "num_base_bdevs_discovered": 2, 00:17:32.356 "num_base_bdevs_operational": 2, 00:17:32.356 "base_bdevs_list": [ 00:17:32.356 { 00:17:32.356 "name": "pt1", 00:17:32.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.356 "is_configured": true, 00:17:32.356 "data_offset": 256, 00:17:32.356 "data_size": 7936 00:17:32.356 }, 00:17:32.356 { 00:17:32.356 "name": "pt2", 00:17:32.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.356 "is_configured": true, 00:17:32.356 "data_offset": 256, 00:17:32.356 "data_size": 7936 00:17:32.356 } 00:17:32.356 ] 00:17:32.356 } 00:17:32.356 } 00:17:32.356 }' 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:32.356 pt2' 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:32.356 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.614 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.614 "name": "pt1", 00:17:32.614 "aliases": [ 00:17:32.614 "00000000-0000-0000-0000-000000000001" 00:17:32.614 ], 00:17:32.614 "product_name": "passthru", 00:17:32.615 "block_size": 4128, 00:17:32.615 "num_blocks": 8192, 00:17:32.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.615 "md_size": 32, 00:17:32.615 "md_interleave": true, 00:17:32.615 "dif_type": 0, 00:17:32.615 "assigned_rate_limits": { 00:17:32.615 "rw_ios_per_sec": 0, 00:17:32.615 "rw_mbytes_per_sec": 0, 00:17:32.615 "r_mbytes_per_sec": 0, 00:17:32.615 "w_mbytes_per_sec": 0 00:17:32.615 }, 00:17:32.615 "claimed": true, 00:17:32.615 "claim_type": "exclusive_write", 00:17:32.615 "zoned": false, 00:17:32.615 "supported_io_types": { 00:17:32.615 "read": true, 00:17:32.615 "write": true, 00:17:32.615 "unmap": true, 00:17:32.615 "flush": true, 00:17:32.615 "reset": true, 00:17:32.615 "nvme_admin": false, 00:17:32.615 "nvme_io": false, 00:17:32.615 "nvme_io_md": false, 00:17:32.615 "write_zeroes": true, 00:17:32.615 "zcopy": true, 00:17:32.615 "get_zone_info": false, 00:17:32.615 "zone_management": false, 00:17:32.615 "zone_append": false, 00:17:32.615 "compare": false, 00:17:32.615 "compare_and_write": false, 00:17:32.615 "abort": true, 00:17:32.615 "seek_hole": false, 00:17:32.615 "seek_data": false, 00:17:32.615 "copy": true, 00:17:32.615 "nvme_iov_md": false 00:17:32.615 }, 00:17:32.615 "memory_domains": [ 00:17:32.615 { 00:17:32.615 "dma_device_id": "system", 00:17:32.615 "dma_device_type": 1 00:17:32.615 }, 00:17:32.615 { 00:17:32.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.615 "dma_device_type": 2 00:17:32.615 } 00:17:32.615 ], 00:17:32.615 "driver_specific": { 00:17:32.615 "passthru": { 00:17:32.615 "name": "pt1", 00:17:32.615 "base_bdev_name": "malloc1" 00:17:32.615 } 00:17:32.615 } 00:17:32.615 }' 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
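At this point the pt1 leg of the per-base-bdev property check has passed (block_size 4128, md_size 32, md_interleave true, dif_type 0); pt2 gets the identical treatment next. Stripped of the harness, the construction these checks validate is six RPCs; a standalone reproduction sketch with paths and arguments exactly as traced (the jq filter is illustrative):

    # two 32 MiB malloc bdevs, 4096-byte blocks, 32 bytes of interleaved metadata (-m 32 -i)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    # wrap each in a passthru bdev with a fixed UUID so superblock contents are deterministic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble the raid1 volume with an on-disk superblock (-s); blocklen becomes 4096+32 = 4128
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    # confirm the interleaved-md geometry asserted above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'

The resulting volume reports num_blocks 7936 against the base bdevs' 8192 with data_offset 256: the superblock reserves 256 blocks at the start of each base bdev, which is exactly the layout the raid_bdev_info dump above records.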
00:17:32.615 21:53:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.889 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.889 "name": "pt2", 00:17:32.889 "aliases": [ 00:17:32.889 "00000000-0000-0000-0000-000000000002" 00:17:32.889 ], 00:17:32.889 "product_name": "passthru", 00:17:32.889 "block_size": 4128, 00:17:32.889 "num_blocks": 8192, 00:17:32.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.889 "md_size": 32, 00:17:32.889 "md_interleave": true, 00:17:32.889 "dif_type": 0, 00:17:32.889 "assigned_rate_limits": { 00:17:32.889 "rw_ios_per_sec": 0, 00:17:32.889 "rw_mbytes_per_sec": 0, 00:17:32.889 "r_mbytes_per_sec": 0, 00:17:32.889 "w_mbytes_per_sec": 0 00:17:32.889 }, 00:17:32.889 "claimed": true, 00:17:32.889 "claim_type": "exclusive_write", 00:17:32.889 "zoned": false, 00:17:32.889 "supported_io_types": { 00:17:32.889 "read": true, 00:17:32.889 "write": true, 00:17:32.889 "unmap": true, 00:17:32.889 "flush": true, 00:17:32.889 "reset": true, 00:17:32.889 "nvme_admin": false, 00:17:32.889 "nvme_io": false, 00:17:32.889 "nvme_io_md": false, 00:17:32.889 "write_zeroes": true, 00:17:32.889 "zcopy": true, 00:17:32.889 "get_zone_info": false, 00:17:32.889 "zone_management": false, 00:17:32.889 "zone_append": false, 00:17:32.889 "compare": false, 00:17:32.889 "compare_and_write": false, 00:17:32.889 "abort": true, 00:17:32.889 "seek_hole": false, 00:17:32.889 "seek_data": false, 00:17:32.889 "copy": true, 00:17:32.889 "nvme_iov_md": false 00:17:32.889 }, 00:17:32.889 "memory_domains": [ 00:17:32.889 { 00:17:32.889 "dma_device_id": "system", 00:17:32.889 "dma_device_type": 1 00:17:32.889 }, 00:17:32.889 { 00:17:32.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.889 "dma_device_type": 2 00:17:32.889 } 00:17:32.889 ], 00:17:32.889 "driver_specific": { 00:17:32.889 "passthru": { 00:17:32.889 "name": "pt2", 00:17:32.889 "base_bdev_name": "malloc2" 00:17:32.889 } 00:17:32.889 } 00:17:32.889 }' 00:17:32.889 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.147 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:33.405 [2024-07-15 21:53:48.391364] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.405 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b61a1ca2-42f4-11ef-9f7f-e9a656123a8b 00:17:33.405 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z b61a1ca2-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:33.405 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.663 [2024-07-15 21:53:48.643302] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.663 [2024-07-15 21:53:48.643329] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.663 [2024-07-15 21:53:48.643365] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.663 [2024-07-15 21:53:48.643378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.663 [2024-07-15 21:53:48.643382] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c34f00 name raid_bdev1, state offline 00:17:33.663 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:33.663 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.921 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:33.921 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:33.921 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.922 21:53:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:34.180 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.180 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.437 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:34.437 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # local es=0 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.438 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:34.695 [2024-07-15 21:53:49.855333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.695 [2024-07-15 21:53:49.855970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.696 [2024-07-15 21:53:49.855996] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:34.696 [2024-07-15 21:53:49.856045] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:34.696 [2024-07-15 21:53:49.856056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.696 [2024-07-15 21:53:49.856060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c34c80 name raid_bdev1, state configuring 00:17:34.696 request: 00:17:34.696 { 00:17:34.696 "name": "raid_bdev1", 00:17:34.696 "raid_level": "raid1", 00:17:34.696 "base_bdevs": [ 00:17:34.696 "malloc1", 00:17:34.696 "malloc2" 00:17:34.696 ], 00:17:34.696 "superblock": false, 00:17:34.696 "method": "bdev_raid_create", 00:17:34.696 "req_id": 1 00:17:34.696 } 00:17:34.696 Got JSON-RPC error response 00:17:34.696 response: 00:17:34.696 { 00:17:34.696 "code": -17, 00:17:34.696 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.696 } 00:17:34.696 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@645 -- # es=1 00:17:34.696 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:34.696 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:34.696 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:34.696 21:53:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:34.696 21:53:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.953 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:34.953 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:34.953 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.212 [2024-07-15 21:53:50.299334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.212 [2024-07-15 21:53:50.299394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.212 [2024-07-15 21:53:50.299422] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34780 00:17:35.212 [2024-07-15 21:53:50.299429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.212 [2024-07-15 21:53:50.300050] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.212 [2024-07-15 21:53:50.300082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.212 [2024-07-15 21:53:50.300100] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.212 [2024-07-15 21:53:50.300112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.212 pt1 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.212 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.470 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.470 "name": "raid_bdev1", 00:17:35.470 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:35.470 "strip_size_kb": 0, 00:17:35.470 "state": "configuring", 00:17:35.470 "raid_level": "raid1", 00:17:35.470 
"superblock": true, 00:17:35.470 "num_base_bdevs": 2, 00:17:35.470 "num_base_bdevs_discovered": 1, 00:17:35.470 "num_base_bdevs_operational": 2, 00:17:35.470 "base_bdevs_list": [ 00:17:35.470 { 00:17:35.470 "name": "pt1", 00:17:35.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.470 "is_configured": true, 00:17:35.470 "data_offset": 256, 00:17:35.470 "data_size": 7936 00:17:35.470 }, 00:17:35.471 { 00:17:35.471 "name": null, 00:17:35.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.471 "is_configured": false, 00:17:35.471 "data_offset": 256, 00:17:35.471 "data_size": 7936 00:17:35.471 } 00:17:35.471 ] 00:17:35.471 }' 00:17:35.471 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.471 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.730 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:35.730 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:35.730 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:35.730 21:53:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.989 [2024-07-15 21:53:51.023367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.989 [2024-07-15 21:53:51.023447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.989 [2024-07-15 21:53:51.023458] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34f00 00:17:35.989 [2024-07-15 21:53:51.023467] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.989 [2024-07-15 21:53:51.023531] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.989 [2024-07-15 21:53:51.023540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.989 [2024-07-15 21:53:51.023556] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.989 [2024-07-15 21:53:51.023564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.989 [2024-07-15 21:53:51.023585] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b7ca0c35180 00:17:35.989 [2024-07-15 21:53:51.023589] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.989 [2024-07-15 21:53:51.023605] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b7ca0c97e20 00:17:35.989 [2024-07-15 21:53:51.023634] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b7ca0c35180 00:17:35.989 [2024-07-15 21:53:51.023637] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2b7ca0c35180 00:17:35.989 [2024-07-15 21:53:51.023649] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.989 pt2 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.989 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.248 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.248 "name": "raid_bdev1", 00:17:36.248 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:36.248 "strip_size_kb": 0, 00:17:36.248 "state": "online", 00:17:36.248 "raid_level": "raid1", 00:17:36.248 "superblock": true, 00:17:36.248 "num_base_bdevs": 2, 00:17:36.248 "num_base_bdevs_discovered": 2, 00:17:36.248 "num_base_bdevs_operational": 2, 00:17:36.248 "base_bdevs_list": [ 00:17:36.248 { 00:17:36.248 "name": "pt1", 00:17:36.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.248 "is_configured": true, 00:17:36.248 "data_offset": 256, 00:17:36.248 "data_size": 7936 00:17:36.248 }, 00:17:36.248 { 00:17:36.248 "name": "pt2", 00:17:36.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.248 "is_configured": true, 00:17:36.248 "data_offset": 256, 00:17:36.248 "data_size": 7936 00:17:36.248 } 00:17:36.248 ] 00:17:36.248 }' 00:17:36.248 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.248 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:36.507 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:36.766 [2024-07-15 21:53:51.815432] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.766 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:36.766 "name": "raid_bdev1", 00:17:36.766 "aliases": [ 00:17:36.766 "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b" 00:17:36.766 ], 00:17:36.766 "product_name": "Raid Volume", 00:17:36.766 "block_size": 4128, 00:17:36.766 "num_blocks": 7936, 00:17:36.766 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:36.766 "md_size": 32, 00:17:36.766 "md_interleave": true, 00:17:36.766 "dif_type": 0, 00:17:36.766 "assigned_rate_limits": { 00:17:36.766 "rw_ios_per_sec": 0, 00:17:36.766 "rw_mbytes_per_sec": 0, 00:17:36.766 "r_mbytes_per_sec": 0, 00:17:36.766 "w_mbytes_per_sec": 0 00:17:36.766 }, 00:17:36.766 "claimed": false, 00:17:36.766 "zoned": false, 00:17:36.766 "supported_io_types": { 00:17:36.766 "read": true, 00:17:36.766 "write": true, 00:17:36.766 "unmap": false, 00:17:36.766 "flush": false, 00:17:36.766 "reset": true, 00:17:36.766 "nvme_admin": false, 00:17:36.766 "nvme_io": false, 00:17:36.766 "nvme_io_md": false, 00:17:36.766 "write_zeroes": true, 00:17:36.766 "zcopy": false, 00:17:36.766 "get_zone_info": false, 00:17:36.766 "zone_management": false, 00:17:36.766 "zone_append": false, 00:17:36.766 "compare": false, 00:17:36.766 "compare_and_write": false, 00:17:36.766 "abort": false, 00:17:36.766 "seek_hole": false, 00:17:36.766 "seek_data": false, 00:17:36.766 "copy": false, 00:17:36.766 "nvme_iov_md": false 00:17:36.766 }, 00:17:36.766 "memory_domains": [ 00:17:36.766 { 00:17:36.766 "dma_device_id": "system", 00:17:36.766 "dma_device_type": 1 00:17:36.766 }, 00:17:36.766 { 00:17:36.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.766 "dma_device_type": 2 00:17:36.766 }, 00:17:36.766 { 00:17:36.766 "dma_device_id": "system", 00:17:36.766 "dma_device_type": 1 00:17:36.766 }, 00:17:36.766 { 00:17:36.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.766 "dma_device_type": 2 00:17:36.766 } 00:17:36.766 ], 00:17:36.766 "driver_specific": { 00:17:36.766 "raid": { 00:17:36.766 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:36.766 "strip_size_kb": 0, 00:17:36.766 "state": "online", 00:17:36.766 "raid_level": "raid1", 00:17:36.766 "superblock": true, 00:17:36.766 "num_base_bdevs": 2, 00:17:36.766 "num_base_bdevs_discovered": 2, 00:17:36.766 "num_base_bdevs_operational": 2, 00:17:36.766 "base_bdevs_list": [ 00:17:36.766 { 00:17:36.766 "name": "pt1", 00:17:36.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.766 "is_configured": true, 00:17:36.766 "data_offset": 256, 00:17:36.766 "data_size": 7936 00:17:36.766 }, 00:17:36.766 { 00:17:36.766 "name": "pt2", 00:17:36.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.766 "is_configured": true, 00:17:36.766 "data_offset": 256, 00:17:36.766 "data_size": 7936 00:17:36.766 } 00:17:36.766 ] 00:17:36.766 } 00:17:36.766 } 00:17:36.766 }' 00:17:36.766 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.766 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:36.766 pt2' 00:17:36.766 21:53:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.766 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.766 21:53:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.024 "name": "pt1", 00:17:37.024 "aliases": [ 00:17:37.024 "00000000-0000-0000-0000-000000000001" 00:17:37.024 ], 00:17:37.024 "product_name": "passthru", 00:17:37.024 "block_size": 4128, 00:17:37.024 "num_blocks": 8192, 00:17:37.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.024 "md_size": 32, 00:17:37.024 "md_interleave": true, 00:17:37.024 "dif_type": 0, 00:17:37.024 "assigned_rate_limits": { 00:17:37.024 "rw_ios_per_sec": 0, 00:17:37.024 "rw_mbytes_per_sec": 0, 00:17:37.024 "r_mbytes_per_sec": 0, 00:17:37.024 "w_mbytes_per_sec": 0 00:17:37.024 }, 00:17:37.024 "claimed": true, 00:17:37.024 "claim_type": "exclusive_write", 00:17:37.024 "zoned": false, 00:17:37.024 "supported_io_types": { 00:17:37.024 "read": true, 00:17:37.024 "write": true, 00:17:37.024 "unmap": true, 00:17:37.024 "flush": true, 00:17:37.024 "reset": true, 00:17:37.024 "nvme_admin": false, 00:17:37.024 "nvme_io": false, 00:17:37.024 "nvme_io_md": false, 00:17:37.024 "write_zeroes": true, 00:17:37.024 "zcopy": true, 00:17:37.024 "get_zone_info": false, 00:17:37.024 "zone_management": false, 00:17:37.024 "zone_append": false, 00:17:37.024 "compare": false, 00:17:37.024 "compare_and_write": false, 00:17:37.024 "abort": true, 00:17:37.024 "seek_hole": false, 00:17:37.024 "seek_data": false, 00:17:37.024 "copy": true, 00:17:37.024 "nvme_iov_md": false 00:17:37.024 }, 00:17:37.024 "memory_domains": [ 00:17:37.024 { 00:17:37.024 "dma_device_id": "system", 00:17:37.024 "dma_device_type": 1 00:17:37.024 }, 00:17:37.024 { 00:17:37.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.024 "dma_device_type": 2 00:17:37.024 } 00:17:37.024 ], 00:17:37.024 "driver_specific": { 00:17:37.024 "passthru": { 00:17:37.024 "name": "pt1", 00:17:37.024 "base_bdev_name": "malloc1" 00:17:37.024 } 00:17:37.024 } 00:17:37.024 }' 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:37.024 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:37.025 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.282 "name": "pt2", 00:17:37.282 "aliases": [ 00:17:37.282 "00000000-0000-0000-0000-000000000002" 00:17:37.282 ], 00:17:37.282 "product_name": "passthru", 00:17:37.282 "block_size": 4128, 00:17:37.282 "num_blocks": 8192, 00:17:37.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.282 "md_size": 32, 00:17:37.282 "md_interleave": true, 00:17:37.282 "dif_type": 0, 00:17:37.282 "assigned_rate_limits": { 00:17:37.282 "rw_ios_per_sec": 0, 00:17:37.282 "rw_mbytes_per_sec": 0, 00:17:37.282 "r_mbytes_per_sec": 0, 00:17:37.282 "w_mbytes_per_sec": 0 00:17:37.282 }, 00:17:37.282 "claimed": true, 00:17:37.282 "claim_type": "exclusive_write", 00:17:37.282 "zoned": false, 00:17:37.282 "supported_io_types": { 00:17:37.282 "read": true, 00:17:37.282 "write": true, 00:17:37.282 "unmap": true, 00:17:37.282 "flush": true, 00:17:37.282 "reset": true, 00:17:37.282 "nvme_admin": false, 00:17:37.282 "nvme_io": false, 00:17:37.282 "nvme_io_md": false, 00:17:37.282 "write_zeroes": true, 00:17:37.282 "zcopy": true, 00:17:37.282 "get_zone_info": false, 00:17:37.282 "zone_management": false, 00:17:37.282 "zone_append": false, 00:17:37.282 "compare": false, 00:17:37.282 "compare_and_write": false, 00:17:37.282 "abort": true, 00:17:37.282 "seek_hole": false, 00:17:37.282 "seek_data": false, 00:17:37.282 "copy": true, 00:17:37.282 "nvme_iov_md": false 00:17:37.282 }, 00:17:37.282 "memory_domains": [ 00:17:37.282 { 00:17:37.282 "dma_device_id": "system", 00:17:37.282 "dma_device_type": 1 00:17:37.282 }, 00:17:37.282 { 00:17:37.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.282 "dma_device_type": 2 00:17:37.282 } 00:17:37.282 ], 00:17:37.282 "driver_specific": { 00:17:37.282 "passthru": { 00:17:37.282 "name": "pt2", 00:17:37.282 "base_bdev_name": "malloc2" 00:17:37.282 } 00:17:37.282 } 00:17:37.282 }' 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:37.282 
21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.282 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:37.540 [2024-07-15 21:53:52.627504] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.540 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' b61a1ca2-42f4-11ef-9f7f-e9a656123a8b '!=' b61a1ca2-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:37.541 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:37.541 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.541 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:37.541 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:37.800 [2024-07-15 21:53:52.835471] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.800 21:53:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.059 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.059 "name": "raid_bdev1", 00:17:38.059 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:38.059 "strip_size_kb": 0, 00:17:38.059 "state": "online", 00:17:38.059 "raid_level": "raid1", 00:17:38.059 "superblock": true, 00:17:38.059 "num_base_bdevs": 2, 00:17:38.059 
"num_base_bdevs_discovered": 1, 00:17:38.059 "num_base_bdevs_operational": 1, 00:17:38.059 "base_bdevs_list": [ 00:17:38.059 { 00:17:38.059 "name": null, 00:17:38.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.059 "is_configured": false, 00:17:38.059 "data_offset": 256, 00:17:38.059 "data_size": 7936 00:17:38.059 }, 00:17:38.059 { 00:17:38.059 "name": "pt2", 00:17:38.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.059 "is_configured": true, 00:17:38.059 "data_offset": 256, 00:17:38.059 "data_size": 7936 00:17:38.059 } 00:17:38.059 ] 00:17:38.059 }' 00:17:38.059 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.059 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.318 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:38.577 [2024-07-15 21:53:53.615502] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.577 [2024-07-15 21:53:53.615533] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.577 [2024-07-15 21:53:53.615561] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.577 [2024-07-15 21:53:53.615575] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.577 [2024-07-15 21:53:53.615579] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c35180 name raid_bdev1, state offline 00:17:38.577 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.577 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:38.835 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:38.835 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:38.835 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:38.835 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:38.835 21:53:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.093 [2024-07-15 21:53:54.263568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:39.093 [2024-07-15 21:53:54.263650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.093 [2024-07-15 21:53:54.263667] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34f00 00:17:39.093 [2024-07-15 21:53:54.263675] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.093 [2024-07-15 21:53:54.264575] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.093 [2024-07-15 21:53:54.264611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.093 [2024-07-15 21:53:54.264644] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:39.093 [2024-07-15 21:53:54.264657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.093 [2024-07-15 21:53:54.264677] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b7ca0c35180 00:17:39.093 [2024-07-15 21:53:54.264680] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:39.093 [2024-07-15 21:53:54.264699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b7ca0c97e20 00:17:39.093 [2024-07-15 21:53:54.264712] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b7ca0c35180 00:17:39.093 [2024-07-15 21:53:54.264716] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2b7ca0c35180 00:17:39.093 [2024-07-15 21:53:54.264727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.093 pt2 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.093 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.352 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.352 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.352 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.352 "name": "raid_bdev1", 00:17:39.352 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:39.352 "strip_size_kb": 0, 00:17:39.352 "state": "online", 00:17:39.352 "raid_level": "raid1", 
00:17:39.352 "superblock": true, 00:17:39.352 "num_base_bdevs": 2, 00:17:39.352 "num_base_bdevs_discovered": 1, 00:17:39.352 "num_base_bdevs_operational": 1, 00:17:39.352 "base_bdevs_list": [ 00:17:39.352 { 00:17:39.352 "name": null, 00:17:39.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.352 "is_configured": false, 00:17:39.352 "data_offset": 256, 00:17:39.352 "data_size": 7936 00:17:39.352 }, 00:17:39.352 { 00:17:39.352 "name": "pt2", 00:17:39.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.352 "is_configured": true, 00:17:39.352 "data_offset": 256, 00:17:39.352 "data_size": 7936 00:17:39.352 } 00:17:39.352 ] 00:17:39.352 }' 00:17:39.352 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.352 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.611 21:53:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.869 [2024-07-15 21:53:54.999575] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.869 [2024-07-15 21:53:54.999602] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.869 [2024-07-15 21:53:54.999629] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.869 [2024-07-15 21:53:54.999642] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.869 [2024-07-15 21:53:54.999646] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c35180 name raid_bdev1, state offline 00:17:39.869 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.869 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:40.128 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:40.128 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:40.128 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:40.128 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.405 [2024-07-15 21:53:55.491608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.405 [2024-07-15 21:53:55.491678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.405 [2024-07-15 21:53:55.491691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b7ca0c34c80 00:17:40.405 [2024-07-15 21:53:55.491698] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.405 [2024-07-15 21:53:55.492518] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.405 [2024-07-15 21:53:55.492551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.405 [2024-07-15 21:53:55.492582] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.405 [2024-07-15 21:53:55.492594] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.405 [2024-07-15 21:53:55.492615] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:40.405 [2024-07-15 21:53:55.492619] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.405 [2024-07-15 21:53:55.492624] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c34780 name raid_bdev1, state configuring 00:17:40.406 [2024-07-15 21:53:55.492635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.406 [2024-07-15 21:53:55.492650] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2b7ca0c34780 00:17:40.406 [2024-07-15 21:53:55.492653] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:40.406 [2024-07-15 21:53:55.492670] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2b7ca0c97e20 00:17:40.406 [2024-07-15 21:53:55.492682] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2b7ca0c34780 00:17:40.406 [2024-07-15 21:53:55.492685] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2b7ca0c34780 00:17:40.406 [2024-07-15 21:53:55.492702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.406 pt1 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.406 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.672 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.672 "name": "raid_bdev1", 00:17:40.672 "uuid": "b61a1ca2-42f4-11ef-9f7f-e9a656123a8b", 00:17:40.672 "strip_size_kb": 0, 00:17:40.672 "state": "online", 00:17:40.672 "raid_level": "raid1", 00:17:40.672 "superblock": true, 00:17:40.672 "num_base_bdevs": 2, 00:17:40.672 
"num_base_bdevs_discovered": 1, 00:17:40.672 "num_base_bdevs_operational": 1, 00:17:40.672 "base_bdevs_list": [ 00:17:40.672 { 00:17:40.672 "name": null, 00:17:40.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.672 "is_configured": false, 00:17:40.672 "data_offset": 256, 00:17:40.672 "data_size": 7936 00:17:40.672 }, 00:17:40.672 { 00:17:40.672 "name": "pt2", 00:17:40.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.672 "is_configured": true, 00:17:40.672 "data_offset": 256, 00:17:40.672 "data_size": 7936 00:17:40.672 } 00:17:40.672 ] 00:17:40.672 }' 00:17:40.672 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.672 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.931 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:40.931 21:53:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:41.191 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:41.191 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.191 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:41.191 [2024-07-15 21:53:56.367714] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' b61a1ca2-42f4-11ef-9f7f-e9a656123a8b '!=' b61a1ca2-42f4-11ef-9f7f-e9a656123a8b ']' 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67042 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@942 -- # '[' -z 67042 ']' 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # kill -0 67042 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@947 -- # uname 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # ps -c -o command 67042 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # tail -1 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # process_name=bdev_svc 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' bdev_svc = sudo ']' 00:17:41.450 killing process with pid 67042 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # echo 'killing process with pid 67042' 00:17:41.450 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@961 -- # kill 67042 00:17:41.450 [2024-07-15 21:53:56.396066] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.450 [2024-07-15 21:53:56.396111] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.450 21:53:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # wait 67042 00:17:41.450 [2024-07-15 21:53:56.396125] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.450 [2024-07-15 21:53:56.396130] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2b7ca0c34780 name raid_bdev1, state offline 00:17:41.450 [2024-07-15 21:53:56.413136] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.710 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:17:41.710 00:17:41.710 real 0m12.185s 00:17:41.710 user 0m21.497s 00:17:41.710 sys 0m2.028s 00:17:41.710 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:41.710 ************************************ 00:17:41.710 END TEST raid_superblock_test_md_interleaved 00:17:41.710 ************************************ 00:17:41.710 21:53:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.710 21:53:56 bdev_raid -- common/autotest_common.sh@1136 -- # return 0 00:17:41.710 21:53:56 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:41.710 21:53:56 bdev_raid -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:17:41.710 21:53:56 bdev_raid -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:41.710 21:53:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.710 ************************************ 00:17:41.710 START TEST raid_rebuild_test_sb_md_interleaved 00:17:41.710 ************************************ 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1117 -- # raid_rebuild_test raid1 2 true false false 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 
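The kill -0 / ps / kill / wait sequence that closes the superblock test above is the usual autotest killprocess pattern; because this run is on FreeBSD, the process name comes from ps -c -o command instead of a /proc lookup. A rough sketch of the idea, with the pid hard-coded to match the trace (the real helper lives in autotest_common.sh and handles more corner cases):

    # Probe the app with kill -0, refuse to kill sudo, then kill and reap it.
    pid=67042
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps -c -o command "$pid" | tail -1)   # FreeBSD: bare executable name
        [[ $name != sudo ]] && kill "$pid"
        wait "$pid" 2>/dev/null || true             # wait only reaps our own child
    fi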
00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67425 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67425 /var/tmp/spdk-raid.sock 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@823 -- # '[' -z 67425 ']' 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:41.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:41.710 21:53:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.710 [2024-07-15 21:53:56.733117] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:41.710 [2024-07-15 21:53:56.733402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:41.710 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:41.710 Zero copy mechanism will not be used. 00:17:42.278 EAL: TSC is not safe to use in SMP mode 00:17:42.278 EAL: TSC is not invariant 00:17:42.278 [2024-07-15 21:53:57.265163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.278 [2024-07-15 21:53:57.360578] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
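bdevperf has just been launched against /var/tmp/spdk-raid.sock and the harness now blocks in waitforlisten until the RPC socket answers. A simplified sketch of what that wait amounts to (the real helper in autotest_common.sh also keeps checking that the pid is still alive):

    # Poll the RPC socket until the target answers a trivial RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for _ in $(seq 1 100); do
        $rpc -s $sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done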
00:17:42.278 [2024-07-15 21:53:57.363338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.278 [2024-07-15 21:53:57.364329] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.278 [2024-07-15 21:53:57.364346] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.846 21:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:42.846 21:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # return 0 00:17:42.846 21:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:17:42.847 21:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:42.847 BaseBdev1_malloc 00:17:42.847 21:53:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:43.106 [2024-07-15 21:53:58.181567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:43.106 [2024-07-15 21:53:58.181655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.106 [2024-07-15 21:53:58.182334] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea34780 00:17:43.106 [2024-07-15 21:53:58.182366] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.106 [2024-07-15 21:53:58.183107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.106 [2024-07-15 21:53:58.183153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:43.106 BaseBdev1 00:17:43.106 21:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:17:43.106 21:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:43.365 BaseBdev2_malloc 00:17:43.365 21:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:43.624 [2024-07-15 21:53:58.609563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:43.624 [2024-07-15 21:53:58.609661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.624 [2024-07-15 21:53:58.609714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea34c80 00:17:43.624 [2024-07-15 21:53:58.609723] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.624 [2024-07-15 21:53:58.610352] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.624 [2024-07-15 21:53:58.610377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:43.624 BaseBdev2 00:17:43.624 21:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:43.883 spare_malloc 
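Each base bdev above is a two-layer stack: bdev_malloc_create 32 4096 -m 32 -i makes a 32 MiB malloc bdev with 4096-byte blocks plus 32 bytes of interleaved metadata, and a passthru bdev is layered on top so the RAID module can claim and release it by name. A condensed sketch of the same construction, mirroring the RPCs in the trace:

    # Build BaseBdev1 and BaseBdev2 as passthru-over-malloc with interleaved md.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2; do
        $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev$i
    done
    # The spare additionally gets a delay bdev in the middle (bdev_delay_create,
    # just below, with 100000us write latency), presumably so a rebuild onto it
    # does not finish before its progress can be observed.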
00:17:43.883 21:53:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:44.142 spare_delay 00:17:44.142 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:44.402 [2024-07-15 21:53:59.357663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.402 [2024-07-15 21:53:59.357732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.402 [2024-07-15 21:53:59.357781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea35400 00:17:44.402 [2024-07-15 21:53:59.357789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.402 [2024-07-15 21:53:59.358457] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.402 [2024-07-15 21:53:59.358483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.402 spare 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:44.402 [2024-07-15 21:53:59.561762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.402 [2024-07-15 21:53:59.562670] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.402 [2024-07-15 21:53:59.562772] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f5d6ea35680 00:17:44.402 [2024-07-15 21:53:59.562789] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:44.402 [2024-07-15 21:53:59.562845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97e20 00:17:44.402 [2024-07-15 21:53:59.562863] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f5d6ea35680 00:17:44.402 [2024-07-15 21:53:59.562868] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f5d6ea35680 00:17:44.402 [2024-07-15 21:53:59.562893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.402 21:53:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.402 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.661 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.661 "name": "raid_bdev1", 00:17:44.661 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:44.661 "strip_size_kb": 0, 00:17:44.661 "state": "online", 00:17:44.661 "raid_level": "raid1", 00:17:44.661 "superblock": true, 00:17:44.661 "num_base_bdevs": 2, 00:17:44.661 "num_base_bdevs_discovered": 2, 00:17:44.661 "num_base_bdevs_operational": 2, 00:17:44.661 "base_bdevs_list": [ 00:17:44.661 { 00:17:44.661 "name": "BaseBdev1", 00:17:44.661 "uuid": "81ea1601-5e73-1e5e-976b-0087d79cb6aa", 00:17:44.661 "is_configured": true, 00:17:44.661 "data_offset": 256, 00:17:44.661 "data_size": 7936 00:17:44.661 }, 00:17:44.661 { 00:17:44.661 "name": "BaseBdev2", 00:17:44.661 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:44.661 "is_configured": true, 00:17:44.661 "data_offset": 256, 00:17:44.661 "data_size": 7936 00:17:44.661 } 00:17:44.661 ] 00:17:44.661 }' 00:17:44.661 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.661 21:53:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.920 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:44.920 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:17:45.179 [2024-07-15 21:54:00.338040] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.179 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:17:45.179 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.179 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:45.438 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:17:45.438 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:17:45.438 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:17:45.438 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:45.697 [2024-07-15 21:54:00.750172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.697 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.956 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.956 "name": "raid_bdev1", 00:17:45.956 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:45.956 "strip_size_kb": 0, 00:17:45.956 "state": "online", 00:17:45.956 "raid_level": "raid1", 00:17:45.956 "superblock": true, 00:17:45.956 "num_base_bdevs": 2, 00:17:45.956 "num_base_bdevs_discovered": 1, 00:17:45.956 "num_base_bdevs_operational": 1, 00:17:45.956 "base_bdevs_list": [ 00:17:45.956 { 00:17:45.956 "name": null, 00:17:45.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.956 "is_configured": false, 00:17:45.956 "data_offset": 256, 00:17:45.956 "data_size": 7936 00:17:45.956 }, 00:17:45.956 { 00:17:45.956 "name": "BaseBdev2", 00:17:45.956 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:45.956 "is_configured": true, 00:17:45.956 "data_offset": 256, 00:17:45.956 "data_size": 7936 00:17:45.956 } 00:17:45.956 ] 00:17:45.956 }' 00:17:45.956 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.956 21:54:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.215 21:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.473 [2024-07-15 21:54:01.542438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.473 [2024-07-15 21:54:01.542835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97ec0 00:17:46.473 [2024-07-15 21:54:01.543963] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.473 21:54:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
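With spare added back (bdev_raid_add_base_bdev just above) a rebuild starts, and verify_raid_bdev_process polls the process object that bdev_raid_get_bdevs reports while a rebuild is in flight. A one-liner sketch of that status read, using the same jq fields the trace checks (.process.type, .process.target, .process.progress):

    # Print "rebuild spare <percent>" while raid_bdev1 rebuilds, else "none none 0".
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[]
        | select(.name == "raid_bdev1")
        | "\(.process.type // "none") \(.process.target // "none") \(.process.progress.percent // 0)"'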
00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.850 "name": "raid_bdev1", 00:17:47.850 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:47.850 "strip_size_kb": 0, 00:17:47.850 "state": "online", 00:17:47.850 "raid_level": "raid1", 00:17:47.850 "superblock": true, 00:17:47.850 "num_base_bdevs": 2, 00:17:47.850 "num_base_bdevs_discovered": 2, 00:17:47.850 "num_base_bdevs_operational": 2, 00:17:47.850 "process": { 00:17:47.850 "type": "rebuild", 00:17:47.850 "target": "spare", 00:17:47.850 "progress": { 00:17:47.850 "blocks": 3328, 00:17:47.850 "percent": 41 00:17:47.850 } 00:17:47.850 }, 00:17:47.850 "base_bdevs_list": [ 00:17:47.850 { 00:17:47.850 "name": "spare", 00:17:47.850 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:47.850 "is_configured": true, 00:17:47.850 "data_offset": 256, 00:17:47.850 "data_size": 7936 00:17:47.850 }, 00:17:47.850 { 00:17:47.850 "name": "BaseBdev2", 00:17:47.850 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:47.850 "is_configured": true, 00:17:47.850 "data_offset": 256, 00:17:47.850 "data_size": 7936 00:17:47.850 } 00:17:47.850 ] 00:17:47.850 }' 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.850 21:54:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:48.109 [2024-07-15 21:54:03.131029] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.109 [2024-07-15 21:54:03.154999] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:17:48.109 [2024-07-15 21:54:03.155062] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.109 [2024-07-15 21:54:03.155068] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.109 [2024-07-15 21:54:03.155081] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:17:48.109 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.109 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
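Removing spare while the rebuild is still running (bdev_raid_remove_base_bdev above) cancels the process: the WARNING/ERROR entries record the rebuild finishing with "Operation not supported by device" rather than success, and the array stays online but drops back to one configured base bdev. A short sketch of the degraded-state check the following entries perform, assuming raid_bdev1 still exists:

    # Expect online raid1 with only one of the two base bdevs configured.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_get_bdevs all | jq -e '.[]
        | select(.name == "raid_bdev1")
        | .state == "online" and .num_base_bdevs_discovered == 1' >/dev/null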
00:17:48.109 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.109 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.110 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.368 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.368 "name": "raid_bdev1", 00:17:48.368 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:48.368 "strip_size_kb": 0, 00:17:48.368 "state": "online", 00:17:48.368 "raid_level": "raid1", 00:17:48.368 "superblock": true, 00:17:48.368 "num_base_bdevs": 2, 00:17:48.368 "num_base_bdevs_discovered": 1, 00:17:48.368 "num_base_bdevs_operational": 1, 00:17:48.368 "base_bdevs_list": [ 00:17:48.368 { 00:17:48.368 "name": null, 00:17:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.368 "is_configured": false, 00:17:48.368 "data_offset": 256, 00:17:48.368 "data_size": 7936 00:17:48.368 }, 00:17:48.368 { 00:17:48.368 "name": "BaseBdev2", 00:17:48.368 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:48.368 "is_configured": true, 00:17:48.368 "data_offset": 256, 00:17:48.368 "data_size": 7936 00:17:48.368 } 00:17:48.368 ] 00:17:48.368 }' 00:17:48.368 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.368 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.627 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.628 21:54:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.886 "name": "raid_bdev1", 00:17:48.886 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:48.886 "strip_size_kb": 0, 00:17:48.886 "state": "online", 00:17:48.886 "raid_level": "raid1", 00:17:48.886 "superblock": true, 00:17:48.886 "num_base_bdevs": 2, 00:17:48.886 "num_base_bdevs_discovered": 1, 00:17:48.886 "num_base_bdevs_operational": 1, 00:17:48.886 "base_bdevs_list": [ 00:17:48.886 { 00:17:48.886 "name": null, 00:17:48.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.886 "is_configured": false, 00:17:48.886 "data_offset": 256, 00:17:48.886 "data_size": 7936 00:17:48.886 }, 00:17:48.886 { 00:17:48.886 "name": "BaseBdev2", 00:17:48.886 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:48.886 "is_configured": true, 00:17:48.886 "data_offset": 256, 00:17:48.886 "data_size": 7936 00:17:48.886 } 00:17:48.886 ] 00:17:48.886 }' 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:48.886 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.145 [2024-07-15 21:54:04.203871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.145 [2024-07-15 21:54:04.204227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97e20 00:17:49.145 [2024-07-15 21:54:04.205514] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.145 21:54:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.519 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.519 "name": "raid_bdev1", 00:17:50.519 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:50.519 "strip_size_kb": 0, 00:17:50.519 "state": "online", 00:17:50.519 "raid_level": "raid1", 00:17:50.519 "superblock": true, 00:17:50.519 "num_base_bdevs": 2, 00:17:50.519 "num_base_bdevs_discovered": 2, 00:17:50.519 
"num_base_bdevs_operational": 2, 00:17:50.519 "process": { 00:17:50.519 "type": "rebuild", 00:17:50.519 "target": "spare", 00:17:50.519 "progress": { 00:17:50.519 "blocks": 3072, 00:17:50.519 "percent": 38 00:17:50.519 } 00:17:50.519 }, 00:17:50.519 "base_bdevs_list": [ 00:17:50.519 { 00:17:50.519 "name": "spare", 00:17:50.520 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:50.520 "is_configured": true, 00:17:50.520 "data_offset": 256, 00:17:50.520 "data_size": 7936 00:17:50.520 }, 00:17:50.520 { 00:17:50.520 "name": "BaseBdev2", 00:17:50.520 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:50.520 "is_configured": true, 00:17:50.520 "data_offset": 256, 00:17:50.520 "data_size": 7936 00:17:50.520 } 00:17:50.520 ] 00:17:50.520 }' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:17:50.520 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=684 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.520 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.778 "name": "raid_bdev1", 00:17:50.778 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:50.778 "strip_size_kb": 0, 00:17:50.778 "state": "online", 00:17:50.778 "raid_level": "raid1", 00:17:50.778 "superblock": true, 00:17:50.778 
"num_base_bdevs": 2, 00:17:50.778 "num_base_bdevs_discovered": 2, 00:17:50.778 "num_base_bdevs_operational": 2, 00:17:50.778 "process": { 00:17:50.778 "type": "rebuild", 00:17:50.778 "target": "spare", 00:17:50.778 "progress": { 00:17:50.778 "blocks": 3840, 00:17:50.778 "percent": 48 00:17:50.778 } 00:17:50.778 }, 00:17:50.778 "base_bdevs_list": [ 00:17:50.778 { 00:17:50.778 "name": "spare", 00:17:50.778 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:50.778 "is_configured": true, 00:17:50.778 "data_offset": 256, 00:17:50.778 "data_size": 7936 00:17:50.778 }, 00:17:50.778 { 00:17:50.778 "name": "BaseBdev2", 00:17:50.778 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:50.778 "is_configured": true, 00:17:50.778 "data_offset": 256, 00:17:50.778 "data_size": 7936 00:17:50.778 } 00:17:50.778 ] 00:17:50.778 }' 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.778 21:54:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.713 21:54:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.972 "name": "raid_bdev1", 00:17:51.972 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:51.972 "strip_size_kb": 0, 00:17:51.972 "state": "online", 00:17:51.972 "raid_level": "raid1", 00:17:51.972 "superblock": true, 00:17:51.972 "num_base_bdevs": 2, 00:17:51.972 "num_base_bdevs_discovered": 2, 00:17:51.972 "num_base_bdevs_operational": 2, 00:17:51.972 "process": { 00:17:51.972 "type": "rebuild", 00:17:51.972 "target": "spare", 00:17:51.972 "progress": { 00:17:51.972 "blocks": 7168, 00:17:51.972 "percent": 90 00:17:51.972 } 00:17:51.972 }, 00:17:51.972 "base_bdevs_list": [ 00:17:51.972 { 00:17:51.972 "name": "spare", 00:17:51.972 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:51.972 "is_configured": true, 00:17:51.972 "data_offset": 256, 00:17:51.972 "data_size": 7936 00:17:51.972 }, 00:17:51.972 { 00:17:51.972 "name": "BaseBdev2", 00:17:51.972 "uuid": 
"082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:51.972 "is_configured": true, 00:17:51.972 "data_offset": 256, 00:17:51.972 "data_size": 7936 00:17:51.972 } 00:17:51.972 ] 00:17:51.972 }' 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.972 21:54:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:17:52.230 [2024-07-15 21:54:07.326666] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:52.230 [2024-07-15 21:54:07.326711] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:52.230 [2024-07-15 21:54:07.326795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.164 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:17:53.164 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.164 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:53.164 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:53.165 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:53.165 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:53.165 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.165 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.422 "name": "raid_bdev1", 00:17:53.422 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:53.422 "strip_size_kb": 0, 00:17:53.422 "state": "online", 00:17:53.422 "raid_level": "raid1", 00:17:53.422 "superblock": true, 00:17:53.422 "num_base_bdevs": 2, 00:17:53.422 "num_base_bdevs_discovered": 2, 00:17:53.422 "num_base_bdevs_operational": 2, 00:17:53.422 "base_bdevs_list": [ 00:17:53.422 { 00:17:53.422 "name": "spare", 00:17:53.422 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:53.422 "is_configured": true, 00:17:53.422 "data_offset": 256, 00:17:53.422 "data_size": 7936 00:17:53.422 }, 00:17:53.422 { 00:17:53.422 "name": "BaseBdev2", 00:17:53.422 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:53.422 "is_configured": true, 00:17:53.422 "data_offset": 256, 00:17:53.422 "data_size": 7936 00:17:53.422 } 00:17:53.422 ] 00:17:53.422 }' 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:53.422 21:54:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.422 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.680 "name": "raid_bdev1", 00:17:53.680 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:53.680 "strip_size_kb": 0, 00:17:53.680 "state": "online", 00:17:53.680 "raid_level": "raid1", 00:17:53.680 "superblock": true, 00:17:53.680 "num_base_bdevs": 2, 00:17:53.680 "num_base_bdevs_discovered": 2, 00:17:53.680 "num_base_bdevs_operational": 2, 00:17:53.680 "base_bdevs_list": [ 00:17:53.680 { 00:17:53.680 "name": "spare", 00:17:53.680 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:53.680 "is_configured": true, 00:17:53.680 "data_offset": 256, 00:17:53.680 "data_size": 7936 00:17:53.680 }, 00:17:53.680 { 00:17:53.680 "name": "BaseBdev2", 00:17:53.680 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:53.680 "is_configured": true, 00:17:53.680 "data_offset": 256, 00:17:53.680 "data_size": 7936 00:17:53.680 } 00:17:53.680 ] 00:17:53.680 }' 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.680 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.681 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.681 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.938 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.938 "name": "raid_bdev1", 00:17:53.938 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:53.938 "strip_size_kb": 0, 00:17:53.938 "state": "online", 00:17:53.938 "raid_level": "raid1", 00:17:53.938 "superblock": true, 00:17:53.938 "num_base_bdevs": 2, 00:17:53.938 "num_base_bdevs_discovered": 2, 00:17:53.938 "num_base_bdevs_operational": 2, 00:17:53.938 "base_bdevs_list": [ 00:17:53.938 { 00:17:53.938 "name": "spare", 00:17:53.938 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:53.938 "is_configured": true, 00:17:53.938 "data_offset": 256, 00:17:53.938 "data_size": 7936 00:17:53.938 }, 00:17:53.938 { 00:17:53.938 "name": "BaseBdev2", 00:17:53.938 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:53.938 "is_configured": true, 00:17:53.938 "data_offset": 256, 00:17:53.938 "data_size": 7936 00:17:53.938 } 00:17:53.938 ] 00:17:53.938 }' 00:17:53.938 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.938 21:54:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.195 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:54.452 [2024-07-15 21:54:09.439828] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.452 [2024-07-15 21:54:09.439868] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.452 [2024-07-15 21:54:09.439918] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.452 [2024-07-15 21:54:09.439975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.452 [2024-07-15 21:54:09.439979] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5d6ea35680 name raid_bdev1, state offline 00:17:54.452 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.452 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:17:54.710 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:17:54.710 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:17:54.710 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:17:54.710 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:17:54.968 21:54:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:55.236 [2024-07-15 21:54:10.196069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.236 [2024-07-15 21:54:10.196125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.236 [2024-07-15 21:54:10.196166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea35400 00:17:55.236 [2024-07-15 21:54:10.196175] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.237 [2024-07-15 21:54:10.197397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.237 [2024-07-15 21:54:10.197442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.237 [2024-07-15 21:54:10.197474] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:55.237 [2024-07-15 21:54:10.197488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.237 [2024-07-15 21:54:10.197513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.237 spare 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.237 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.237 [2024-07-15 21:54:10.297501] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f5d6ea35680 00:17:55.237 [2024-07-15 21:54:10.297520] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:55.237 [2024-07-15 21:54:10.297544] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97e20 00:17:55.237 [2024-07-15 21:54:10.297560] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f5d6ea35680 00:17:55.237 [2024-07-15 21:54:10.297564] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f5d6ea35680 00:17:55.237 [2024-07-15 21:54:10.297617] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.495 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.495 "name": "raid_bdev1", 00:17:55.495 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:55.495 "strip_size_kb": 0, 00:17:55.495 "state": "online", 00:17:55.495 "raid_level": "raid1", 00:17:55.495 "superblock": true, 00:17:55.495 "num_base_bdevs": 2, 00:17:55.495 "num_base_bdevs_discovered": 2, 00:17:55.495 "num_base_bdevs_operational": 2, 00:17:55.495 "base_bdevs_list": [ 00:17:55.495 { 00:17:55.495 "name": "spare", 00:17:55.495 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:55.495 "is_configured": true, 00:17:55.495 "data_offset": 256, 00:17:55.495 "data_size": 7936 00:17:55.495 }, 00:17:55.495 { 00:17:55.495 "name": "BaseBdev2", 00:17:55.495 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:55.495 "is_configured": true, 00:17:55.495 "data_offset": 256, 00:17:55.495 "data_size": 7936 00:17:55.495 } 00:17:55.495 ] 00:17:55.495 }' 00:17:55.495 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.495 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.758 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.758 "name": "raid_bdev1", 00:17:55.758 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:55.758 "strip_size_kb": 0, 00:17:55.758 "state": "online", 00:17:55.758 "raid_level": "raid1", 00:17:55.758 "superblock": true, 00:17:55.758 "num_base_bdevs": 2, 00:17:55.758 "num_base_bdevs_discovered": 2, 00:17:55.758 "num_base_bdevs_operational": 2, 00:17:55.758 "base_bdevs_list": [ 00:17:55.758 { 00:17:55.758 "name": "spare", 00:17:55.758 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:55.758 "is_configured": true, 00:17:55.758 "data_offset": 256, 00:17:55.758 "data_size": 7936 00:17:55.758 }, 00:17:55.758 { 00:17:55.758 "name": "BaseBdev2", 00:17:55.758 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:55.758 "is_configured": true, 00:17:55.758 "data_offset": 256, 00:17:55.758 "data_size": 7936 00:17:55.758 } 00:17:55.758 ] 00:17:55.758 }' 00:17:55.758 21:54:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:56.019 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:56.019 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:56.019 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:56.019 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:56.019 21:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:56.287 [2024-07-15 21:54:11.448599] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.287 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.544 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.544 "name": "raid_bdev1", 00:17:56.544 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:56.544 "strip_size_kb": 0, 00:17:56.544 "state": "online", 00:17:56.544 "raid_level": "raid1", 00:17:56.544 "superblock": true, 00:17:56.544 "num_base_bdevs": 2, 00:17:56.544 "num_base_bdevs_discovered": 1, 00:17:56.544 "num_base_bdevs_operational": 1, 00:17:56.544 "base_bdevs_list": [ 00:17:56.544 { 00:17:56.544 "name": null, 00:17:56.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.544 "is_configured": false, 00:17:56.544 "data_offset": 256, 00:17:56.544 "data_size": 7936 00:17:56.544 }, 
00:17:56.544 { 00:17:56.544 "name": "BaseBdev2", 00:17:56.544 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:56.544 "is_configured": true, 00:17:56.544 "data_offset": 256, 00:17:56.544 "data_size": 7936 00:17:56.544 } 00:17:56.544 ] 00:17:56.544 }' 00:17:56.544 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.544 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.802 21:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.060 [2024-07-15 21:54:12.224890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.060 [2024-07-15 21:54:12.224968] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:57.060 [2024-07-15 21:54:12.224974] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:57.060 [2024-07-15 21:54:12.225034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.060 [2024-07-15 21:54:12.225365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97ec0 00:17:57.060 [2024-07-15 21:54:12.226214] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.060 21:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.437 "name": "raid_bdev1", 00:17:58.437 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:58.437 "strip_size_kb": 0, 00:17:58.437 "state": "online", 00:17:58.437 "raid_level": "raid1", 00:17:58.437 "superblock": true, 00:17:58.437 "num_base_bdevs": 2, 00:17:58.437 "num_base_bdevs_discovered": 2, 00:17:58.437 "num_base_bdevs_operational": 2, 00:17:58.437 "process": { 00:17:58.437 "type": "rebuild", 00:17:58.437 "target": "spare", 00:17:58.437 "progress": { 00:17:58.437 "blocks": 3072, 00:17:58.437 "percent": 38 00:17:58.437 } 00:17:58.437 }, 00:17:58.437 "base_bdevs_list": [ 00:17:58.437 { 00:17:58.437 "name": "spare", 00:17:58.437 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:17:58.437 "is_configured": true, 00:17:58.437 "data_offset": 256, 00:17:58.437 "data_size": 7936 00:17:58.437 }, 00:17:58.437 { 
00:17:58.437 "name": "BaseBdev2", 00:17:58.437 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:58.437 "is_configured": true, 00:17:58.437 "data_offset": 256, 00:17:58.437 "data_size": 7936 00:17:58.437 } 00:17:58.437 ] 00:17:58.437 }' 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.437 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:17:58.695 [2024-07-15 21:54:13.785568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.695 [2024-07-15 21:54:13.837483] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:17:58.695 [2024-07-15 21:54:13.837535] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.695 [2024-07-15 21:54:13.837555] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.695 [2024-07-15 21:54:13.837559] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.695 21:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.953 21:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.953 "name": "raid_bdev1", 00:17:58.953 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:17:58.953 "strip_size_kb": 0, 00:17:58.953 "state": "online", 00:17:58.953 "raid_level": "raid1", 00:17:58.953 "superblock": true, 
00:17:58.953 "num_base_bdevs": 2, 00:17:58.953 "num_base_bdevs_discovered": 1, 00:17:58.953 "num_base_bdevs_operational": 1, 00:17:58.953 "base_bdevs_list": [ 00:17:58.953 { 00:17:58.953 "name": null, 00:17:58.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.953 "is_configured": false, 00:17:58.953 "data_offset": 256, 00:17:58.953 "data_size": 7936 00:17:58.953 }, 00:17:58.953 { 00:17:58.953 "name": "BaseBdev2", 00:17:58.953 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:17:58.953 "is_configured": true, 00:17:58.953 "data_offset": 256, 00:17:58.953 "data_size": 7936 00:17:58.953 } 00:17:58.953 ] 00:17:58.953 }' 00:17:58.953 21:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.953 21:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.211 21:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:59.471 [2024-07-15 21:54:14.566173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:59.471 [2024-07-15 21:54:14.566261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.471 [2024-07-15 21:54:14.566293] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea35400 00:17:59.471 [2024-07-15 21:54:14.566301] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.471 [2024-07-15 21:54:14.566399] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.471 [2024-07-15 21:54:14.566408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:59.471 [2024-07-15 21:54:14.566437] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:59.471 [2024-07-15 21:54:14.566442] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:59.471 [2024-07-15 21:54:14.566461] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:59.471 [2024-07-15 21:54:14.566472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.471 [2024-07-15 21:54:14.566750] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d6ea97e20 00:17:59.471 [2024-07-15 21:54:14.567680] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.471 spare 00:17:59.471 21:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:00.849 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.850 "name": "raid_bdev1", 00:18:00.850 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:00.850 "strip_size_kb": 0, 00:18:00.850 "state": "online", 00:18:00.850 "raid_level": "raid1", 00:18:00.850 "superblock": true, 00:18:00.850 "num_base_bdevs": 2, 00:18:00.850 "num_base_bdevs_discovered": 2, 00:18:00.850 "num_base_bdevs_operational": 2, 00:18:00.850 "process": { 00:18:00.850 "type": "rebuild", 00:18:00.850 "target": "spare", 00:18:00.850 "progress": { 00:18:00.850 "blocks": 3328, 00:18:00.850 "percent": 41 00:18:00.850 } 00:18:00.850 }, 00:18:00.850 "base_bdevs_list": [ 00:18:00.850 { 00:18:00.850 "name": "spare", 00:18:00.850 "uuid": "659206eb-6855-475a-b624-bd33d522bae5", 00:18:00.850 "is_configured": true, 00:18:00.850 "data_offset": 256, 00:18:00.850 "data_size": 7936 00:18:00.850 }, 00:18:00.850 { 00:18:00.850 "name": "BaseBdev2", 00:18:00.850 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:00.850 "is_configured": true, 00:18:00.850 "data_offset": 256, 00:18:00.850 "data_size": 7936 00:18:00.850 } 00:18:00.850 ] 00:18:00.850 }' 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.850 21:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:01.109 [2024-07-15 21:54:16.137367] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.109 [2024-07-15 21:54:16.178733] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:01.109 [2024-07-15 21:54:16.178778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.109 [2024-07-15 21:54:16.178783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.109 [2024-07-15 21:54:16.178787] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.109 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.368 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.368 "name": "raid_bdev1", 00:18:01.368 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:01.368 "strip_size_kb": 0, 00:18:01.368 "state": "online", 00:18:01.368 "raid_level": "raid1", 00:18:01.368 "superblock": true, 00:18:01.368 "num_base_bdevs": 2, 00:18:01.368 "num_base_bdevs_discovered": 1, 00:18:01.368 "num_base_bdevs_operational": 1, 00:18:01.368 "base_bdevs_list": [ 00:18:01.368 { 00:18:01.368 "name": null, 00:18:01.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.368 "is_configured": false, 00:18:01.368 "data_offset": 256, 00:18:01.368 "data_size": 7936 00:18:01.368 }, 00:18:01.368 { 00:18:01.368 "name": "BaseBdev2", 00:18:01.368 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:01.368 "is_configured": true, 00:18:01.368 "data_offset": 256, 00:18:01.368 "data_size": 7936 00:18:01.368 } 00:18:01.368 ] 00:18:01.368 }' 00:18:01.368 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.368 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.628 21:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.887 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.887 "name": "raid_bdev1", 00:18:01.887 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:01.887 "strip_size_kb": 0, 00:18:01.887 "state": "online", 00:18:01.887 "raid_level": "raid1", 00:18:01.887 "superblock": true, 00:18:01.887 "num_base_bdevs": 2, 00:18:01.887 "num_base_bdevs_discovered": 1, 00:18:01.887 "num_base_bdevs_operational": 1, 00:18:01.887 "base_bdevs_list": [ 00:18:01.887 { 00:18:01.887 "name": null, 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "is_configured": false, 00:18:01.887 "data_offset": 256, 00:18:01.887 "data_size": 7936 00:18:01.887 }, 00:18:01.887 { 00:18:01.887 "name": "BaseBdev2", 00:18:01.887 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:01.887 "is_configured": true, 00:18:01.887 "data_offset": 256, 00:18:01.887 "data_size": 7936 00:18:01.887 } 00:18:01.887 ] 00:18:01.887 }' 00:18:01.887 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:02.145 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:02.145 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:02.145 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:02.145 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:02.404 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.404 [2024-07-15 21:54:17.571757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.404 [2024-07-15 21:54:17.571832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.404 [2024-07-15 21:54:17.571866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d6ea34780 00:18:02.404 [2024-07-15 21:54:17.571889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.404 [2024-07-15 21:54:17.571981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.404 [2024-07-15 21:54:17.571994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.404 [2024-07-15 21:54:17.572013] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:02.404 [2024-07-15 21:54:17.572018] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:02.404 [2024-07-15 21:54:17.572021] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:02.404 BaseBdev1 00:18:02.404 21:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.792 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.792 "name": "raid_bdev1", 00:18:03.792 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:03.793 "strip_size_kb": 0, 00:18:03.793 "state": "online", 00:18:03.793 "raid_level": "raid1", 00:18:03.793 "superblock": true, 00:18:03.793 "num_base_bdevs": 2, 00:18:03.793 "num_base_bdevs_discovered": 1, 00:18:03.793 "num_base_bdevs_operational": 1, 00:18:03.793 "base_bdevs_list": [ 00:18:03.793 { 00:18:03.793 "name": null, 00:18:03.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.793 "is_configured": false, 00:18:03.793 "data_offset": 256, 00:18:03.793 "data_size": 7936 00:18:03.793 }, 00:18:03.793 { 00:18:03.793 "name": "BaseBdev2", 00:18:03.793 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:03.793 "is_configured": true, 00:18:03.793 "data_offset": 256, 00:18:03.793 "data_size": 7936 00:18:03.793 } 00:18:03.793 ] 00:18:03.793 }' 00:18:03.793 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.793 21:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.064 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.323 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.323 "name": "raid_bdev1", 00:18:04.323 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:04.323 "strip_size_kb": 0, 00:18:04.323 "state": "online", 00:18:04.323 "raid_level": "raid1", 00:18:04.323 "superblock": true, 00:18:04.323 "num_base_bdevs": 2, 00:18:04.323 "num_base_bdevs_discovered": 1, 00:18:04.323 "num_base_bdevs_operational": 1, 00:18:04.323 "base_bdevs_list": [ 00:18:04.323 { 00:18:04.323 "name": null, 00:18:04.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.323 "is_configured": false, 00:18:04.323 "data_offset": 256, 00:18:04.323 "data_size": 7936 00:18:04.323 }, 00:18:04.323 { 00:18:04.323 "name": "BaseBdev2", 00:18:04.323 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:04.323 "is_configured": true, 00:18:04.323 "data_offset": 256, 00:18:04.323 "data_size": 7936 00:18:04.323 } 00:18:04.323 ] 00:18:04.323 }' 00:18:04.323 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # local es=0 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@630 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" 
in 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:04.582 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.870 [2024-07-15 21:54:19.772035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.870 [2024-07-15 21:54:19.772122] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.870 [2024-07-15 21:54:19.772128] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:04.870 request: 00:18:04.870 { 00:18:04.870 "base_bdev": "BaseBdev1", 00:18:04.870 "raid_bdev": "raid_bdev1", 00:18:04.870 "method": "bdev_raid_add_base_bdev", 00:18:04.870 "req_id": 1 00:18:04.870 } 00:18:04.870 Got JSON-RPC error response 00:18:04.870 response: 00:18:04.870 { 00:18:04.870 "code": -22, 00:18:04.870 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:04.870 } 00:18:04.870 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@645 -- # es=1 00:18:04.870 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:04.870 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:04.870 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:04.870 21:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.808 21:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:06.068 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.068 "name": "raid_bdev1", 00:18:06.068 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:06.068 "strip_size_kb": 0, 00:18:06.068 "state": "online", 00:18:06.068 "raid_level": "raid1", 00:18:06.068 "superblock": true, 00:18:06.068 "num_base_bdevs": 2, 00:18:06.068 "num_base_bdevs_discovered": 1, 00:18:06.068 "num_base_bdevs_operational": 1, 00:18:06.068 "base_bdevs_list": [ 00:18:06.068 { 00:18:06.068 "name": null, 00:18:06.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.068 "is_configured": false, 00:18:06.068 "data_offset": 256, 00:18:06.068 "data_size": 7936 00:18:06.068 }, 00:18:06.068 { 00:18:06.068 "name": "BaseBdev2", 00:18:06.068 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:06.068 "is_configured": true, 00:18:06.068 "data_offset": 256, 00:18:06.068 "data_size": 7936 00:18:06.068 } 00:18:06.068 ] 00:18:06.068 }' 00:18:06.068 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.068 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.325 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.325 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:06.326 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:06.326 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:06.326 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:06.326 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.326 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.584 "name": "raid_bdev1", 00:18:06.584 "uuid": "bdcdecc5-42f4-11ef-9f7f-e9a656123a8b", 00:18:06.584 "strip_size_kb": 0, 00:18:06.584 "state": "online", 00:18:06.584 "raid_level": "raid1", 00:18:06.584 "superblock": true, 00:18:06.584 "num_base_bdevs": 2, 00:18:06.584 "num_base_bdevs_discovered": 1, 00:18:06.584 "num_base_bdevs_operational": 1, 00:18:06.584 "base_bdevs_list": [ 00:18:06.584 { 00:18:06.584 "name": null, 00:18:06.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.584 "is_configured": false, 00:18:06.584 "data_offset": 256, 00:18:06.584 "data_size": 7936 00:18:06.584 }, 00:18:06.584 { 00:18:06.584 "name": "BaseBdev2", 00:18:06.584 "uuid": "082fb6fa-c776-0d56-9963-b0d9b66732f0", 00:18:06.584 "is_configured": true, 00:18:06.584 "data_offset": 256, 00:18:06.584 "data_size": 7936 00:18:06.584 } 00:18:06.584 ] 00:18:06.584 }' 00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67425
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@942 -- # '[' -z 67425 ']'
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # kill -0 67425
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@947 -- # uname
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']'
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # ps -c -o command 67425
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # tail -1
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # process_name=bdevperf
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' bdevperf = sudo ']'
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # echo 'killing process with pid 67425'
00:18:06.584 killing process with pid 67425
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@961 -- # kill 67425
00:18:06.584 Received shutdown signal, test time was about 60.000000 seconds
00:18:06.584
00:18:06.584 Latency(us)
00:18:06.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:06.584 ===================================================================================================================
00:18:06.584 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:06.584 [2024-07-15 21:54:21.682244] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:06.584 [2024-07-15 21:54:21.682275] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:06.584 [2024-07-15 21:54:21.682294] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:06.584 [2024-07-15 21:54:21.682298] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5d6ea35680 name raid_bdev1, state offline
00:18:06.584 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # wait 67425
00:18:06.584 [2024-07-15 21:54:21.700776] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:06.842 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0
00:18:06.842
00:18:06.842 real 0m25.151s
00:18:06.842 user 0m38.268s
00:18:06.842 sys 0m2.747s
00:18:06.842 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1118 -- # xtrace_disable
00:18:06.842 21:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:06.842 ************************************
00:18:06.842 END TEST raid_rebuild_test_sb_md_interleaved
00:18:06.842 ************************************
00:18:06.842 21:54:21 bdev_raid -- common/autotest_common.sh@1136 -- # return 0
00:18:06.842 21:54:21 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT
00:18:06.842 21:54:21 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup
00:18:06.842 21:54:21 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 67425 ']'
00:18:06.842 21:54:21 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67425
00:18:06.842 21:54:21 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest
00:18:06.842
00:18:06.842 real 11m10.590s
00:18:06.842 user 19m21.884s
00:18:06.842 sys 1m48.121s
00:18:06.842 21:54:21 bdev_raid -- common/autotest_common.sh@1118 -- # xtrace_disable
00:18:06.842 21:54:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:06.842 ************************************
00:18:06.842 END TEST bdev_raid
00:18:06.842 ************************************
00:18:06.842 21:54:21 -- common/autotest_common.sh@1136 -- # return 0
00:18:06.842 21:54:21 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:18:06.842 21:54:21 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:18:06.842 21:54:21 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:18:06.842 21:54:21 -- common/autotest_common.sh@10 -- # set +x
00:18:06.842 ************************************
00:18:06.842 START TEST bdevperf_config
00:18:06.842 ************************************
00:18:06.842 21:54:21 bdevperf_config -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:18:07.101 * Looking for test storage...
00:18:07.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:18:07.101
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:18:07.101
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:18:07.101 21:54:22 bdevperf_config --
bdevperf/test_config.sh@19 -- # create_job job1 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:07.101 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:07.101 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:07.101 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:07.101 21:54:22 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 21:54:22.170427] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:10.388 [2024-07-15 21:54:22.170818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:10.388 Using job config with 4 jobs 00:18:10.388 EAL: TSC is not safe to use in SMP mode 00:18:10.388 EAL: TSC is not invariant 00:18:10.388 [2024-07-15 21:54:22.750820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.388 [2024-07-15 21:54:22.834649] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:10.388 [2024-07-15 21:54:22.837151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.388 cpumask for '\''job0'\'' is too big 00:18:10.388 cpumask for '\''job1'\'' is too big 00:18:10.388 cpumask for '\''job2'\'' is too big 00:18:10.388 cpumask for '\''job3'\'' is too big 00:18:10.388 Running I/O for 2 seconds... 
00:18:10.388 00:18:10.388 Latency(us) 00:18:10.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371833.40 363.12 0.00 0.00 688.24 215.97 1727.77 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371859.68 363.14 0.00 0.00 688.03 205.73 1437.32 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371895.24 363.18 0.00 0.00 687.77 175.94 1139.43 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371956.80 363.24 0.00 0.00 687.50 107.52 1087.30 00:18:10.388 =================================================================================================================== 00:18:10.388 Total : 1487545.12 1452.68 0.00 0.00 687.88 107.52 1727.77' 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 21:54:22.170427] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:10.388 [2024-07-15 21:54:22.170818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:10.388 Using job config with 4 jobs 00:18:10.388 EAL: TSC is not safe to use in SMP mode 00:18:10.388 EAL: TSC is not invariant 00:18:10.388 [2024-07-15 21:54:22.750820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.388 [2024-07-15 21:54:22.834649] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:10.388 [2024-07-15 21:54:22.837151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.388 cpumask for '\''job0'\'' is too big 00:18:10.388 cpumask for '\''job1'\'' is too big 00:18:10.388 cpumask for '\''job2'\'' is too big 00:18:10.388 cpumask for '\''job3'\'' is too big 00:18:10.388 Running I/O for 2 seconds... 00:18:10.388 00:18:10.388 Latency(us) 00:18:10.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371833.40 363.12 0.00 0.00 688.24 215.97 1727.77 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371859.68 363.14 0.00 0.00 688.03 205.73 1437.32 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371895.24 363.18 0.00 0.00 687.77 175.94 1139.43 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371956.80 363.24 0.00 0.00 687.50 107.52 1087.30 00:18:10.388 =================================================================================================================== 00:18:10.388 Total : 1487545.12 1452.68 0.00 0.00 687.88 107.52 1727.77' 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 21:54:22.170427] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:18:10.388 [2024-07-15 21:54:22.170818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:10.388 Using job config with 4 jobs 00:18:10.388 EAL: TSC is not safe to use in SMP mode 00:18:10.388 EAL: TSC is not invariant 00:18:10.388 [2024-07-15 21:54:22.750820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.388 [2024-07-15 21:54:22.834649] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:10.388 [2024-07-15 21:54:22.837151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.388 cpumask for '\''job0'\'' is too big 00:18:10.388 cpumask for '\''job1'\'' is too big 00:18:10.388 cpumask for '\''job2'\'' is too big 00:18:10.388 cpumask for '\''job3'\'' is too big 00:18:10.388 Running I/O for 2 seconds... 00:18:10.388 00:18:10.388 Latency(us) 00:18:10.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371833.40 363.12 0.00 0.00 688.24 215.97 1727.77 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371859.68 363.14 0.00 0.00 688.03 205.73 1437.32 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371895.24 363.18 0.00 0.00 687.77 175.94 1139.43 00:18:10.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:10.388 Malloc0 : 2.00 371956.80 363.24 0.00 0.00 687.50 107.52 1087.30 00:18:10.388 =================================================================================================================== 00:18:10.388 Total : 1487545.12 1452.68 0.00 0.00 687.88 107.52 1727.77' 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:10.388 21:54:25 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:10.388 [2024-07-15 21:54:25.066398] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:10.388 [2024-07-15 21:54:25.066669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:10.647 EAL: TSC is not safe to use in SMP mode 00:18:10.647 EAL: TSC is not invariant 00:18:10.647 [2024-07-15 21:54:25.596888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.647 [2024-07-15 21:54:25.675683] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:10.647 [2024-07-15 21:54:25.678208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.647 cpumask for 'job0' is too big 00:18:10.647 cpumask for 'job1' is too big 00:18:10.647 cpumask for 'job2' is too big 00:18:10.647 cpumask for 'job3' is too big 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:13.236 Running I/O for 2 seconds... 
00:18:13.236 00:18:13.236 Latency(us) 00:18:13.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.236 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:13.236 Malloc0 : 2.00 376527.97 367.70 0.00 0.00 679.65 193.63 1556.48 00:18:13.236 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:13.236 Malloc0 : 2.00 376533.79 367.71 0.00 0.00 679.49 219.69 1333.06 00:18:13.236 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:13.236 Malloc0 : 2.00 376512.91 367.69 0.00 0.00 679.37 171.29 1072.41 00:18:13.236 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:13.236 Malloc0 : 2.00 376558.78 367.73 0.00 0.00 679.15 101.93 975.59 00:18:13.236 =================================================================================================================== 00:18:13.236 Total : 1506133.44 1470.83 0.00 0.00 679.41 101.93 1556.48' 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:13.236 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:13.236 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:13.236 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:13.236 21:54:27 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
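For reference, the [[ 4 == \4 ]] style checks in this suite come from get_num_jobs in bdevperf/common.sh: the captured run output is echoed through the two greps traced above to pull out the advertised job count. A minimal standalone sketch of that pipeline (the sample string is an assumed stand-in for a real capture):

  # Mirror of the common.sh@32 grep pipeline: extract the job count
  # from bdevperf's "Using job config with N jobs" banner.
  get_num_jobs() {
      echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }

  output='Using job config with 4 jobs'   # assumed sample capture
  [[ "$(get_num_jobs "$output")" == 4 ]] && echo "job count OK"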
00:18:15.774 21:54:30 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 21:54:27.919857] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:15.774 [2024-07-15 21:54:27.920152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:15.774 Using job config with 3 jobs 00:18:15.774 EAL: TSC is not safe to use in SMP mode 00:18:15.774 EAL: TSC is not invariant 00:18:15.774 [2024-07-15 21:54:28.449367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.774 [2024-07-15 21:54:28.527381] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:15.774 [2024-07-15 21:54:28.529948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.774 cpumask for '\''job0'\'' is too big 00:18:15.774 cpumask for '\''job1'\'' is too big 00:18:15.774 cpumask for '\''job2'\'' is too big 00:18:15.774 Running I/O for 2 seconds... 00:18:15.774 00:18:15.775 Latency(us) 00:18:15.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471692.84 460.64 0.00 0.00 542.51 227.14 942.08 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471677.29 460.62 0.00 0.00 542.40 167.56 860.16 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471746.04 460.69 0.00 0.00 542.21 54.46 804.31 00:18:15.775 =================================================================================================================== 00:18:15.775 Total : 1415116.17 1381.95 0.00 0.00 542.38 54.46 942.08' 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 21:54:27.919857] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:15.775 [2024-07-15 21:54:27.920152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:15.775 Using job config with 3 jobs 00:18:15.775 EAL: TSC is not safe to use in SMP mode 00:18:15.775 EAL: TSC is not invariant 00:18:15.775 [2024-07-15 21:54:28.449367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.775 [2024-07-15 21:54:28.527381] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:15.775 [2024-07-15 21:54:28.529948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.775 cpumask for '\''job0'\'' is too big 00:18:15.775 cpumask for '\''job1'\'' is too big 00:18:15.775 cpumask for '\''job2'\'' is too big 00:18:15.775 Running I/O for 2 seconds... 
00:18:15.775 00:18:15.775 Latency(us) 00:18:15.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471692.84 460.64 0.00 0.00 542.51 227.14 942.08 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471677.29 460.62 0.00 0.00 542.40 167.56 860.16 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471746.04 460.69 0.00 0.00 542.21 54.46 804.31 00:18:15.775 =================================================================================================================== 00:18:15.775 Total : 1415116.17 1381.95 0.00 0.00 542.38 54.46 942.08' 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 21:54:27.919857] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:15.775 [2024-07-15 21:54:27.920152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:15.775 Using job config with 3 jobs 00:18:15.775 EAL: TSC is not safe to use in SMP mode 00:18:15.775 EAL: TSC is not invariant 00:18:15.775 [2024-07-15 21:54:28.449367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.775 [2024-07-15 21:54:28.527381] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:15.775 [2024-07-15 21:54:28.529948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.775 cpumask for '\''job0'\'' is too big 00:18:15.775 cpumask for '\''job1'\'' is too big 00:18:15.775 cpumask for '\''job2'\'' is too big 00:18:15.775 Running I/O for 2 seconds... 
00:18:15.775 00:18:15.775 Latency(us) 00:18:15.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471692.84 460.64 0.00 0.00 542.51 227.14 942.08 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471677.29 460.62 0.00 0.00 542.40 167.56 860.16 00:18:15.775 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:15.775 Malloc0 : 2.00 471746.04 460.69 0.00 0.00 542.21 54.46 804.31 00:18:15.775 =================================================================================================================== 00:18:15.775 Total : 1415116.17 1381.95 0.00 0.00 542.38 54.46 942.08' 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:15.775 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:15.775 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:15.775 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:18:15.775 
21:54:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:15.775 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:15.775 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:15.775 21:54:30 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:19.059 21:54:33 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 21:54:30.789278] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:19.059 [2024-07-15 21:54:30.789502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:19.059 Using job config with 4 jobs 00:18:19.059 EAL: TSC is not safe to use in SMP mode 00:18:19.059 EAL: TSC is not invariant 00:18:19.060 [2024-07-15 21:54:31.371722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.060 [2024-07-15 21:54:31.447860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:19.060 [2024-07-15 21:54:31.450365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.060 cpumask for '\''job0'\'' is too big 00:18:19.060 cpumask for '\''job1'\'' is too big 00:18:19.060 cpumask for '\''job2'\'' is too big 00:18:19.060 cpumask for '\''job3'\'' is too big 00:18:19.060 Running I/O for 2 seconds... 
00:18:19.060 00:18:19.060 Latency(us) 00:18:19.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175019.70 170.92 0.00 0.00 1462.36 506.41 3142.75 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175009.91 170.91 0.00 0.00 1462.22 465.45 3083.17 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175000.12 170.90 0.00 0.00 1461.79 426.36 2546.97 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175020.98 170.92 0.00 0.00 1461.46 355.61 2546.97 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175011.46 170.91 0.00 0.00 1461.11 467.32 2025.66 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174998.63 170.90 0.00 0.00 1460.99 385.40 2010.77 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 174986.14 170.88 0.00 0.00 1460.59 461.73 2040.55 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174972.35 170.87 0.00 0.00 1460.57 366.78 2085.24 00:18:19.060 =================================================================================================================== 00:18:19.060 Total : 1400019.31 1367.21 0.00 0.00 1461.39 355.61 3142.75' 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 21:54:30.789278] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:19.060 [2024-07-15 21:54:30.789502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:19.060 Using job config with 4 jobs 00:18:19.060 EAL: TSC is not safe to use in SMP mode 00:18:19.060 EAL: TSC is not invariant 00:18:19.060 [2024-07-15 21:54:31.371722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.060 [2024-07-15 21:54:31.447860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:19.060 [2024-07-15 21:54:31.450365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.060 cpumask for '\''job0'\'' is too big 00:18:19.060 cpumask for '\''job1'\'' is too big 00:18:19.060 cpumask for '\''job2'\'' is too big 00:18:19.060 cpumask for '\''job3'\'' is too big 00:18:19.060 Running I/O for 2 seconds... 
00:18:19.060 00:18:19.060 Latency(us) 00:18:19.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175019.70 170.92 0.00 0.00 1462.36 506.41 3142.75 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175009.91 170.91 0.00 0.00 1462.22 465.45 3083.17 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175000.12 170.90 0.00 0.00 1461.79 426.36 2546.97 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175020.98 170.92 0.00 0.00 1461.46 355.61 2546.97 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175011.46 170.91 0.00 0.00 1461.11 467.32 2025.66 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174998.63 170.90 0.00 0.00 1460.99 385.40 2010.77 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 174986.14 170.88 0.00 0.00 1460.59 461.73 2040.55 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174972.35 170.87 0.00 0.00 1460.57 366.78 2085.24 00:18:19.060 =================================================================================================================== 00:18:19.060 Total : 1400019.31 1367.21 0.00 0.00 1461.39 355.61 3142.75' 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 21:54:30.789278] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:19.060 [2024-07-15 21:54:30.789502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:19.060 Using job config with 4 jobs 00:18:19.060 EAL: TSC is not safe to use in SMP mode 00:18:19.060 EAL: TSC is not invariant 00:18:19.060 [2024-07-15 21:54:31.371722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.060 [2024-07-15 21:54:31.447860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:19.060 [2024-07-15 21:54:31.450365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.060 cpumask for '\''job0'\'' is too big 00:18:19.060 cpumask for '\''job1'\'' is too big 00:18:19.060 cpumask for '\''job2'\'' is too big 00:18:19.060 cpumask for '\''job3'\'' is too big 00:18:19.060 Running I/O for 2 seconds... 
00:18:19.060 00:18:19.060 Latency(us) 00:18:19.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175019.70 170.92 0.00 0.00 1462.36 506.41 3142.75 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175009.91 170.91 0.00 0.00 1462.22 465.45 3083.17 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175000.12 170.90 0.00 0.00 1461.79 426.36 2546.97 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 175020.98 170.92 0.00 0.00 1461.46 355.61 2546.97 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 175011.46 170.91 0.00 0.00 1461.11 467.32 2025.66 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174998.63 170.90 0.00 0.00 1460.99 385.40 2010.77 00:18:19.060 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc0 : 2.00 174986.14 170.88 0.00 0.00 1460.59 461.73 2040.55 00:18:19.060 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:19.060 Malloc1 : 2.00 174972.35 170.87 0.00 0.00 1460.57 366.78 2085.24 00:18:19.060 =================================================================================================================== 00:18:19.060 Total : 1400019.31 1367.21 0.00 0.00 1461.39 355.61 3142.75' 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:19.060 21:54:33 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:19.060 00:18:19.060 real 0m11.713s 00:18:19.060 user 0m9.177s 00:18:19.060 sys 0m2.574s 00:18:19.060 21:54:33 bdevperf_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:19.060 ************************************ 00:18:19.060 END TEST bdevperf_config 00:18:19.060 ************************************ 00:18:19.060 21:54:33 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 21:54:33 -- common/autotest_common.sh@1136 -- # return 0 00:18:19.060 21:54:33 -- spdk/autotest.sh@192 -- # uname -s 00:18:19.060 21:54:33 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:18:19.060 21:54:33 -- spdk/autotest.sh@198 -- # uname -s 00:18:19.060 21:54:33 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:18:19.060 21:54:33 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:18:19.060 21:54:33 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:19.060 21:54:33 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:19.060 21:54:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:19.060 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 
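The create_job calls traced throughout this suite assemble test.conf one INI section at a time from the job_section/rw/filename locals shown in the xtrace; the [global] section additionally gets defaults cat'ed in from common.sh. A hand-rolled equivalent of the four-job read config used at the start of the suite might look like the sketch below (the key names and the [global] defaults are assumed, not copied from common.sh):

  cat > /tmp/test.conf <<'EOF'
  [global]
  rw=read
  filename=Malloc0

  [job0]
  [job1]
  [job2]
  [job3]
  EOF

  # Bdev definitions come from conf.json, job shape from the file above:
  ./build/examples/bdevperf -t 2 \
      --json test/bdev/bdevperf/conf.json -j /tmp/test.conf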
************************************ 00:18:19.060 START TEST blockdev_nvme 00:18:19.060 ************************************ 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:19.060 * Looking for test storage... 00:18:19.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:19.060 21:54:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68165 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:19.060 21:54:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68165 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@823 -- # '[' -z 68165 ']' 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:19.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
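waitforlisten, invoked above right after spdk_tgt is launched, blocks the test until the target answers on /var/tmp/spdk.sock. A rough equivalent of that wait loop, assuming the repo's scripts/rpc.py and the default socket path (the retry count and interval here are arbitrary, not taken from autotest_common.sh):

  spdk_tgt_pid=$!                          # spdk_tgt started in the background
  for _ in $(seq 1 100); do
      # rpc_get_methods only succeeds once the target is serving RPCs
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$spdk_tgt_pid" || exit 1    # bail out if the target died
      sleep 0.1
  done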
00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:19.060 21:54:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:19.060 [2024-07-15 21:54:33.901849] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:19.060 [2024-07-15 21:54:33.902111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:19.318 EAL: TSC is not safe to use in SMP mode 00:18:19.318 EAL: TSC is not invariant 00:18:19.318 [2024-07-15 21:54:34.496541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.574 [2024-07-15 21:54:34.568437] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:19.575 [2024-07-15 21:54:34.570944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.831 21:54:34 blockdev_nvme -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:19.831 21:54:34 blockdev_nvme -- common/autotest_common.sh@856 -- # return 0 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:19.831 21:54:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:18:19.831 21:54:34 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.831 21:54:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:19.831 [2024-07-15 21:54:35.001191] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.089 21:54:35 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d2f56efa-42f4-11ef-9f7f-e9a656123a8b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d2f56efa-42f4-11ef-9f7f-e9a656123a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:20.089 21:54:35 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 68165 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@942 -- # '[' -z 68165 ']' 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@946 -- # kill -0 68165 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@947 -- # uname 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@950 -- # ps -c -o command 68165 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@950 -- # tail -1 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:18:20.089 killing process with pid 68165 
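The bdev dump above is assembled by pairing bdev_get_bdevs with the two jq passes traced at blockdev.sh@748 and @749: one keeps only bdevs not claimed by another module, the next pulls out their names. Run standalone against the same target, that reduces to a single pipeline (socket path assumed to be the default):

  # Names of all unclaimed bdevs; yields Nvme0n1 for the controller attached above
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name'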
00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@960 -- # echo 'killing process with pid 68165' 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@961 -- # kill 68165 00:18:20.089 21:54:35 blockdev_nvme -- common/autotest_common.sh@966 -- # wait 68165 00:18:20.347 21:54:35 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:20.347 21:54:35 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:20.347 21:54:35 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:18:20.347 21:54:35 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:20.347 21:54:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.347 ************************************ 00:18:20.347 START TEST bdev_hello_world 00:18:20.347 ************************************ 00:18:20.347 21:54:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:20.347 [2024-07-15 21:54:35.408742] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:20.347 [2024-07-15 21:54:35.408901] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:20.912 EAL: TSC is not safe to use in SMP mode 00:18:20.912 EAL: TSC is not invariant 00:18:20.912 [2024-07-15 21:54:35.879296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.912 [2024-07-15 21:54:35.952524] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:20.912 [2024-07-15 21:54:35.954967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.912 [2024-07-15 21:54:36.013057] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:20.912 [2024-07-15 21:54:36.086182] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:20.912 [2024-07-15 21:54:36.086219] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:20.912 [2024-07-15 21:54:36.086231] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:20.912 [2024-07-15 21:54:36.086908] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:20.912 [2024-07-15 21:54:36.087329] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:20.912 [2024-07-15 21:54:36.087382] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:20.912 [2024-07-15 21:54:36.087503] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:20.912 00:18:20.912 [2024-07-15 21:54:36.087526] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:21.170 00:18:21.170 real 0m0.853s 00:18:21.170 user 0m0.356s 00:18:21.170 sys 0m0.495s 00:18:21.170 21:54:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:21.170 ************************************ 00:18:21.170 END TEST bdev_hello_world 00:18:21.170 ************************************ 00:18:21.170 21:54:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 21:54:36 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:21.170 21:54:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:21.170 21:54:36 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:21.170 21:54:36 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:21.170 21:54:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 ************************************ 00:18:21.170 START TEST bdev_bounds 00:18:21.170 ************************************ 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1117 -- # bdev_bounds '' 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68232 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:21.170 Process bdevio pid: 68232 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68232' 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68232 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@823 -- # '[' -z 68232 ']' 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:21.170 21:54:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:21.170 [2024-07-15 21:54:36.313935] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:21.170 [2024-07-15 21:54:36.314202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:21.735 EAL: TSC is not safe to use in SMP mode 00:18:21.735 EAL: TSC is not invariant 00:18:21.735 [2024-07-15 21:54:36.816402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.735 [2024-07-15 21:54:36.890593] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:21.735 [2024-07-15 21:54:36.890650] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:18:21.735 [2024-07-15 21:54:36.890675] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:18:21.735 [2024-07-15 21:54:36.894449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.735 [2024-07-15 21:54:36.894358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.735 [2024-07-15 21:54:36.894440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.992 [2024-07-15 21:54:36.951342] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:22.250 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:22.250 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # return 0 00:18:22.250 21:54:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:22.251 I/O targets: 00:18:22.251 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:22.251 00:18:22.251 00:18:22.251 CUnit - A unit testing framework for C - Version 2.1-3 00:18:22.251 http://cunit.sourceforge.net/ 00:18:22.251 00:18:22.251 00:18:22.251 Suite: bdevio tests on: Nvme0n1 00:18:22.251 Test: blockdev write read block ...passed 00:18:22.251 Test: blockdev write zeroes read block ...passed 00:18:22.251 Test: blockdev write zeroes read no split ...passed 00:18:22.251 Test: blockdev write zeroes read split ...passed 00:18:22.251 Test: blockdev write zeroes read split partial ...passed 00:18:22.251 Test: blockdev reset ...[2024-07-15 21:54:37.425942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:22.251 [2024-07-15 21:54:37.427559] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
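The bdevio suite running here is split across two processes, as the trace above shows: bdevio itself is started with -w so it parks after init and waits for an RPC, and tests.py perform_tests then kicks off the test run over that socket. Sketched out, with the memory size and JSON path copied from the trace and the socket-wait step elided:

  ./test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
  # ... wait for /var/tmp/spdk.sock to come up, as with waitforlisten above ...
  ./test/bdev/bdevio/tests.py perform_tests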
00:18:22.251 passed 00:18:22.251 Test: blockdev write read 8 blocks ...passed 00:18:22.251 Test: blockdev write read size > 128k ...passed 00:18:22.251 Test: blockdev write read invalid size ...passed 00:18:22.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:22.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:22.251 Test: blockdev write read max offset ...passed 00:18:22.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:22.251 Test: blockdev writev readv 8 blocks ...passed 00:18:22.251 Test: blockdev writev readv 30 x 1block ...passed 00:18:22.251 Test: blockdev writev readv block ...passed 00:18:22.251 Test: blockdev writev readv size > 128k ...passed 00:18:22.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:22.251 Test: blockdev comparev and writev ...[2024-07-15 21:54:37.431850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x217945000 len:0x1000 00:18:22.251 [2024-07-15 21:54:37.431887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:22.251 passed 00:18:22.251 Test: blockdev nvme passthru rw ...passed 00:18:22.251 Test: blockdev nvme passthru vendor specific ...passed 00:18:22.251 Test: blockdev nvme admin passthru ...[2024-07-15 21:54:37.432505] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:22.251 [2024-07-15 21:54:37.432526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:22.251 passed 00:18:22.251 Test: blockdev copy ...passed 00:18:22.251 00:18:22.251 Run Summary: Type Total Ran Passed Failed Inactive 00:18:22.251 suites 1 1 n/a 0 0 00:18:22.251 tests 23 23 23 0 0 00:18:22.251 asserts 152 152 152 0 n/a 00:18:22.251 00:18:22.251 Elapsed time = 0.047 seconds 00:18:22.509 0 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@942 -- # '[' -z 68232 ']' 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # kill -0 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@947 -- # uname 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # ps -c -o command 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # tail -1 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # process_name=bdevio 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' bdevio = sudo ']' 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # echo 'killing process with pid 68232' 00:18:22.509 killing process with pid 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@961 -- # kill 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # wait 68232 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:22.509 00:18:22.509 real 0m1.318s 00:18:22.509 user 0m2.542s 00:18:22.509 sys 0m0.607s 00:18:22.509 
************************************ 00:18:22.509 END TEST bdev_bounds 00:18:22.509 ************************************ 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:22.509 21:54:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:22.509 21:54:37 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:22.509 21:54:37 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:22.509 21:54:37 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:18:22.509 21:54:37 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:22.509 21:54:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.509 ************************************ 00:18:22.509 START TEST bdev_nbd 00:18:22.509 ************************************ 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1117 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:18:22.509 00:18:22.509 real 0m0.005s 00:18:22.509 user 0m0.006s 00:18:22.509 sys 0m0.000s 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:22.509 21:54:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:22.509 ************************************ 00:18:22.509 END TEST bdev_nbd 00:18:22.509 ************************************ 00:18:22.768 21:54:37 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:22.768 21:54:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:22.768 21:54:37 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:18:22.768 skipping fio tests on NVMe due to multi-ns failures. 00:18:22.768 21:54:37 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:22.768 21:54:37 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:22.768 21:54:37 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:22.768 21:54:37 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 16 -le 1 ']' 00:18:22.768 21:54:37 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:22.768 21:54:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.768 ************************************ 00:18:22.768 START TEST bdev_verify 00:18:22.768 ************************************ 00:18:22.768 21:54:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:22.768 [2024-07-15 21:54:37.728655] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
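bdev_verify reuses bdevperf rather than a dedicated tool; the invocation above requests a 5-second verify pass. Annotated, with the flag meanings inferred from the run output (queue depth 128 and 4096-byte IOs appear in the job banner, and -m 0x3 matches the two reactors started on cores 0 and 1):

  # Queue depth 128, 4 KiB IOs, verify workload, 5 s, core mask 0x3;
  # -C is passed through from blockdev.sh and is left unannotated here.
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3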
00:18:22.768 [2024-07-15 21:54:37.728945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:23.335 EAL: TSC is not safe to use in SMP mode 00:18:23.335 EAL: TSC is not invariant 00:18:23.335 [2024-07-15 21:54:38.239882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.335 [2024-07-15 21:54:38.314282] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:23.335 [2024-07-15 21:54:38.314342] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:23.335 [2024-07-15 21:54:38.317191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.335 [2024-07-15 21:54:38.317185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.335 [2024-07-15 21:54:38.373786] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:23.335 Running I/O for 5 seconds... 00:18:28.617 00:18:28.617 Latency(us) 00:18:28.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.617 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:28.617 Verification LBA range: start 0x0 length 0xa0000 00:18:28.617 Nvme0n1 : 5.01 20455.25 79.90 0.00 0.00 6248.57 491.52 9592.09 00:18:28.617 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:28.617 Verification LBA range: start 0xa0000 length 0xa0000 00:18:28.617 Nvme0n1 : 5.00 20621.85 80.55 0.00 0.00 6197.51 785.69 10664.50 00:18:28.617 =================================================================================================================== 00:18:28.617 Total : 41077.10 160.46 0.00 0.00 6222.94 491.52 10664.50 00:18:29.183 00:18:29.184 real 0m6.424s 00:18:29.184 user 0m11.525s 00:18:29.184 sys 0m0.580s 00:18:29.184 21:54:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:29.184 21:54:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:29.184 ************************************ 00:18:29.184 END TEST bdev_verify 00:18:29.184 ************************************ 00:18:29.184 21:54:44 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:29.184 21:54:44 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:29.184 21:54:44 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 16 -le 1 ']' 00:18:29.184 21:54:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:29.184 21:54:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:29.184 ************************************ 00:18:29.184 START TEST bdev_verify_big_io 00:18:29.184 ************************************ 00:18:29.184 21:54:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:29.184 [2024-07-15 21:54:44.207871] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
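
[editor's note] Every test in this section is launched through the same run_test wrapper, which produces the asterisk START/END banners and the real/user/sys timing blocks visible throughout the trace. A hedged sketch of that shape, assuming bash; the actual autotest_common.sh version also does xtrace and exit-status bookkeeping not shown here:

    run_test() {
        # Reconstruction implied by the banners in this log, not the real code.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
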
00:18:29.184 [2024-07-15 21:54:44.208171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:29.752 EAL: TSC is not safe to use in SMP mode 00:18:29.752 EAL: TSC is not invariant 00:18:29.752 [2024-07-15 21:54:44.773947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.752 [2024-07-15 21:54:44.876732] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:29.752 [2024-07-15 21:54:44.876812] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:29.752 [2024-07-15 21:54:44.880495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.752 [2024-07-15 21:54:44.880481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.010 [2024-07-15 21:54:44.939138] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:30.010 Running I/O for 5 seconds... 00:18:35.339 00:18:35.339 Latency(us) 00:18:35.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.339 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:35.339 Verification LBA range: start 0x0 length 0xa000 00:18:35.339 Nvme0n1 : 5.01 9577.73 598.61 0.00 0.00 13288.89 459.87 22639.73 00:18:35.339 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:35.339 Verification LBA range: start 0xa000 length 0xa000 00:18:35.339 Nvme0n1 : 5.01 9513.90 594.62 0.00 0.00 13376.15 532.48 27405.98 00:18:35.339 =================================================================================================================== 00:18:35.339 Total : 19091.63 1193.23 0.00 0.00 13332.37 459.87 27405.98 00:18:38.619 00:18:38.619 real 0m9.126s 00:18:38.619 user 0m16.860s 00:18:38.619 sys 0m0.604s 00:18:38.619 21:54:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:38.619 21:54:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.619 ************************************ 00:18:38.619 END TEST bdev_verify_big_io 00:18:38.619 ************************************ 00:18:38.619 21:54:53 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:38.619 21:54:53 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:38.619 21:54:53 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:18:38.619 21:54:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:38.619 21:54:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.619 ************************************ 00:18:38.619 START TEST bdev_write_zeroes 00:18:38.619 ************************************ 00:18:38.619 21:54:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:38.619 [2024-07-15 21:54:53.378961] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
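
[editor's note] The bdev_verify and bdev_verify_big_io runs differ only in I/O size (-o 4096 vs -o 65536), and the MiB/s columns follow directly from IOPS. A quick consistency check of the two Total rows above, plus the bandwidth gain from the 16x larger I/O:

    awk 'BEGIN {
        small = 41077.10 * 4096  / 2^20   # bdev_verify total: 160.46 MiB/s
        big   = 19091.63 * 65536 / 2^20   # bdev_verify_big_io total: 1193.23 MiB/s
        printf "4KiB: %.2f MiB/s, 64KiB: %.2f MiB/s, %.1fx more bandwidth\n",
               small, big, big / small
    }'
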
00:18:38.619 [2024-07-15 21:54:53.379231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:38.876 EAL: TSC is not safe to use in SMP mode 00:18:38.877 EAL: TSC is not invariant 00:18:38.877 [2024-07-15 21:54:53.887477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.877 [2024-07-15 21:54:53.957744] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:38.877 [2024-07-15 21:54:53.960035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.877 [2024-07-15 21:54:54.016707] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:39.135 Running I/O for 1 seconds... 00:18:40.066 00:18:40.066 Latency(us) 00:18:40.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.066 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:40.066 Nvme0n1 : 1.00 79038.70 308.74 0.00 0.00 1615.90 437.53 17754.31 00:18:40.066 =================================================================================================================== 00:18:40.066 Total : 79038.70 308.74 0.00 0.00 1615.90 437.53 17754.31 00:18:40.066 00:18:40.066 real 0m1.882s 00:18:40.066 user 0m1.328s 00:18:40.066 sys 0m0.551s 00:18:40.325 21:54:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:40.325 21:54:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:40.325 ************************************ 00:18:40.325 END TEST bdev_write_zeroes 00:18:40.325 ************************************ 00:18:40.325 21:54:55 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:40.325 21:54:55 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:40.325 21:54:55 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:18:40.325 21:54:55 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:40.325 21:54:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:40.325 ************************************ 00:18:40.325 START TEST bdev_json_nonenclosed 00:18:40.325 ************************************ 00:18:40.325 21:54:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:40.325 [2024-07-15 21:54:55.302327] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:40.325 [2024-07-15 21:54:55.302691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:40.892 EAL: TSC is not safe to use in SMP mode 00:18:40.892 EAL: TSC is not invariant 00:18:41.150 [2024-07-15 21:54:56.084057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.150 [2024-07-15 21:54:56.164589] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
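
[editor's note] Note the EAL core mask shrank from -c 0x3 in the verify runs to -c 0x1 here, which is why only one core is reported available and a single reactor starts (continued just below). The -c/-m masks are plain per-bit core selections; a one-liner to decode any of them:

    mask=0x3   # try 0x1 to reproduce the single-reactor case in this run
    for ((i = 0; i < 32; i++)); do
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done
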
00:18:41.150 [2024-07-15 21:54:56.166969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.150 [2024-07-15 21:54:56.167023] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:41.150 [2024-07-15 21:54:56.167034] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:41.150 [2024-07-15 21:54:56.167041] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:41.150 00:18:41.150 real 0m0.970s 00:18:41.150 user 0m0.167s 00:18:41.150 sys 0m0.803s 00:18:41.150 21:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1117 -- # es=234 00:18:41.150 21:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:41.150 ************************************ 00:18:41.150 END TEST bdev_json_nonenclosed 00:18:41.150 ************************************ 00:18:41.150 21:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:41.150 21:54:56 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 234 00:18:41.150 21:54:56 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:18:41.150 21:54:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:41.150 21:54:56 blockdev_nvme -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:18:41.150 21:54:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:41.150 21:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:41.150 ************************************ 00:18:41.150 START TEST bdev_json_nonarray 00:18:41.150 ************************************ 00:18:41.150 21:54:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:41.150 [2024-07-15 21:54:56.320432] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:41.150 [2024-07-15 21:54:56.320687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:42.087 EAL: TSC is not safe to use in SMP mode 00:18:42.087 EAL: TSC is not invariant 00:18:42.087 [2024-07-15 21:54:57.067309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.087 [2024-07-15 21:54:57.145556] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:42.087 [2024-07-15 21:54:57.147945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.087 [2024-07-15 21:54:57.148012] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
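
[editor's note] bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json config and expects the two json_config_prepare_ctx errors traced here. The repo's actual fixture files are not shown in this log; hypothetical shapes like these would plausibly trigger the same messages:

    # Hypothetical minimal reproductions, not the repo's fixtures.
    cat > nonenclosed.json <<'EOF'
    [ "subsystems" ]
    EOF
    # top-level value is not an object ->
    #   *ERROR*: Invalid JSON configuration: not enclosed in {}.

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # 'subsystems' is an object, not an array ->
    #   *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
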
00:18:42.087 [2024-07-15 21:54:57.148022] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:42.087 [2024-07-15 21:54:57.148030] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:42.087 00:18:42.087 real 0m0.932s 00:18:42.087 user 0m0.150s 00:18:42.087 sys 0m0.780s 00:18:42.087 21:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1117 -- # es=234 00:18:42.087 21:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:42.087 ************************************ 00:18:42.087 END TEST bdev_json_nonarray 00:18:42.087 ************************************ 00:18:42.087 21:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:42.345 21:54:57 blockdev_nvme -- common/autotest_common.sh@1136 -- # return 234 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:18:42.345 21:54:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:18:42.345 00:18:42.345 real 0m23.554s 00:18:42.345 user 0m34.519s 00:18:42.345 sys 0m5.462s 00:18:42.345 21:54:57 blockdev_nvme -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:42.345 ************************************ 00:18:42.345 END TEST blockdev_nvme 00:18:42.345 ************************************ 00:18:42.346 21:54:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.346 21:54:57 -- common/autotest_common.sh@1136 -- # return 0 00:18:42.346 21:54:57 -- spdk/autotest.sh@213 -- # uname -s 00:18:42.346 21:54:57 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:18:42.346 21:54:57 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:42.346 21:54:57 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:42.346 21:54:57 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:42.346 21:54:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.346 ************************************ 00:18:42.346 START TEST nvme 00:18:42.346 ************************************ 00:18:42.346 21:54:57 nvme -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:42.346 * Looking for test storage... 
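
[editor's note] Both negative tests exit with es=234, and the caller's next traced command is a bare true (blockdev.sh@782 and @785 above): the failure is expected and deliberately swallowed. A hedged reconstruction of what such a call site presumably looks like, with $rootdir/$testdir standing in for the absolute paths in the trace:

    # The trailing `|| true` matches the traced bare `true` that follows
    # the 234 return; the exact source line is an assumption.
    run_test "bdev_json_nonenclosed" "$rootdir/build/examples/bdevperf" \
        --json "$testdir/nonenclosed.json" \
        -q 128 -o 4096 -w write_zeroes -t 1 '' || true
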
00:18:42.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:42.346 21:54:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:42.604 hw.nic_uio.bdfs="0:16:0" 00:18:42.604 21:54:57 nvme -- nvme/nvme.sh@79 -- # uname 00:18:42.604 21:54:57 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:18:42.604 21:54:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:18:42.604 21:54:57 nvme -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:18:42.604 21:54:57 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:42.604 21:54:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.604 ************************************ 00:18:42.604 START TEST nvme_reset 00:18:42.604 ************************************ 00:18:42.604 21:54:57 nvme.nvme_reset -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:18:43.171 EAL: TSC is not safe to use in SMP mode 00:18:43.171 EAL: TSC is not invariant 00:18:43.171 [2024-07-15 21:54:58.193733] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:43.171 Initializing NVMe Controllers 00:18:43.171 Skipping QEMU NVMe SSD at 0000:00:10.0 00:18:43.171 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:18:43.171 00:18:43.171 real 0m0.541s 00:18:43.171 user 0m0.008s 00:18:43.171 sys 0m0.535s 00:18:43.171 21:54:58 nvme.nvme_reset -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:43.171 21:54:58 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:18:43.171 ************************************ 00:18:43.171 END TEST nvme_reset 00:18:43.171 ************************************ 00:18:43.171 21:54:58 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:43.171 21:54:58 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:18:43.171 21:54:58 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:43.171 21:54:58 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:43.171 21:54:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:43.171 ************************************ 00:18:43.171 START TEST nvme_identify 00:18:43.171 ************************************ 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1117 -- # nvme_identify 00:18:43.171 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:18:43.171 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:18:43.171 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:18:43.171 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1507 -- # bdfs=() 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1507 -- # local bdfs 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:18:43.171 21:54:58 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # printf '%s\n' 
0000:00:10.0 00:18:43.171 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:18:43.739 EAL: TSC is not safe to use in SMP mode 00:18:43.739 EAL: TSC is not invariant 00:18:43.739 [2024-07-15 21:54:58.824375] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:43.739 ===================================================== 00:18:43.739 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:43.739 ===================================================== 00:18:43.739 Controller Capabilities/Features 00:18:43.740 ================================ 00:18:43.740 Vendor ID: 1b36 00:18:43.740 Subsystem Vendor ID: 1af4 00:18:43.740 Serial Number: 12340 00:18:43.740 Model Number: QEMU NVMe Ctrl 00:18:43.740 Firmware Version: 8.0.0 00:18:43.740 Recommended Arb Burst: 6 00:18:43.740 IEEE OUI Identifier: 00 54 52 00:18:43.740 Multi-path I/O 00:18:43.740 May have multiple subsystem ports: No 00:18:43.740 May have multiple controllers: No 00:18:43.740 Associated with SR-IOV VF: No 00:18:43.740 Max Data Transfer Size: 524288 00:18:43.740 Max Number of Namespaces: 256 00:18:43.740 Max Number of I/O Queues: 64 00:18:43.740 NVMe Specification Version (VS): 1.4 00:18:43.740 NVMe Specification Version (Identify): 1.4 00:18:43.740 Maximum Queue Entries: 2048 00:18:43.740 Contiguous Queues Required: Yes 00:18:43.740 Arbitration Mechanisms Supported 00:18:43.740 Weighted Round Robin: Not Supported 00:18:43.740 Vendor Specific: Not Supported 00:18:43.740 Reset Timeout: 7500 ms 00:18:43.740 Doorbell Stride: 4 bytes 00:18:43.740 NVM Subsystem Reset: Not Supported 00:18:43.740 Command Sets Supported 00:18:43.740 NVM Command Set: Supported 00:18:43.740 Boot Partition: Not Supported 00:18:43.740 Memory Page Size Minimum: 4096 bytes 00:18:43.740 Memory Page Size Maximum: 65536 bytes 00:18:43.740 Persistent Memory Region: Not Supported 00:18:43.740 Optional Asynchronous Events Supported 00:18:43.740 Namespace Attribute Notices: Supported 00:18:43.740 Firmware Activation Notices: Not Supported 00:18:43.740 ANA Change Notices: Not Supported 00:18:43.740 PLE Aggregate Log Change Notices: Not Supported 00:18:43.740 LBA Status Info Alert Notices: Not Supported 00:18:43.740 EGE Aggregate Log Change Notices: Not Supported 00:18:43.740 Normal NVM Subsystem Shutdown event: Not Supported 00:18:43.740 Zone Descriptor Change Notices: Not Supported 00:18:43.740 Discovery Log Change Notices: Not Supported 00:18:43.740 Controller Attributes 00:18:43.740 128-bit Host Identifier: Not Supported 00:18:43.740 Non-Operational Permissive Mode: Not Supported 00:18:43.740 NVM Sets: Not Supported 00:18:43.740 Read Recovery Levels: Not Supported 00:18:43.740 Endurance Groups: Not Supported 00:18:43.740 Predictable Latency Mode: Not Supported 00:18:43.740 Traffic Based Keep ALive: Not Supported 00:18:43.740 Namespace Granularity: Not Supported 00:18:43.740 SQ Associations: Not Supported 00:18:43.740 UUID List: Not Supported 00:18:43.740 Multi-Domain Subsystem: Not Supported 00:18:43.740 Fixed Capacity Management: Not Supported 00:18:43.740 Variable Capacity Management: Not Supported 00:18:43.740 Delete Endurance Group: Not Supported 00:18:43.740 Delete NVM Set: Not Supported 00:18:43.740 Extended LBA Formats Supported: Supported 00:18:43.740 Flexible Data Placement Supported: Not Supported 00:18:43.740 00:18:43.740 Controller Memory Buffer Support 00:18:43.740 ================================ 00:18:43.740 Supported: No 00:18:43.740 00:18:43.740 
Persistent Memory Region Support 00:18:43.740 ================================ 00:18:43.740 Supported: No 00:18:43.740 00:18:43.740 Admin Command Set Attributes 00:18:43.740 ============================ 00:18:43.740 Security Send/Receive: Not Supported 00:18:43.740 Format NVM: Supported 00:18:43.740 Firmware Activate/Download: Not Supported 00:18:43.740 Namespace Management: Supported 00:18:43.740 Device Self-Test: Not Supported 00:18:43.740 Directives: Supported 00:18:43.740 NVMe-MI: Not Supported 00:18:43.740 Virtualization Management: Not Supported 00:18:43.740 Doorbell Buffer Config: Supported 00:18:43.740 Get LBA Status Capability: Not Supported 00:18:43.740 Command & Feature Lockdown Capability: Not Supported 00:18:43.740 Abort Command Limit: 4 00:18:43.740 Async Event Request Limit: 4 00:18:43.740 Number of Firmware Slots: N/A 00:18:43.740 Firmware Slot 1 Read-Only: N/A 00:18:43.740 Firmware Activation Without Reset: N/A 00:18:43.740 Multiple Update Detection Support: N/A 00:18:43.740 Firmware Update Granularity: No Information Provided 00:18:43.740 Per-Namespace SMART Log: Yes 00:18:43.740 Asymmetric Namespace Access Log Page: Not Supported 00:18:43.740 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:18:43.740 Command Effects Log Page: Supported 00:18:43.740 Get Log Page Extended Data: Supported 00:18:43.740 Telemetry Log Pages: Not Supported 00:18:43.740 Persistent Event Log Pages: Not Supported 00:18:43.740 Supported Log Pages Log Page: May Support 00:18:43.740 Commands Supported & Effects Log Page: Not Supported 00:18:43.740 Feature Identifiers & Effects Log Page:May Support 00:18:43.740 NVMe-MI Commands & Effects Log Page: May Support 00:18:43.740 Data Area 4 for Telemetry Log: Not Supported 00:18:43.740 Error Log Page Entries Supported: 1 00:18:43.740 Keep Alive: Not Supported 00:18:43.740 00:18:43.740 NVM Command Set Attributes 00:18:43.740 ========================== 00:18:43.740 Submission Queue Entry Size 00:18:43.740 Max: 64 00:18:43.740 Min: 64 00:18:43.740 Completion Queue Entry Size 00:18:43.740 Max: 16 00:18:43.740 Min: 16 00:18:43.740 Number of Namespaces: 256 00:18:43.740 Compare Command: Supported 00:18:43.740 Write Uncorrectable Command: Not Supported 00:18:43.740 Dataset Management Command: Supported 00:18:43.740 Write Zeroes Command: Supported 00:18:43.740 Set Features Save Field: Supported 00:18:43.740 Reservations: Not Supported 00:18:43.740 Timestamp: Supported 00:18:43.740 Copy: Supported 00:18:43.740 Volatile Write Cache: Present 00:18:43.740 Atomic Write Unit (Normal): 1 00:18:43.740 Atomic Write Unit (PFail): 1 00:18:43.740 Atomic Compare & Write Unit: 1 00:18:43.740 Fused Compare & Write: Not Supported 00:18:43.740 Scatter-Gather List 00:18:43.740 SGL Command Set: Supported 00:18:43.740 SGL Keyed: Not Supported 00:18:43.740 SGL Bit Bucket Descriptor: Not Supported 00:18:43.740 SGL Metadata Pointer: Not Supported 00:18:43.740 Oversized SGL: Not Supported 00:18:43.740 SGL Metadata Address: Not Supported 00:18:43.740 SGL Offset: Not Supported 00:18:43.740 Transport SGL Data Block: Not Supported 00:18:43.740 Replay Protected Memory Block: Not Supported 00:18:43.740 00:18:43.740 Firmware Slot Information 00:18:43.740 ========================= 00:18:43.740 Active slot: 1 00:18:43.740 Slot 1 Firmware Revision: 1.0 00:18:43.740 00:18:43.740 00:18:43.740 Commands Supported and Effects 00:18:43.740 ============================== 00:18:43.740 Admin Commands 00:18:43.740 -------------- 00:18:43.740 Delete I/O Submission Queue (00h): Supported 00:18:43.740 Create I/O 
Submission Queue (01h): Supported 00:18:43.740 Get Log Page (02h): Supported 00:18:43.740 Delete I/O Completion Queue (04h): Supported 00:18:43.740 Create I/O Completion Queue (05h): Supported 00:18:43.740 Identify (06h): Supported 00:18:43.740 Abort (08h): Supported 00:18:43.740 Set Features (09h): Supported 00:18:43.740 Get Features (0Ah): Supported 00:18:43.740 Asynchronous Event Request (0Ch): Supported 00:18:43.740 Namespace Attachment (15h): Supported NS-Inventory-Change 00:18:43.740 Directive Send (19h): Supported 00:18:43.740 Directive Receive (1Ah): Supported 00:18:43.740 Virtualization Management (1Ch): Supported 00:18:43.740 Doorbell Buffer Config (7Ch): Supported 00:18:43.740 Format NVM (80h): Supported LBA-Change 00:18:43.740 I/O Commands 00:18:43.740 ------------ 00:18:43.740 Flush (00h): Supported LBA-Change 00:18:43.740 Write (01h): Supported LBA-Change 00:18:43.740 Read (02h): Supported 00:18:43.740 Compare (05h): Supported 00:18:43.740 Write Zeroes (08h): Supported LBA-Change 00:18:43.740 Dataset Management (09h): Supported LBA-Change 00:18:43.740 Unknown (0Ch): Supported 00:18:43.740 Unknown (12h): Supported 00:18:43.740 Copy (19h): Supported LBA-Change 00:18:43.740 Unknown (1Dh): Supported LBA-Change 00:18:43.740 00:18:43.740 Error Log 00:18:43.740 ========= 00:18:43.740 00:18:43.740 Arbitration 00:18:43.740 =========== 00:18:43.740 Arbitration Burst: no limit 00:18:43.740 00:18:43.740 Power Management 00:18:43.740 ================ 00:18:43.740 Number of Power States: 1 00:18:43.740 Current Power State: Power State #0 00:18:43.740 Power State #0: 00:18:43.740 Max Power: 25.00 W 00:18:43.740 Non-Operational State: Operational 00:18:43.740 Entry Latency: 16 microseconds 00:18:43.740 Exit Latency: 4 microseconds 00:18:43.740 Relative Read Throughput: 0 00:18:43.740 Relative Read Latency: 0 00:18:43.740 Relative Write Throughput: 0 00:18:43.740 Relative Write Latency: 0 00:18:43.740 Idle Power: Not Reported 00:18:43.740 Active Power: Not Reported 00:18:43.740 Non-Operational Permissive Mode: Not Supported 00:18:43.740 00:18:43.740 Health Information 00:18:43.740 ================== 00:18:43.740 Critical Warnings: 00:18:43.740 Available Spare Space: OK 00:18:43.740 Temperature: OK 00:18:43.740 Device Reliability: OK 00:18:43.740 Read Only: No 00:18:43.740 Volatile Memory Backup: OK 00:18:43.740 Current Temperature: 323 Kelvin (50 Celsius) 00:18:43.740 Temperature Threshold: 343 Kelvin (70 Celsius) 00:18:43.740 Available Spare: 0% 00:18:43.740 Available Spare Threshold: 0% 00:18:43.740 Life Percentage Used: 0% 00:18:43.741 Data Units Read: 13903 00:18:43.741 Data Units Written: 13888 00:18:43.741 Host Read Commands: 301400 00:18:43.741 Host Write Commands: 301249 00:18:43.741 Controller Busy Time: 0 minutes 00:18:43.741 Power Cycles: 0 00:18:43.741 Power On Hours: 0 hours 00:18:43.741 Unsafe Shutdowns: 0 00:18:43.741 Unrecoverable Media Errors: 0 00:18:43.741 Lifetime Error Log Entries: 0 00:18:43.741 Warning Temperature Time: 0 minutes 00:18:43.741 Critical Temperature Time: 0 minutes 00:18:43.741 00:18:43.741 Number of Queues 00:18:43.741 ================ 00:18:43.741 Number of I/O Submission Queues: 64 00:18:43.741 Number of I/O Completion Queues: 64 00:18:43.741 00:18:43.741 ZNS Specific Controller Data 00:18:43.741 ============================ 00:18:43.741 Zone Append Size Limit: 0 00:18:43.741 00:18:43.741 00:18:43.741 Active Namespaces 00:18:43.741 ================= 00:18:43.741 Namespace ID:1 00:18:43.741 Error Recovery Timeout: Unlimited 00:18:43.741 Command Set 
Identifier: NVM (00h) 00:18:43.741 Deallocate: Supported 00:18:43.741 Deallocated/Unwritten Error: Supported 00:18:43.741 Deallocated Read Value: All 0x00 00:18:43.741 Deallocate in Write Zeroes: Not Supported 00:18:43.741 Deallocated Guard Field: 0xFFFF 00:18:43.741 Flush: Supported 00:18:43.741 Reservation: Not Supported 00:18:43.741 Namespace Sharing Capabilities: Private 00:18:43.741 Size (in LBAs): 1310720 (5GiB) 00:18:43.741 Capacity (in LBAs): 1310720 (5GiB) 00:18:43.741 Utilization (in LBAs): 1310720 (5GiB) 00:18:43.741 Thin Provisioning: Not Supported 00:18:43.741 Per-NS Atomic Units: No 00:18:43.741 Maximum Single Source Range Length: 128 00:18:43.741 Maximum Copy Length: 128 00:18:43.741 Maximum Source Range Count: 128 00:18:43.741 NGUID/EUI64 Never Reused: No 00:18:43.741 Namespace Write Protected: No 00:18:43.741 Number of LBA Formats: 8 00:18:43.741 Current LBA Format: LBA Format #04 00:18:43.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:43.741 LBA Format #01: Data Size: 512 Metadata Size: 8 00:18:43.741 LBA Format #02: Data Size: 512 Metadata Size: 16 00:18:43.741 LBA Format #03: Data Size: 512 Metadata Size: 64 00:18:43.741 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:18:43.741 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:18:43.741 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:18:43.741 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:18:43.741 00:18:43.741 NVM Specific Namespace Data 00:18:43.741 =========================== 00:18:43.741 Logical Block Storage Tag Mask: 0 00:18:43.741 Protection Information Capabilities: 00:18:43.741 16b Guard Protection Information Storage Tag Support: No 00:18:43.741 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:18:43.741 Storage Tag Check Read Support: No 00:18:43.741 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:43.741 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:18:43.741 21:54:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:18:44.308 EAL: TSC is not safe to use in SMP mode 00:18:44.308 EAL: TSC is not invariant 00:18:44.308 [2024-07-15 21:54:59.365424] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:44.308 ===================================================== 00:18:44.308 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:44.308 ===================================================== 00:18:44.308 Controller Capabilities/Features 00:18:44.308 ================================ 00:18:44.308 Vendor ID: 1b36 00:18:44.308 Subsystem Vendor ID: 1af4 00:18:44.308 Serial Number: 12340 00:18:44.308 Model Number: QEMU NVMe Ctrl 
00:18:44.308 Firmware Version: 8.0.0 00:18:44.308 Recommended Arb Burst: 6 00:18:44.308 IEEE OUI Identifier: 00 54 52 00:18:44.308 Multi-path I/O 00:18:44.308 May have multiple subsystem ports: No 00:18:44.308 May have multiple controllers: No 00:18:44.308 Associated with SR-IOV VF: No 00:18:44.308 Max Data Transfer Size: 524288 00:18:44.308 Max Number of Namespaces: 256 00:18:44.308 Max Number of I/O Queues: 64 00:18:44.308 NVMe Specification Version (VS): 1.4 00:18:44.308 NVMe Specification Version (Identify): 1.4 00:18:44.308 Maximum Queue Entries: 2048 00:18:44.308 Contiguous Queues Required: Yes 00:18:44.308 Arbitration Mechanisms Supported 00:18:44.308 Weighted Round Robin: Not Supported 00:18:44.308 Vendor Specific: Not Supported 00:18:44.308 Reset Timeout: 7500 ms 00:18:44.308 Doorbell Stride: 4 bytes 00:18:44.308 NVM Subsystem Reset: Not Supported 00:18:44.308 Command Sets Supported 00:18:44.308 NVM Command Set: Supported 00:18:44.308 Boot Partition: Not Supported 00:18:44.308 Memory Page Size Minimum: 4096 bytes 00:18:44.308 Memory Page Size Maximum: 65536 bytes 00:18:44.308 Persistent Memory Region: Not Supported 00:18:44.308 Optional Asynchronous Events Supported 00:18:44.308 Namespace Attribute Notices: Supported 00:18:44.308 Firmware Activation Notices: Not Supported 00:18:44.308 ANA Change Notices: Not Supported 00:18:44.308 PLE Aggregate Log Change Notices: Not Supported 00:18:44.308 LBA Status Info Alert Notices: Not Supported 00:18:44.308 EGE Aggregate Log Change Notices: Not Supported 00:18:44.308 Normal NVM Subsystem Shutdown event: Not Supported 00:18:44.308 Zone Descriptor Change Notices: Not Supported 00:18:44.308 Discovery Log Change Notices: Not Supported 00:18:44.308 Controller Attributes 00:18:44.308 128-bit Host Identifier: Not Supported 00:18:44.308 Non-Operational Permissive Mode: Not Supported 00:18:44.308 NVM Sets: Not Supported 00:18:44.308 Read Recovery Levels: Not Supported 00:18:44.308 Endurance Groups: Not Supported 00:18:44.308 Predictable Latency Mode: Not Supported 00:18:44.308 Traffic Based Keep ALive: Not Supported 00:18:44.308 Namespace Granularity: Not Supported 00:18:44.308 SQ Associations: Not Supported 00:18:44.308 UUID List: Not Supported 00:18:44.308 Multi-Domain Subsystem: Not Supported 00:18:44.308 Fixed Capacity Management: Not Supported 00:18:44.308 Variable Capacity Management: Not Supported 00:18:44.308 Delete Endurance Group: Not Supported 00:18:44.308 Delete NVM Set: Not Supported 00:18:44.308 Extended LBA Formats Supported: Supported 00:18:44.308 Flexible Data Placement Supported: Not Supported 00:18:44.308 00:18:44.308 Controller Memory Buffer Support 00:18:44.308 ================================ 00:18:44.308 Supported: No 00:18:44.308 00:18:44.308 Persistent Memory Region Support 00:18:44.308 ================================ 00:18:44.308 Supported: No 00:18:44.308 00:18:44.308 Admin Command Set Attributes 00:18:44.309 ============================ 00:18:44.309 Security Send/Receive: Not Supported 00:18:44.309 Format NVM: Supported 00:18:44.309 Firmware Activate/Download: Not Supported 00:18:44.309 Namespace Management: Supported 00:18:44.309 Device Self-Test: Not Supported 00:18:44.309 Directives: Supported 00:18:44.309 NVMe-MI: Not Supported 00:18:44.309 Virtualization Management: Not Supported 00:18:44.309 Doorbell Buffer Config: Supported 00:18:44.309 Get LBA Status Capability: Not Supported 00:18:44.309 Command & Feature Lockdown Capability: Not Supported 00:18:44.309 Abort Command Limit: 4 00:18:44.309 Async Event Request 
Limit: 4 00:18:44.309 Number of Firmware Slots: N/A 00:18:44.309 Firmware Slot 1 Read-Only: N/A 00:18:44.309 Firmware Activation Without Reset: N/A 00:18:44.309 Multiple Update Detection Support: N/A 00:18:44.309 Firmware Update Granularity: No Information Provided 00:18:44.309 Per-Namespace SMART Log: Yes 00:18:44.309 Asymmetric Namespace Access Log Page: Not Supported 00:18:44.309 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:18:44.309 Command Effects Log Page: Supported 00:18:44.309 Get Log Page Extended Data: Supported 00:18:44.309 Telemetry Log Pages: Not Supported 00:18:44.309 Persistent Event Log Pages: Not Supported 00:18:44.309 Supported Log Pages Log Page: May Support 00:18:44.309 Commands Supported & Effects Log Page: Not Supported 00:18:44.309 Feature Identifiers & Effects Log Page:May Support 00:18:44.309 NVMe-MI Commands & Effects Log Page: May Support 00:18:44.309 Data Area 4 for Telemetry Log: Not Supported 00:18:44.309 Error Log Page Entries Supported: 1 00:18:44.309 Keep Alive: Not Supported 00:18:44.309 00:18:44.309 NVM Command Set Attributes 00:18:44.309 ========================== 00:18:44.309 Submission Queue Entry Size 00:18:44.309 Max: 64 00:18:44.309 Min: 64 00:18:44.309 Completion Queue Entry Size 00:18:44.309 Max: 16 00:18:44.309 Min: 16 00:18:44.309 Number of Namespaces: 256 00:18:44.309 Compare Command: Supported 00:18:44.309 Write Uncorrectable Command: Not Supported 00:18:44.309 Dataset Management Command: Supported 00:18:44.309 Write Zeroes Command: Supported 00:18:44.309 Set Features Save Field: Supported 00:18:44.309 Reservations: Not Supported 00:18:44.309 Timestamp: Supported 00:18:44.309 Copy: Supported 00:18:44.309 Volatile Write Cache: Present 00:18:44.309 Atomic Write Unit (Normal): 1 00:18:44.309 Atomic Write Unit (PFail): 1 00:18:44.309 Atomic Compare & Write Unit: 1 00:18:44.309 Fused Compare & Write: Not Supported 00:18:44.309 Scatter-Gather List 00:18:44.309 SGL Command Set: Supported 00:18:44.309 SGL Keyed: Not Supported 00:18:44.309 SGL Bit Bucket Descriptor: Not Supported 00:18:44.309 SGL Metadata Pointer: Not Supported 00:18:44.309 Oversized SGL: Not Supported 00:18:44.309 SGL Metadata Address: Not Supported 00:18:44.309 SGL Offset: Not Supported 00:18:44.309 Transport SGL Data Block: Not Supported 00:18:44.309 Replay Protected Memory Block: Not Supported 00:18:44.309 00:18:44.309 Firmware Slot Information 00:18:44.309 ========================= 00:18:44.309 Active slot: 1 00:18:44.309 Slot 1 Firmware Revision: 1.0 00:18:44.309 00:18:44.309 00:18:44.309 Commands Supported and Effects 00:18:44.309 ============================== 00:18:44.309 Admin Commands 00:18:44.309 -------------- 00:18:44.309 Delete I/O Submission Queue (00h): Supported 00:18:44.309 Create I/O Submission Queue (01h): Supported 00:18:44.309 Get Log Page (02h): Supported 00:18:44.309 Delete I/O Completion Queue (04h): Supported 00:18:44.309 Create I/O Completion Queue (05h): Supported 00:18:44.309 Identify (06h): Supported 00:18:44.309 Abort (08h): Supported 00:18:44.309 Set Features (09h): Supported 00:18:44.309 Get Features (0Ah): Supported 00:18:44.309 Asynchronous Event Request (0Ch): Supported 00:18:44.309 Namespace Attachment (15h): Supported NS-Inventory-Change 00:18:44.309 Directive Send (19h): Supported 00:18:44.309 Directive Receive (1Ah): Supported 00:18:44.309 Virtualization Management (1Ch): Supported 00:18:44.309 Doorbell Buffer Config (7Ch): Supported 00:18:44.309 Format NVM (80h): Supported LBA-Change 00:18:44.309 I/O Commands 00:18:44.309 ------------ 
00:18:44.309 Flush (00h): Supported LBA-Change 00:18:44.309 Write (01h): Supported LBA-Change 00:18:44.309 Read (02h): Supported 00:18:44.309 Compare (05h): Supported 00:18:44.309 Write Zeroes (08h): Supported LBA-Change 00:18:44.309 Dataset Management (09h): Supported LBA-Change 00:18:44.309 Unknown (0Ch): Supported 00:18:44.309 Unknown (12h): Supported 00:18:44.309 Copy (19h): Supported LBA-Change 00:18:44.309 Unknown (1Dh): Supported LBA-Change 00:18:44.309 00:18:44.309 Error Log 00:18:44.309 ========= 00:18:44.309 00:18:44.309 Arbitration 00:18:44.309 =========== 00:18:44.309 Arbitration Burst: no limit 00:18:44.309 00:18:44.309 Power Management 00:18:44.309 ================ 00:18:44.309 Number of Power States: 1 00:18:44.309 Current Power State: Power State #0 00:18:44.309 Power State #0: 00:18:44.309 Max Power: 25.00 W 00:18:44.309 Non-Operational State: Operational 00:18:44.309 Entry Latency: 16 microseconds 00:18:44.309 Exit Latency: 4 microseconds 00:18:44.309 Relative Read Throughput: 0 00:18:44.309 Relative Read Latency: 0 00:18:44.309 Relative Write Throughput: 0 00:18:44.309 Relative Write Latency: 0 00:18:44.309 Idle Power: Not Reported 00:18:44.309 Active Power: Not Reported 00:18:44.309 Non-Operational Permissive Mode: Not Supported 00:18:44.309 00:18:44.309 Health Information 00:18:44.309 ================== 00:18:44.309 Critical Warnings: 00:18:44.309 Available Spare Space: OK 00:18:44.309 Temperature: OK 00:18:44.309 Device Reliability: OK 00:18:44.309 Read Only: No 00:18:44.309 Volatile Memory Backup: OK 00:18:44.309 Current Temperature: 323 Kelvin (50 Celsius) 00:18:44.309 Temperature Threshold: 343 Kelvin (70 Celsius) 00:18:44.309 Available Spare: 0% 00:18:44.309 Available Spare Threshold: 0% 00:18:44.309 Life Percentage Used: 0% 00:18:44.309 Data Units Read: 13903 00:18:44.309 Data Units Written: 13888 00:18:44.309 Host Read Commands: 301400 00:18:44.309 Host Write Commands: 301249 00:18:44.309 Controller Busy Time: 0 minutes 00:18:44.309 Power Cycles: 0 00:18:44.309 Power On Hours: 0 hours 00:18:44.309 Unsafe Shutdowns: 0 00:18:44.309 Unrecoverable Media Errors: 0 00:18:44.309 Lifetime Error Log Entries: 0 00:18:44.309 Warning Temperature Time: 0 minutes 00:18:44.309 Critical Temperature Time: 0 minutes 00:18:44.309 00:18:44.309 Number of Queues 00:18:44.309 ================ 00:18:44.309 Number of I/O Submission Queues: 64 00:18:44.309 Number of I/O Completion Queues: 64 00:18:44.309 00:18:44.309 ZNS Specific Controller Data 00:18:44.309 ============================ 00:18:44.309 Zone Append Size Limit: 0 00:18:44.309 00:18:44.309 00:18:44.309 Active Namespaces 00:18:44.309 ================= 00:18:44.309 Namespace ID:1 00:18:44.309 Error Recovery Timeout: Unlimited 00:18:44.309 Command Set Identifier: NVM (00h) 00:18:44.309 Deallocate: Supported 00:18:44.309 Deallocated/Unwritten Error: Supported 00:18:44.309 Deallocated Read Value: All 0x00 00:18:44.309 Deallocate in Write Zeroes: Not Supported 00:18:44.309 Deallocated Guard Field: 0xFFFF 00:18:44.309 Flush: Supported 00:18:44.309 Reservation: Not Supported 00:18:44.309 Namespace Sharing Capabilities: Private 00:18:44.309 Size (in LBAs): 1310720 (5GiB) 00:18:44.309 Capacity (in LBAs): 1310720 (5GiB) 00:18:44.309 Utilization (in LBAs): 1310720 (5GiB) 00:18:44.309 Thin Provisioning: Not Supported 00:18:44.309 Per-NS Atomic Units: No 00:18:44.309 Maximum Single Source Range Length: 128 00:18:44.309 Maximum Copy Length: 128 00:18:44.309 Maximum Source Range Count: 128 00:18:44.309 NGUID/EUI64 Never Reused: No 
00:18:44.309 Namespace Write Protected: No 00:18:44.309 Number of LBA Formats: 8 00:18:44.309 Current LBA Format: LBA Format #04 00:18:44.309 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:44.309 LBA Format #01: Data Size: 512 Metadata Size: 8 00:18:44.309 LBA Format #02: Data Size: 512 Metadata Size: 16 00:18:44.309 LBA Format #03: Data Size: 512 Metadata Size: 64 00:18:44.309 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:18:44.309 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:18:44.309 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:18:44.309 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:18:44.309 00:18:44.309 NVM Specific Namespace Data 00:18:44.309 =========================== 00:18:44.309 Logical Block Storage Tag Mask: 0 00:18:44.309 Protection Information Capabilities: 00:18:44.309 16b Guard Protection Information Storage Tag Support: No 00:18:44.309 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:18:44.309 Storage Tag Check Read Support: No 00:18:44.309 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.309 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.310 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:44.310 00:18:44.310 real 0m1.131s 00:18:44.310 user 0m0.033s 00:18:44.310 sys 0m1.114s 00:18:44.310 21:54:59 nvme.nvme_identify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:44.310 21:54:59 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:18:44.310 ************************************ 00:18:44.310 END TEST nvme_identify 00:18:44.310 ************************************ 00:18:44.310 21:54:59 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:44.310 21:54:59 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:18:44.310 21:54:59 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:44.310 21:54:59 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:44.310 21:54:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:44.310 ************************************ 00:18:44.310 START TEST nvme_perf 00:18:44.310 ************************************ 00:18:44.310 21:54:59 nvme.nvme_perf -- common/autotest_common.sh@1117 -- # nvme_perf 00:18:44.310 21:54:59 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:18:44.883 EAL: TSC is not safe to use in SMP mode 00:18:44.883 EAL: TSC is not invariant 00:18:44.883 [2024-07-15 21:54:59.965528] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:45.817 Initializing NVMe Controllers 00:18:45.817 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:45.817 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:45.817 Initialization complete. Launching workers. 
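
[editor's note] The nvme_identify test follows the pattern traced at nvme.sh@12 through @16 earlier in this section: collect controller addresses via gen_nvme.sh piped to jq, run a global identify, then re-identify each controller by its PCIe transport ID, which is why the same QEMU controller dump appears twice above. Stitching the traced commands back together ($rootdir is /home/vagrant/spdk_repo/spdk in this run):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    "$rootdir/build/bin/spdk_nvme_identify" -i 0
    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done
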
00:18:45.817 ======================================================== 00:18:45.817 Latency(us) 00:18:45.817 Device Information : IOPS MiB/s Average min max 00:18:45.817 PCIE (0000:00:10.0) NSID 1 from core 0: 83125.00 974.12 1540.54 171.79 3589.02 00:18:45.817 ======================================================== 00:18:45.817 Total : 83125.00 974.12 1540.54 171.79 3589.02 00:18:45.817 00:18:45.817 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:45.817 ================================================================================= 00:18:45.817 1.00000% : 1325.616us 00:18:45.817 10.00000% : 1392.641us 00:18:45.817 25.00000% : 1437.325us 00:18:45.817 50.00000% : 1496.903us 00:18:45.817 75.00000% : 1571.376us 00:18:45.817 90.00000% : 1772.452us 00:18:45.817 95.00000% : 2010.765us 00:18:45.817 98.00000% : 2144.816us 00:18:45.817 99.00000% : 2204.394us 00:18:45.817 99.50000% : 2249.078us 00:18:45.817 99.90000% : 3351.275us 00:18:45.817 99.99000% : 3559.799us 00:18:45.817 99.99900% : 3589.588us 00:18:45.817 99.99990% : 3589.588us 00:18:45.817 99.99999% : 3589.588us 00:18:45.817 00:18:45.817 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:45.817 ============================================================================== 00:18:45.817 Range in us Cumulative IO count 00:18:45.817 171.287 - 172.218: 0.0012% ( 1) 00:18:45.817 180.596 - 181.527: 0.0024% ( 1) 00:18:45.817 186.182 - 187.113: 0.0036% ( 1) 00:18:45.817 187.113 - 188.044: 0.0048% ( 1) 00:18:45.817 188.044 - 188.975: 0.0060% ( 1) 00:18:45.817 188.975 - 189.906: 0.0096% ( 3) 00:18:45.817 189.906 - 190.837: 0.0108% ( 1) 00:18:45.817 190.837 - 191.767: 0.0120% ( 1) 00:18:45.817 194.560 - 195.491: 0.0132% ( 1) 00:18:45.817 204.800 - 205.731: 0.0144% ( 1) 00:18:45.817 205.731 - 206.662: 0.0156% ( 1) 00:18:45.817 206.662 - 207.593: 0.0180% ( 2) 00:18:45.817 208.524 - 209.455: 0.0192% ( 1) 00:18:45.817 209.455 - 210.386: 0.0205% ( 1) 00:18:45.817 210.386 - 211.317: 0.0217% ( 1) 00:18:45.817 213.178 - 214.109: 0.0229% ( 1) 00:18:45.817 215.040 - 215.971: 0.0241% ( 1) 00:18:45.817 215.971 - 216.902: 0.0253% ( 1) 00:18:45.817 217.833 - 218.764: 0.0265% ( 1) 00:18:45.817 218.764 - 219.695: 0.0277% ( 1) 00:18:45.817 220.626 - 221.557: 0.0289% ( 1) 00:18:45.817 255.069 - 256.931: 0.0313% ( 2) 00:18:45.817 256.931 - 258.793: 0.0385% ( 6) 00:18:45.817 258.793 - 260.655: 0.0421% ( 3) 00:18:45.817 266.240 - 268.102: 0.0433% ( 1) 00:18:45.817 268.102 - 269.964: 0.0445% ( 1) 00:18:45.817 269.964 - 271.826: 0.0481% ( 3) 00:18:45.817 271.826 - 273.687: 0.0493% ( 1) 00:18:45.817 273.687 - 275.549: 0.0517% ( 2) 00:18:45.817 275.549 - 277.411: 0.0529% ( 1) 00:18:45.817 277.411 - 279.273: 0.0541% ( 1) 00:18:45.817 279.273 - 281.135: 0.0553% ( 1) 00:18:45.817 281.135 - 282.997: 0.0565% ( 1) 00:18:45.817 282.997 - 284.858: 0.0577% ( 1) 00:18:45.817 284.858 - 286.720: 0.0589% ( 1) 00:18:45.817 286.720 - 288.582: 0.0614% ( 2) 00:18:45.817 288.582 - 290.444: 0.0626% ( 1) 00:18:45.817 290.444 - 292.306: 0.0638% ( 1) 00:18:45.817 292.306 - 294.167: 0.0662% ( 2) 00:18:45.817 294.167 - 296.029: 0.0674% ( 1) 00:18:45.817 296.029 - 297.891: 0.0686% ( 1) 00:18:45.817 562.270 - 565.993: 0.0698% ( 1) 00:18:45.817 1094.750 - 1102.197: 0.0722% ( 2) 00:18:45.817 1109.644 - 1117.092: 0.0746% ( 2) 00:18:45.817 1117.092 - 1124.539: 0.0818% ( 6) 00:18:45.817 1124.539 - 1131.986: 0.0878% ( 5) 00:18:45.817 1131.986 - 1139.434: 0.0938% ( 5) 00:18:45.817 1139.434 - 1146.881: 0.0998% ( 5) 00:18:45.817 1146.881 - 1154.328: 0.1059% ( 5) 
00:18:45.817 [latency histogram buckets from 1154.328us through 3589.588us elided; cumulative IO count reaches 100.0000% ( 4) at 3589.588us] 00:18:46.076
21:55:01 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:18:46.641 EAL: TSC is not safe to use in SMP mode
00:18:46.641 EAL: TSC is not invariant
00:18:46.641 [2024-07-15 21:55:01.795822] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:18:48.023 Initializing NVMe Controllers
00:18:48.023 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:18:48.023 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:18:48.023 Initialization complete. Launching workers.
00:18:48.023 ========================================================
00:18:48.023 Latency(us)
00:18:48.023 Device Information : IOPS MiB/s Average min max
00:18:48.023 PCIE (0000:00:10.0) NSID 1 from core 0: 81340.29 953.21 1573.55 675.04 9635.96
00:18:48.023 ========================================================
00:18:48.023 Total : 81340.29 953.21 1573.55 675.04 9635.96
00:18:48.023
00:18:48.023 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:18:48.023 =================================================================================
00:18:48.023 1.00000% : 1072.408us
00:18:48.023 10.00000% : 1288.379us
00:18:48.023 25.00000% : 1407.536us
00:18:48.023 50.00000% : 1511.797us
00:18:48.023 75.00000% : 1675.638us
00:18:48.023 90.00000% : 1980.976us
00:18:48.023 95.00000% : 2129.922us
00:18:48.023 98.00000% : 2308.656us
00:18:48.023 99.00000% : 2502.285us
00:18:48.023 99.50000% : 2695.915us
00:18:48.023 99.90000% : 3604.483us
00:18:48.023 99.99000% : 8996.312us
00:18:48.023 99.99900% : 9651.673us
00:18:48.023 99.99990% : 9651.673us
00:18:48.023 99.99999% : 9651.673us
00:18:48.023
00:18:48.023 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:18:48.023 ==============================================================================
00:18:48.023 Range in us Cumulative IO count
00:18:48.023 [latency histogram buckets from 673.979us through 9651.673us elided; cumulative IO count reaches 100.0000% ( 1) at 9651.673us] 00:18:48.026
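The -LL option on the spdk_nvme_perf invocation above is what produced both the percentile summary and the per-bucket latency histogram condensed here. A minimal sketch of reproducing this run outside the CI harness, assuming a built SPDK tree and an NVMe controller already set up for SPDK use (the path and flags mirror the log; everything else about the target environment may differ):

  cd /home/vagrant/spdk_repo/spdk
  # -q 128: queue depth; -w write: 100% writes; -o 12288: 12 KiB I/O size;
  # -t 1: run for one second; -L doubled: latency tracking, which in this run
  # also emitted the bucket histograms; -i 0: shared-memory instance id,
  # matching the other tools in this session
  ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

00:18:48.283 21:55:03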
nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:18:48.283 00:18:48.283 real 0m4.005s 00:18:48.283 user 0m2.651s 00:18:48.283 sys 0m1.350s 00:18:48.283 21:55:03 nvme.nvme_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:48.283 21:55:03 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:18:48.283 ************************************ 00:18:48.283 END TEST nvme_perf 00:18:48.283 ************************************ 00:18:48.541 21:55:03 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:48.541 21:55:03 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:18:48.541 21:55:03 nvme -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:18:48.541 21:55:03 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:48.541 21:55:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.541 ************************************ 00:18:48.541 START TEST nvme_hello_world 00:18:48.541 ************************************ 00:18:48.541 21:55:03 nvme.nvme_hello_world -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:18:49.106 EAL: TSC is not safe to use in SMP mode 00:18:49.106 EAL: TSC is not invariant 00:18:49.106 [2024-07-15 21:55:04.082944] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:49.106 Initializing NVMe Controllers 00:18:49.106 Attaching to 0000:00:10.0 00:18:49.106 Attached to 0000:00:10.0 00:18:49.106 Namespace ID: 1 size: 5GB 00:18:49.106 Initialization complete. 00:18:49.106 INFO: using host memory buffer for IO 00:18:49.106 Hello world! 00:18:49.106 00:18:49.106 real 0m0.614s 00:18:49.106 user 0m0.000s 00:18:49.106 sys 0m0.616s 00:18:49.106 21:55:04 nvme.nvme_hello_world -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:49.106 21:55:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:49.106 ************************************ 00:18:49.106 END TEST nvme_hello_world 00:18:49.106 ************************************ 00:18:49.106 21:55:04 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:49.106 21:55:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:18:49.106 21:55:04 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:49.106 21:55:04 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:49.106 21:55:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.106 ************************************ 00:18:49.106 START TEST nvme_sgl 00:18:49.106 ************************************ 00:18:49.106 21:55:04 nvme.nvme_sgl -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:18:49.671 EAL: TSC is not safe to use in SMP mode 00:18:49.671 EAL: TSC is not invariant 00:18:49.671 [2024-07-15 21:55:04.694574] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:49.671 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:18:49.671 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:18:49.671 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:18:49.671 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:18:49.671 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:18:49.671 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:18:49.671 NVMe Readv/Writev Request test 00:18:49.671 Attaching to 0000:00:10.0 00:18:49.671 
Attached to 0000:00:10.0 00:18:49.671 0000:00:10.0: build_io_request_2 test passed 00:18:49.671 0000:00:10.0: build_io_request_4 test passed 00:18:49.671 0000:00:10.0: build_io_request_5 test passed 00:18:49.671 0000:00:10.0: build_io_request_6 test passed 00:18:49.671 0000:00:10.0: build_io_request_7 test passed 00:18:49.671 0000:00:10.0: build_io_request_10 test passed 00:18:49.671 Cleaning up... 00:18:49.671 00:18:49.671 real 0m0.560s 00:18:49.671 user 0m0.015s 00:18:49.671 sys 0m0.545s 00:18:49.671 21:55:04 nvme.nvme_sgl -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:49.671 21:55:04 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:18:49.671 ************************************ 00:18:49.671 END TEST nvme_sgl 00:18:49.671 ************************************ 00:18:49.671 21:55:04 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:49.671 21:55:04 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:18:49.671 21:55:04 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:49.671 21:55:04 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:49.671 21:55:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.671 ************************************ 00:18:49.671 START TEST nvme_e2edp 00:18:49.671 ************************************ 00:18:49.671 21:55:04 nvme.nvme_e2edp -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:18:50.238 EAL: TSC is not safe to use in SMP mode 00:18:50.238 EAL: TSC is not invariant 00:18:50.238 [2024-07-15 21:55:05.319807] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:50.238 NVMe Write/Read with End-to-End data protection test 00:18:50.238 Attaching to 0000:00:10.0 00:18:50.238 Attached to 0000:00:10.0 00:18:50.238 Cleaning up... 
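Every test in this session is driven by the same run_test() wrapper, which prints the starred START/END banners and the real/user/sys timing that recur throughout this log. A simplified, illustrative sketch of its shape follows; the real helper lives in test/common/autotest_common.sh and additionally manages xtrace and argument checks:

  run_test() {
      # first argument names the test; the rest is the command to execute
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }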
00:18:50.238 00:18:50.238 real 0m0.569s 00:18:50.238 user 0m0.024s 00:18:50.238 sys 0m0.544s 00:18:50.238 21:55:05 nvme.nvme_e2edp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:50.238 21:55:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:18:50.238 ************************************ 00:18:50.238 END TEST nvme_e2edp 00:18:50.238 ************************************ 00:18:50.238 21:55:05 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:50.238 21:55:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:18:50.238 21:55:05 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:50.238 21:55:05 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:50.238 21:55:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.238 ************************************ 00:18:50.238 START TEST nvme_reserve 00:18:50.238 ************************************ 00:18:50.238 21:55:05 nvme.nvme_reserve -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:18:50.809 EAL: TSC is not safe to use in SMP mode 00:18:50.809 EAL: TSC is not invariant 00:18:50.809 [2024-07-15 21:55:05.956779] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:51.067 ===================================================== 00:18:51.067 NVMe Controller at PCI bus 0, device 16, function 0 00:18:51.067 ===================================================== 00:18:51.067 Reservations: Not Supported 00:18:51.067 Reservation test passed 00:18:51.067 00:18:51.067 real 0m0.585s 00:18:51.067 user 0m0.016s 00:18:51.067 sys 0m0.568s 00:18:51.067 21:55:05 nvme.nvme_reserve -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:51.067 ************************************ 00:18:51.067 END TEST nvme_reserve 00:18:51.067 ************************************ 00:18:51.067 21:55:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 21:55:06 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:51.067 21:55:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:18:51.067 21:55:06 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:51.067 21:55:06 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:51.067 21:55:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 ************************************ 00:18:51.067 START TEST nvme_err_injection 00:18:51.067 ************************************ 00:18:51.067 21:55:06 nvme.nvme_err_injection -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:18:52.001 EAL: TSC is not safe to use in SMP mode 00:18:52.001 EAL: TSC is not invariant 00:18:52.001 [2024-07-15 21:55:06.830690] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:52.001 NVMe Error Injection test 00:18:52.001 Attaching to 0000:00:10.0 00:18:52.001 Attached to 0000:00:10.0 00:18:52.001 0000:00:10.0: get features failed as expected 00:18:52.001 0000:00:10.0: get features successfully as expected 00:18:52.001 0000:00:10.0: read failed as expected 00:18:52.001 0000:00:10.0: read successfully as expected 00:18:52.001 Cleaning up... 
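The nvme_err_injection test above drives fault injection from inside the test binary; the bdev_nvme_reset_stuck_adm_cmd test at the end of this log arms a similar fault over JSON-RPC instead. Its rpc_cmd calls correspond roughly to stand-alone invocations like the following sketch (controller name, BDF, and argument values mirror that test; decimal opcode 10 is the admin GET FEATURES command):

  # Attach the controller under the name nvme0, then arm a one-shot error
  # injection on admin opcode 0x0a that is held rather than submitted:
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit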
00:18:52.001 00:18:52.001 real 0m0.843s
00:18:52.001 user 0m0.017s
00:18:52.001 sys 0m0.826s
21:55:06 nvme.nvme_err_injection -- common/autotest_common.sh@1118 -- # xtrace_disable
************************************
END TEST nvme_err_injection
************************************
21:55:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
21:55:06 nvme -- common/autotest_common.sh@1136 -- # return 0
21:55:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
21:55:06 nvme -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']'
21:55:06 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable
21:55:06 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_overhead
************************************
21:55:06 nvme.nvme_overhead -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:18:52.568 EAL: TSC is not safe to use in SMP mode
00:18:52.568 EAL: TSC is not invariant
00:18:52.568 [2024-07-15 21:55:07.478204] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:18:53.505 Initializing NVMe Controllers
00:18:53.505 Attaching to 0000:00:10.0
00:18:53.505 Attached to 0000:00:10.0
00:18:53.505 Initialization complete. Launching workers.
00:18:53.505 submit (in ns) avg, min, max = 9741.1, 6569.1, 114813.3
00:18:53.505 complete (in ns) avg, min, max = 7256.4, 5308.2, 54221.9
00:18:53.505 Submit histogram
00:18:53.505 ================
00:18:53.505 Range in us Cumulative Count
00:18:53.505 [submit histogram buckets from 6.545us through 114.967us elided; cumulative count reaches 100.0000% ( 1) at 114.967us] 00:18:53.507
00:18:53.507 Complete histogram
00:18:53.507 ==================
00:18:53.507 Range in us Cumulative Count
00:18:53.507 [complete histogram buckets from 5.295us through 54.225us elided; cumulative count reaches 100.0000% ( 1) at 54.225us] 00:18:53.508
00:18:53.508 00:18:53.508 real 0m1.576s
00:18:53.508 user 0m1.007s
00:18:53.508 sys 0m0.568s
21:55:08 nvme.nvme_overhead -- common/autotest_common.sh@1118 -- # xtrace_disable
21:55:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_overhead
************************************
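The nvme_arbitration test launched next runs the arbitration example, which starts one submission thread per core against urgent-priority queues and prints the per-core IO/s lines seen below. A stand-alone invocation mirroring the command in the log (three-second run, shared-memory id 0, from a built SPDK tree):

  cd /home/vagrant/spdk_repo/spdk
  # -t 3: run for three seconds; -i 0: shared-memory instance id
  ./build/examples/arbitration -t 3 -i 0

21:55:08 nvme -- common/autotest_common.sh@1136 -- # return 0
21:55:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
21:55:08 nvme --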
common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:18:53.508 21:55:08 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:53.508 21:55:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.508 ************************************ 00:18:53.508 START TEST nvme_arbitration 00:18:53.508 ************************************ 00:18:53.508 21:55:08 nvme.nvme_arbitration -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:18:54.073 EAL: TSC is not safe to use in SMP mode 00:18:54.073 EAL: TSC is not invariant 00:18:54.073 [2024-07-15 21:55:09.084206] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:58.350 Initializing NVMe Controllers 00:18:58.350 Attaching to 0000:00:10.0 00:18:58.350 Attached to 0000:00:10.0 00:18:58.350 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:18:58.350 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:18:58.350 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:18:58.350 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:18:58.350 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:18:58.350 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:18:58.350 Initialization complete. Launching workers. 00:18:58.350 Starting thread on core 1 with urgent priority queue 00:18:58.350 Starting thread on core 2 with urgent priority queue 00:18:58.350 Starting thread on core 3 with urgent priority queue 00:18:58.350 Starting thread on core 0 with urgent priority queue 00:18:58.350 QEMU NVMe Ctrl (12340 ) core 0: 7030.67 IO/s 14.22 secs/100000 ios 00:18:58.350 QEMU NVMe Ctrl (12340 ) core 1: 7050.33 IO/s 14.18 secs/100000 ios 00:18:58.350 QEMU NVMe Ctrl (12340 ) core 2: 7027.00 IO/s 14.23 secs/100000 ios 00:18:58.350 QEMU NVMe Ctrl (12340 ) core 3: 7073.00 IO/s 14.14 secs/100000 ios 00:18:58.350 ======================================================== 00:18:58.350 00:18:58.350 00:18:58.350 real 0m4.360s 00:18:58.350 user 0m12.803s 00:18:58.350 sys 0m0.582s 00:18:58.350 21:55:12 nvme.nvme_arbitration -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:58.350 21:55:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 ************************************ 00:18:58.350 END TEST nvme_arbitration 00:18:58.350 ************************************ 00:18:58.350 21:55:12 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:58.350 21:55:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:18:58.350 21:55:12 nvme -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:18:58.350 21:55:12 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:58.350 21:55:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.350 ************************************ 00:18:58.350 START TEST nvme_single_aen 00:18:58.350 ************************************ 00:18:58.350 21:55:12 nvme.nvme_single_aen -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:18:58.350 EAL: TSC is not safe to use in SMP mode 00:18:58.350 EAL: TSC is not invariant 00:18:58.350 [2024-07-15 21:55:13.478007] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:58.607 Asynchronous Event Request test 00:18:58.607 Attaching to 0000:00:10.0 00:18:58.607 Attached to 0000:00:10.0 00:18:58.607 Reset 
controller to setup AER completions for this process 00:18:58.607 Registering asynchronous event callbacks... 00:18:58.607 Getting orig temperature thresholds of all controllers 00:18:58.607 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:58.607 Setting all controllers temperature threshold low to trigger AER 00:18:58.607 Waiting for all controllers temperature threshold to be set lower 00:18:58.607 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:58.607 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:18:58.607 Waiting for all controllers to trigger AER and reset threshold 00:18:58.607 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:58.607 Cleaning up... 00:18:58.607 00:18:58.607 real 0m0.574s 00:18:58.607 user 0m0.028s 00:18:58.607 sys 0m0.545s 00:18:58.607 21:55:13 nvme.nvme_single_aen -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:58.607 21:55:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:18:58.607 ************************************ 00:18:58.607 END TEST nvme_single_aen 00:18:58.607 ************************************ 00:18:58.607 21:55:13 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:58.607 21:55:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:18:58.607 21:55:13 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:58.607 21:55:13 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:58.607 21:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.607 ************************************ 00:18:58.607 START TEST nvme_doorbell_aers 00:18:58.607 ************************************ 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1117 -- # nvme_doorbell_aers 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1507 -- # bdfs=() 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1507 -- # local bdfs 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:00:10.0 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:18:58.607 21:55:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:59.172 EAL: TSC is not safe to use in SMP mode 00:18:59.172 EAL: TSC is not invariant 00:18:59.172 [2024-07-15 21:55:14.141527] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:59.172 Executing: test_write_invalid_db 
00:18:59.172 Waiting for AER completion... 00:18:59.172 Asynchronous Event received. 00:18:59.172 Error Informaton Log Page received. 00:18:59.172 Success: test_write_invalid_db 00:18:59.172 00:18:59.172 Executing: test_invalid_db_write_overflow_sq 00:18:59.172 Waiting for AER completion... 00:18:59.172 Asynchronous Event received. 00:18:59.172 Error Informaton Log Page received. 00:18:59.172 Success: test_invalid_db_write_overflow_sq 00:18:59.172 00:18:59.172 Executing: test_invalid_db_write_overflow_cq 00:18:59.172 Waiting for AER completion... 00:18:59.172 Asynchronous Event received. 00:18:59.172 Error Informaton Log Page received. 00:18:59.172 Success: test_invalid_db_write_overflow_cq 00:18:59.172 00:18:59.172 00:18:59.172 real 0m0.607s 00:18:59.172 user 0m0.028s 00:18:59.172 sys 0m0.593s 00:18:59.172 21:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:59.172 21:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:18:59.172 ************************************ 00:18:59.172 END TEST nvme_doorbell_aers 00:18:59.172 ************************************ 00:18:59.172 21:55:14 nvme -- common/autotest_common.sh@1136 -- # return 0 00:18:59.172 21:55:14 nvme -- nvme/nvme.sh@97 -- # uname 00:18:59.172 21:55:14 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:18:59.172 21:55:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:18:59.172 21:55:14 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:18:59.172 21:55:14 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:59.172 21:55:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:59.172 ************************************ 00:18:59.172 START TEST bdev_nvme_reset_stuck_adm_cmd 00:18:59.172 ************************************ 00:18:59.172 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:18:59.172 * Looking for test storage... 
00:18:59.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1518 -- # bdfs=() 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1518 -- # local bdfs 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:18:59.430 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:00:10.0 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # echo 0000:00:10.0 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=68906 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 68906 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@823 -- # '[' -z 68906 ']' 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:59.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:59.431 21:55:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:59.431 [2024-07-15 21:55:14.407123] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:18:59.431 [2024-07-15 21:55:14.407303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:59.998 EAL: TSC is not safe to use in SMP mode 00:18:59.998 EAL: TSC is not invariant 00:18:59.998 [2024-07-15 21:55:14.929756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.998 [2024-07-15 21:55:15.007365] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:59.998 [2024-07-15 21:55:15.007435] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:59.998 [2024-07-15 21:55:15.007459] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:18:59.998 [2024-07-15 21:55:15.007466] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:18:59.998 [2024-07-15 21:55:15.011841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.998 [2024-07-15 21:55:15.011637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.998 [2024-07-15 21:55:15.011750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.998 [2024-07-15 21:55:15.011838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.256 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:00.256 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # return 0 00:19:00.256 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:19:00.256 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.256 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:00.257 [2024-07-15 21:55:15.385768] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:00.257 nvme0n1 00:19:00.257 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:00.516 true 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:00.516 21:55:15 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721080515 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=68918 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:00.516 21:55:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 [2024-07-15 21:55:17.478317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:02.420 [2024-07-15 21:55:17.478514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.420 [2024-07-15 21:55:17.478531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:02.420 [2024-07-15 21:55:17.478541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.420 [2024-07-15 21:55:17.479824] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
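The reset above recovers from an admin command that was deliberately wedged: the test injects a one-shot failure into GET FEATURES (--opc 10, --sct 0 --sc 1, held via --do_not_submit for up to 15 s), fires the command through bdev_nvme_send_cmd, resets the controller, and then inspects the completion saved as a base64 blob in /tmp/err_inj_XXXXX.txt. The xtrace below decodes that blob with the harness's base64_decode_bits helper; as a hedged standalone illustration only (assuming the standard 16-byte NVMe CQE, whose status halfword sits in bytes 14-15 with the phase tag in bit 0, SC in bits 8:1 and SCT in bits 11:9, and a base64 tool that accepts -d), an equivalent decode is:

    cpl=AAAAAAAAAAAAAAAAAAACAA==                           # .cpl value captured below
    bytes=($(printf %s "$cpl" | base64 -d | od -An -tu1))  # 16 CQE bytes as decimals
    status=$(( bytes[14] | (bytes[15] << 8) ))             # status halfword, little-endian -> 0x0002
    sc=$((  (status >> 1) & 0xff ))                        # Status Code      -> 0x1
    sct=$(( (status >> 9) & 0x7 ))                         # Status Code Type -> 0x0
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

The recovered sc=0x1/sct=0x0 pair matches the err_injection_sc/err_injection_sct knobs set at the start of the test, which is exactly what the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) assertion at the end checks.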
00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.420 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 68918 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 68918 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 68918 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:02.420 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.1QSbAi 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.nhrwgY 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 68906 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@942 -- # '[' -z 68906 ']' 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # kill -0 68906 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@947 -- # uname 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # ps -c -o command 68906 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # tail -1 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:19:02.421 killing process with pid 68906 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # echo 'killing process with pid 68906' 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@961 -- # kill 68906 00:19:02.421 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # wait 68906 00:19:02.679 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:19:02.679 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:19:02.679 00:19:02.679 real 0m3.572s 00:19:02.679 user 0m11.577s 00:19:02.679 sys 0m0.848s 00:19:02.679 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:02.679 21:55:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:02.679 ************************************ 00:19:02.679 END TEST bdev_nvme_reset_stuck_adm_cmd 00:19:02.679 ************************************ 00:19:02.679 21:55:17 nvme -- common/autotest_common.sh@1136 -- # return 0 00:19:02.679 21:55:17 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:19:02.679 21:55:17 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:19:02.679 21:55:17 nvme -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:02.679 21:55:17 nvme -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:02.679 21:55:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.679 ************************************ 00:19:02.679 START TEST nvme_fio 00:19:02.679 ************************************ 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1117 -- # nvme_fio_test 00:19:02.679 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:02.679 21:55:17 
nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:19:02.679 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1507 -- # bdfs=() 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1507 -- # local bdfs 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:02.679 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:19:02.938 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:19:02.938 21:55:17 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:00:10.0 00:19:02.938 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:19:02.938 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:19:02.938 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:19:02.938 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:02.938 21:55:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:19:03.506 EAL: TSC is not safe to use in SMP mode 00:19:03.506 EAL: TSC is not invariant 00:19:03.506 [2024-07-15 21:55:18.422996] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:03.506 21:55:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:03.506 21:55:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:19:04.074 EAL: TSC is not safe to use in SMP mode 00:19:04.074 EAL: TSC is not invariant 00:19:04.074 [2024-07-15 21:55:18.998605] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:04.074 21:55:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:19:04.074 21:55:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local sanitizers 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1334 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # shift 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local asan_lib= 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # grep libasan 00:19:04.074 21:55:19 nvme.nvme_fio -- 
common/autotest_common.sh@1339 -- # awk '{print $3}' 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # asan_lib= 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # asan_lib= 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:04.074 21:55:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:04.074 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:04.074 fio-3.35 00:19:04.074 Starting 1 thread 00:19:04.642 EAL: TSC is not safe to use in SMP mode 00:19:04.642 EAL: TSC is not invariant 00:19:04.642 [2024-07-15 21:55:19.646064] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:07.219 00:19:07.219 test: (groupid=0, jobs=1): err= 0: pid=101537: Mon Jul 15 21:55:22 2024 00:19:07.219 read: IOPS=41.9k, BW=164MiB/s (172MB/s)(327MiB/2001msec) 00:19:07.219 slat (nsec): min=378, max=29377, avg=672.01, stdev=1388.45 00:19:07.219 clat (usec): min=298, max=4788, avg=1527.71, stdev=307.00 00:19:07.219 lat (usec): min=298, max=4789, avg=1528.38, stdev=307.03 00:19:07.219 clat percentiles (usec): 00:19:07.219 | 1.00th=[ 562], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1319], 00:19:07.219 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1565], 00:19:07.219 | 70.00th=[ 1631], 80.00th=[ 1713], 90.00th=[ 1827], 95.00th=[ 1991], 00:19:07.219 | 99.00th=[ 2540], 99.50th=[ 2769], 99.90th=[ 3228], 99.95th=[ 4424], 00:19:07.219 | 99.99th=[ 4752] 00:19:07.219 bw ( KiB/s): min=161984, max=172712, per=99.77%, avg=167173.33, stdev=5372.52, samples=3 00:19:07.219 iops : min=40496, max=43178, avg=41793.33, stdev=1343.13, samples=3 00:19:07.219 write: IOPS=41.7k, BW=163MiB/s (171MB/s)(326MiB/2001msec); 0 zone resets 00:19:07.219 slat (nsec): min=401, max=44576, avg=994.44, stdev=2086.64 00:19:07.219 clat (usec): min=301, max=4812, avg=1529.07, stdev=308.36 00:19:07.219 lat (usec): min=302, max=4813, avg=1530.07, stdev=308.39 00:19:07.219 clat percentiles (usec): 00:19:07.219 | 1.00th=[ 553], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1319], 00:19:07.219 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1582], 00:19:07.219 | 70.00th=[ 1647], 80.00th=[ 1713], 90.00th=[ 1827], 95.00th=[ 1991], 00:19:07.219 | 99.00th=[ 2540], 99.50th=[ 2737], 99.90th=[ 3294], 99.95th=[ 4490], 00:19:07.219 | 99.99th=[ 4686] 00:19:07.219 bw ( KiB/s): min=161032, max=171504, per=99.73%, avg=166482.67, stdev=5249.18, samples=3 00:19:07.219 iops : min=40258, max=42876, avg=41620.67, stdev=1312.30, samples=3 00:19:07.219 lat (usec) : 500=0.65%, 750=1.21%, 1000=0.83% 00:19:07.219 lat (msec) : 2=92.48%, 4=4.75%, 10=0.08% 00:19:07.219 cpu : usr=99.95%, 
sys=0.00%, ctx=22, majf=0, minf=2 00:19:07.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:07.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.219 issued rwts: total=83821,83511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.219 00:19:07.219 Run status group 0 (all jobs): 00:19:07.219 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=327MiB (343MB), run=2001-2001msec 00:19:07.219 WRITE: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=326MiB (342MB), run=2001-2001msec 00:19:07.787 21:55:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:19:07.787 21:55:22 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:19:07.787 00:19:07.787 real 0m5.036s 00:19:07.787 user 0m2.656s 00:19:07.787 sys 0m2.295s 00:19:07.787 21:55:22 nvme.nvme_fio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:07.787 ************************************ 00:19:07.787 END TEST nvme_fio 00:19:07.787 ************************************ 00:19:07.787 21:55:22 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:19:07.787 21:55:22 nvme -- common/autotest_common.sh@1136 -- # return 0 00:19:07.787 00:19:07.787 real 0m25.588s 00:19:07.787 user 0m31.137s 00:19:07.787 sys 0m12.109s 00:19:07.787 ************************************ 00:19:07.787 21:55:22 nvme -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:07.787 21:55:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.787 END TEST nvme 00:19:07.787 ************************************ 00:19:07.787 21:55:22 -- common/autotest_common.sh@1136 -- # return 0 00:19:07.787 21:55:22 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:19:07.787 21:55:22 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:07.787 21:55:22 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:07.787 21:55:22 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:07.787 21:55:22 -- common/autotest_common.sh@10 -- # set +x 00:19:08.046 ************************************ 00:19:08.046 START TEST nvme_scc 00:19:08.046 ************************************ 00:19:08.046 21:55:22 nvme_scc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:08.046 * Looking for test storage... 
00:19:08.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:08.047 21:55:23 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.047 21:55:23 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.047 21:55:23 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.047 21:55:23 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.047 21:55:23 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:08.047 21:55:23 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:08.047 21:55:23 nvme_scc -- paths/export.sh@4 -- # export PATH 00:19:08.047 21:55:23 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:08.047 21:55:23 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:19:08.047 21:55:23 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.047 21:55:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:19:08.047 21:55:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:19:08.047 21:55:23 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:19:08.047 00:19:08.047 real 0m0.141s 00:19:08.047 user 0m0.107s 00:19:08.047 sys 0m0.058s 00:19:08.047 21:55:23 nvme_scc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:08.047 21:55:23 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:08.047 ************************************ 00:19:08.047 END TEST nvme_scc 00:19:08.047 ************************************ 00:19:08.047 21:55:23 -- common/autotest_common.sh@1136 -- # return 0 00:19:08.047 21:55:23 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:19:08.047 21:55:23 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:19:08.047 21:55:23 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:19:08.047 21:55:23 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:19:08.047 21:55:23 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:19:08.047 21:55:23 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:08.047 21:55:23 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:08.047 21:55:23 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:08.047 21:55:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.047 ************************************ 00:19:08.047 START TEST nvme_rpc 00:19:08.047 ************************************ 00:19:08.047 21:55:23 nvme_rpc -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:08.305 * Looking for test storage... 00:19:08.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1518 -- # bdfs=() 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1518 -- # local bdfs 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1508 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:00:10.0 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@1521 -- # echo 0000:00:10.0 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69156 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:08.305 21:55:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69156 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@823 -- # '[' -z 69156 ']' 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:08.305 21:55:23 nvme_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.306 21:55:23 nvme_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:08.306 21:55:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.306 [2024-07-15 21:55:23.358007] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:19:08.306 [2024-07-15 21:55:23.358236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:08.871 EAL: TSC is not safe to use in SMP mode 00:19:08.871 EAL: TSC is not invariant 00:19:08.871 [2024-07-15 21:55:23.883139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:08.871 [2024-07-15 21:55:23.973190] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:08.871 [2024-07-15 21:55:23.973257] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:08.871 [2024-07-15 21:55:23.976681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.871 [2024-07-15 21:55:23.976670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.439 21:55:24 nvme_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:09.439 21:55:24 nvme_rpc -- common/autotest_common.sh@856 -- # return 0 00:19:09.439 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:09.698 [2024-07-15 21:55:24.694779] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:09.698 Nvme0n1 00:19:09.698 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:09.698 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:09.957 request: 00:19:09.957 { 00:19:09.957 "bdev_name": "Nvme0n1", 00:19:09.957 "filename": "non_existing_file", 00:19:09.957 "method": "bdev_nvme_apply_firmware", 00:19:09.957 "req_id": 1 00:19:09.957 } 00:19:09.957 Got JSON-RPC error response 00:19:09.957 response: 00:19:09.957 { 00:19:09.957 "code": -32603, 00:19:09.957 "message": "open file failed." 
00:19:09.957 } 00:19:09.957 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:09.957 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:09.957 21:55:24 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:10.216 21:55:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:10.216 21:55:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69156 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@942 -- # '[' -z 69156 ']' 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@946 -- # kill -0 69156 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@947 -- # uname 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@950 -- # ps -c -o command 69156 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@950 -- # tail -1 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:19:10.216 killing process with pid 69156 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 69156' 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@961 -- # kill 69156 00:19:10.216 21:55:25 nvme_rpc -- common/autotest_common.sh@966 -- # wait 69156 00:19:10.475 00:19:10.475 real 0m2.276s 00:19:10.475 user 0m4.180s 00:19:10.475 sys 0m0.787s 00:19:10.475 21:55:25 nvme_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:10.475 21:55:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 ************************************ 00:19:10.475 END TEST nvme_rpc 00:19:10.475 ************************************ 00:19:10.475 21:55:25 -- common/autotest_common.sh@1136 -- # return 0 00:19:10.475 21:55:25 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:10.475 21:55:25 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:10.475 21:55:25 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:10.475 21:55:25 -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 ************************************ 00:19:10.475 START TEST nvme_rpc_timeouts 00:19:10.475 ************************************ 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@1117 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:10.475 * Looking for test storage... 
00:19:10.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69197 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69197 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69225 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69225 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@823 -- # '[' -z 69225 ']' 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:10.475 21:55:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:10.475 21:55:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:10.475 [2024-07-15 21:55:25.646632] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:19:10.475 [2024-07-15 21:55:25.646857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:11.043 EAL: TSC is not safe to use in SMP mode 00:19:11.043 EAL: TSC is not invariant 00:19:11.043 [2024-07-15 21:55:26.180540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:11.302 [2024-07-15 21:55:26.254910] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:11.302 [2024-07-15 21:55:26.254975] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:19:11.302 [2024-07-15 21:55:26.257777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.302 [2024-07-15 21:55:26.257767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.561 21:55:26 nvme_rpc_timeouts -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:11.561 21:55:26 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # return 0 00:19:11.561 Checking default timeout settings: 00:19:11.561 21:55:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:11.561 21:55:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:11.820 Making settings changes with rpc: 00:19:11.820 21:55:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:11.820 21:55:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:12.079 Check default vs. modified settings: 00:19:12.079 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:12.079 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:12.339 Setting action_on_timeout is changed as expected. 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:12.339 Setting timeout_us is changed as expected. 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:12.339 Setting timeout_admin_us is changed as expected. 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
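All three knobs flip from their defaults here (action_on_timeout none -> abort, timeout_us 0 -> 12000000, timeout_admin_us 0 -> 24000000). Condensed into one place, the check the xtrace above steps through amounts to the sketch below, reusing the script's own grep/awk/sed chain against the /tmp/settings_*_69197 pair saved by save_config; the failure branch is an assumption, since this run only exercises the success path:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_69197 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_69197 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$before" == "$after" ]; then
        echo "ERROR: setting $setting did not change (still $before)"  # assumed failure handling
        exit 1
      fi
      echo "Setting $setting is changed as expected."
    done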
00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69197 /tmp/settings_modified_69197 00:19:12.339 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69225 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@942 -- # '[' -z 69225 ']' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # kill -0 69225 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@947 -- # uname 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@947 -- # '[' FreeBSD = Linux ']' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # ps -c -o command 69225 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # tail -1 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # process_name=spdk_tgt 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' spdk_tgt = sudo ']' 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # echo 'killing process with pid 69225' 00:19:12.339 killing process with pid 69225 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@961 -- # kill 69225 00:19:12.339 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # wait 69225 00:19:12.598 RPC TIMEOUT SETTING TEST PASSED. 00:19:12.598 21:55:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:12.598 00:19:12.598 real 0m2.273s 00:19:12.598 user 0m4.135s 00:19:12.598 sys 0m0.841s 00:19:12.598 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:12.598 ************************************ 00:19:12.598 END TEST nvme_rpc_timeouts 00:19:12.598 ************************************ 00:19:12.598 21:55:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:12.857 21:55:27 -- common/autotest_common.sh@1136 -- # return 0 00:19:12.857 21:55:27 -- spdk/autotest.sh@243 -- # uname -s 00:19:12.857 21:55:27 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:19:12.857 21:55:27 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:19:12.857 21:55:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:12.857 21:55:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:12.857 21:55:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.857 21:55:27 -- common/autotest_common.sh@10 -- # set +x 00:19:12.857 21:55:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:12.857 21:55:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:12.857 21:55:27 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:12.857 21:55:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:19:12.858 21:55:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:19:12.858 21:55:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:19:12.858 21:55:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:19:12.858 21:55:27 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:19:12.858 21:55:27 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:19:12.858 21:55:27 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:19:12.858 21:55:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:12.858 21:55:27 -- common/autotest_common.sh@10 -- # set +x 00:19:12.858 21:55:27 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:19:12.858 21:55:27 -- common/autotest_common.sh@1386 -- # local autotest_es=0 00:19:12.858 21:55:27 -- common/autotest_common.sh@1387 -- # xtrace_disable 00:19:12.858 21:55:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.425 setup.sh cleanup function not yet supported on FreeBSD 00:19:13.425 21:55:28 -- common/autotest_common.sh@1445 -- # return 0 00:19:13.425 21:55:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:19:13.425 21:55:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.425 21:55:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.425 21:55:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:19:13.425 21:55:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.425 21:55:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.425 21:55:28 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:13.425 21:55:28 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:13.425 21:55:28 -- spdk/autotest.sh@391 -- # hash lcov 00:19:13.425 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:19:13.685 21:55:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.685 21:55:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:13.685 21:55:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.685 21:55:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.685 21:55:28 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:13.685 21:55:28 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:13.685 21:55:28 -- paths/export.sh@4 -- $ export PATH 00:19:13.685 21:55:28 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:13.685 21:55:28 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:13.685 21:55:28 -- common/autobuild_common.sh@444 -- $ date +%s 00:19:13.685 21:55:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080528.XXXXXX 00:19:13.685 21:55:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080528.XXXXXX.25DRDEiA6q 00:19:13.685 21:55:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:19:13.685 21:55:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:19:13.685 21:55:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:13.685 21:55:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:13.685 21:55:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:13.685 21:55:28 -- common/autobuild_common.sh@460 -- $ get_config_params 00:19:13.685 21:55:28 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:19:13.685 21:55:28 -- common/autotest_common.sh@10 -- $ set +x 00:19:13.685 21:55:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:13.685 21:55:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:19:13.685 21:55:28 -- pm/common@17 -- $ local monitor 00:19:13.685 21:55:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:13.685 21:55:28 -- pm/common@25 -- $ sleep 1 00:19:13.685 21:55:28 -- pm/common@21 -- $ date +%s 00:19:13.685 21:55:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721080528 00:19:13.685 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721080528_collect-vmstat.pm.log 00:19:15.062 21:55:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:19:15.062 21:55:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:15.062 21:55:29 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:15.062 21:55:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:15.062 21:55:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:15.062 21:55:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:15.062 21:55:29 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:15.062 21:55:29 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:15.062 21:55:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:15.062 21:55:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:15.062 21:55:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:15.062 21:55:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:15.062 21:55:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:15.062 21:55:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:15.062 21:55:29 -- pm/common@44 -- $ pid=69444 00:19:15.062 21:55:29 -- pm/common@50 -- $ kill -TERM 69444 00:19:15.062 + [[ -n 1231 ]] 00:19:15.062 + sudo kill 1231 00:19:16.006 [Pipeline] } 00:19:16.028 [Pipeline] // timeout 00:19:16.034 [Pipeline] } 00:19:16.054 [Pipeline] // stage 00:19:16.060 [Pipeline] } 00:19:16.080 [Pipeline] // catchError 00:19:16.090 [Pipeline] stage 00:19:16.093 [Pipeline] { (Stop VM) 00:19:16.109 [Pipeline] sh 00:19:16.388 + vagrant halt 00:19:19.668 ==> default: Halting domain... 00:19:41.616 [Pipeline] sh 00:19:41.895 + vagrant destroy -f 00:19:46.083 ==> default: Removing domain... 
00:19:46.096 [Pipeline] sh 00:19:46.378 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output 00:19:46.387 [Pipeline] } 00:19:46.407 [Pipeline] // stage 00:19:46.413 [Pipeline] } 00:19:46.431 [Pipeline] // dir 00:19:46.437 [Pipeline] } 00:19:46.461 [Pipeline] // wrap 00:19:46.469 [Pipeline] } 00:19:46.488 [Pipeline] // catchError 00:19:46.520 [Pipeline] stage 00:19:46.523 [Pipeline] { (Epilogue) 00:19:46.537 [Pipeline] sh 00:19:46.817 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:46.829 [Pipeline] catchError 00:19:46.831 [Pipeline] { 00:19:46.844 [Pipeline] sh 00:19:47.125 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:47.125 Artifacts sizes are good 00:19:47.134 [Pipeline] } 00:19:47.151 [Pipeline] // catchError 00:19:47.162 [Pipeline] archiveArtifacts 00:19:47.169 Archiving artifacts 00:19:47.212 [Pipeline] cleanWs 00:19:47.220 [WS-CLEANUP] Deleting project workspace... 00:19:47.220 [WS-CLEANUP] Deferred wipeout is used... 00:19:47.226 [WS-CLEANUP] done 00:19:47.228 [Pipeline] } 00:19:47.247 [Pipeline] // stage 00:19:47.252 [Pipeline] } 00:19:47.267 [Pipeline] // node 00:19:47.270 [Pipeline] End of Pipeline 00:19:47.298 Finished: SUCCESS